Laravel pseudo daemons

May 17, 2020

You may already know that a daemon is simply a process that runs in the background instead of one that's under the direct control of a user. In Laravel, the command you're probably most familiar with running as a daemon is php artisan horizon, which starts the Horizon master process that spawns all the child workers. The Laravel docs give you great instructions on how to set up Supervisor to run this process so that if it ever crashes it automatically gets restarted, which is pretty important if you want your queued jobs to keep processing.

If you'd prefer, you can jump straight to the code here: https://github.com/aarondfrancis/laravel-pseudo-daemon.

In my job at Resolute, we process a whole lot of what we call "intake forms". These forms contain all the necessary information for a client to sign up with us: property addresses, contact information, internal options, etc. Once a form is submitted, it works its way through a series of finite states:

  • Draft – the employee hasn't finished the form, but has saved it for later.
  • Submitted – the employee has submitted it and it's ready to be picked up and processed.
  • Validating – validation is occurring.
  • Processing – properties, contacts, and e-signature documents are being created.
  • Resolving – the form has run into a known error, and the system is trying to resolve the issue automatically.
  • Intervention Required – there has been some kind of error that can't be auto-resolved and a human needs to take a look at it.
  • Processed – everything is done, the form won't be looked at again.
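To make the state machine concrete, here's a rough sketch of those states as plain class constants backed by a `state` column on the model. The constant names and string values are my own illustration; the post doesn't show the actual implementation.

```php
<?php

// Hypothetical sketch: the finite states an intake form moves through,
// stored in a `state` column on the model. Names/values are illustrative.
class IntakeState
{
    const DRAFT = 'draft';
    const SUBMITTED = 'submitted';
    const VALIDATING = 'validating';
    const PROCESSING = 'processing';
    const RESOLVING = 'resolving';
    const INTERVENTION_REQUIRED = 'intervention_required';
    const PROCESSED = 'processed';

    // A form is "done" only once it's fully processed.
    public static function isFinal($state)
    {
        return $state === self::PROCESSED;
    }
}
```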

Using Jobs

We used to dispatch jobs to handle each of these states, e.g. Intake\Validate, Intake\Process, Intake\Resolve etc. As you can imagine, making sure that all the correct jobs were dispatched in all the right scenarios got to be a bit of a chore. A form could bounce from validating to resolving back to validating and then to processing or intervention required.

The job dispatching coordination has to be perfect here; otherwise you could end up with forms that are stuck in a particular state. Failed jobs also become a much bigger deal than they would otherwise be. If one of the jobs failed due to some transient error (which happens more than never!), then a form could again be stuck in a non-final state.

Using Commands

Coordinating via dispatching jobs became unwieldy pretty quickly, so we moved away from that to using a separate command for each state: intake:validate, intake:process, intake:resolve, etc. This was much easier for me to reason about. We keep a state column on the model and each command picks up the forms it is supposed to deal with. If a transient error occurs it's not a big deal, because that form will get picked up the next time the command runs!
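As an illustration, the "picks up the forms it is supposed to deal with" step might be a simple query scope on the model. The scope name mirrors the readyForValidation() call shown later in the post, but the state value here is my own assumption:

```php
// On the intake form model: a hypothetical scope each command uses
// to pick up only the forms in the state it's responsible for.
public function scopeReadyForValidation($query)
{
    // 'submitted' is an assumed state value for illustration.
    return $query->where('state', 'submitted');
}
```

Each command then just queries its scope, does its work, and moves the form to the next state.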

We set up our schedule like this:

$schedule->command('intake:validate')->everyMinute()
    ->runInBackground()
    ->withoutOverlapping();

$schedule->command('intake:process')->everyMinute()
    ->runInBackground()
    ->withoutOverlapping();

$schedule->command('intake:resolve')->everyMinute()
    ->runInBackground()
    ->withoutOverlapping();

We made sure that withoutOverlapping was on, so that there wasn't a chance that the same form could get picked up twice.

This setup worked much much better but led to a bit of an annoying problem. If a single form needs to get:

  • validated
  • auto-resolved
  • validated again
  • finally processed

it could take up to 6 or so minutes from the time a user hits "submit" until the form is finally done processing. Each command runs at the beginning of each minute, so the form will need to wait around for up to a minute to get validated, another minute to get auto-resolved, etc.

6 minutes may not sound like a long time, but when the employee is on the phone with the client, 6 minutes might as well be a thousand years.

Running each command at the top of the minute just wasn't going to work.

Daemons?

Laravel goes to great lengths to make developers' lives easier at every layer of the stack.

Horizon pushed all of the queue configuration back into your code. If you wanted to change the number of workers or the queues or anything, all you had to do was change that setting in your code. Once you got the main horizon command running, you were home free.

In the same way, using Laravel's scheduler is an unbelievably nice layer on top of cron, making it so you never have to muck around with server configuration after you set up that first schedule:run entry.

Even with Horizon, you had to set up the Supervisor entry and make sure you remembered to terminate it as part of your deploy script. Not a big deal for one daemon. (Especially not on Forge.)

Also, the Horizon daemon is well understood. Most Laravel developers know that horizon is kept alive by Supervisor. But do most developers know that intake:process is a daemon? Definitely not. I didn't want to go down the road of having some commands run from the scheduler and having some run opaquely from the Forge UI.

The thought of adding each new daemon in Forge, making sure it was killed on deploy, and communicating to the team which commands were daemons sounded like something I didn't want to take on.

Pseudo-Daemons!

Enter the pseudo-daemon. The... kind-of daemon. The almost daemon!

Besides being hysterically hard to spell, a pseudo-daemon is just a Laravel command that uses the IsPseudoDaemon trait, which gives you a runAsPseudoDaemon method that will continually keep a process method alive.

Let's take a look:

<?php

class Validate extends Command
{
    use IsPseudoDaemon;

    protected $signature = 'intake:validate';

    public function handle()
    {
        $this->runAsPseudoDaemon();
    }

    /**
     * This is the main method that will be kept alive.
     */
    public function process()
    {
        $forms = IntakeForms::readyForValidation()->get();

        foreach ($forms as $form) {
            // Validate the form...
        }
    }
}

The process method will be kept alive for as long as you want, all controlled by your code without any Supervisor configuration, and without having to change your deploy scripts to kill it!

I decided not to try to take over the handle method itself, because that signature varies from command to command if you are using Laravel's dependency injection there. It's also easier to understand for most Laravel developers, since we all expect to see a handle method in our commands.
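To show the mechanics, here's a minimal, self-contained sketch of what a loop like runAsPseudoDaemon might look like. This is my own simplified illustration, not the real trait (the repo linked above handles run counts, sleeping, and more); the stand-in restartWhenChanged() and the stop-after-three-runs logic exist only so the sketch terminates.

```php
<?php

// Illustrative only: mirrors the trait's API shape, not its implementation.
class PseudoDaemonControl
{
    const STOP = 'stop';
}

class SketchDaemon
{
    public $runs = 0;

    // Stand-in for the real restartWhenChanged() (e.g. a symlink target).
    public function restartWhenChanged()
    {
        return 'release-1';
    }

    public function process()
    {
        $this->runs++;

        // Stop after three runs so this sketch terminates.
        return $this->runs >= 3 ? PseudoDaemonControl::STOP : null;
    }

    public function runAsPseudoDaemon()
    {
        // Capture the restart token before looping.
        $token = $this->restartWhenChanged();

        while (true) {
            if ($this->process() === PseudoDaemonControl::STOP) {
                break; // process() asked us to stop
            }

            if ($this->restartWhenChanged() !== $token) {
                break; // new code deployed; let the scheduler restart us
            }

            // The real trait sleeps between invocations; skipped here.
        }
    }
}
```

The key idea is that the loop, not Supervisor, decides when to exit, and the scheduler brings the command back up within a minute.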

Starting a Pseudo-Daemon

Let's start by taking a look at how you start a pseudo-daemon. It's incredibly simple: you add it to your Console\Kernel:

$schedule->command('intake:validate')->everyMinute()
    ->runInBackground()
    ->withoutOverlapping();

This will instruct Laravel to try to run this command every minute, to place it in the background, and to not start another one until the first one is finished. That last part is crucial, since we'll be keeping it alive for much longer than one minute.

Every time a new minute rolls around Laravel will check to see if this command is running. If it is, it won't do anything. If it's not running, it will start it. Laravel handles that whole part for us, out of the box, for free.

Stopping a Pseudo-Daemon

Just like a regular daemon, a pseudo-daemon will need to be stopped anytime new code is deployed, otherwise the old process will keep on running with old code, leading to potentially undesirable outcomes.

If you're using Laravel Forge with Envoyer, the trait will automatically handle stopping itself whenever you deploy fresh code. You don't have to do a single thing!

Because Envoyer uses symlinks to deploy your code with zero-downtime, the IsPseudoDaemon trait can look to see where the current symlink is pointing. Whenever the symlink changes, that means new code has been deployed and the daemon should die.

This is what comes out of the box:

// IsPseudoDaemon.php

public function restartWhenChanged()
{
    return $this->currentForgeEnvoyerRelease();
}

public function currentForgeEnvoyerRelease()
{
    $pwd = trim(shell_exec('pwd'));

    if (Str::startsWith($pwd, '/home/forge/') && Str::endsWith($pwd, '/current')) {
        return shell_exec('readlink ' . escapeshellarg($pwd));
    }
}

If you're not on Forge with Envoyer, you can override the restartWhenChanged() method and return whatever you want. You can read a git hash, a build time, or anything else. Anytime the returned value changes, the loop breaks.

public function restartWhenChanged()
{
    // Restart whenever the git hash changes.
    // https://stackoverflow.com/a/949391/1408651
    return shell_exec('git rev-parse HEAD');
}

There may be other times when you want or need to stop the daemon, and you can do that by simply returning the constant PseudoDaemonControl::STOP from your process method.
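For example, a sketch building on the process method from earlier. The maintenance-mode check is my own hypothetical scenario (isDownForMaintenance() is a standard Laravel helper):

```php
public function process()
{
    // Hypothetical: stop the daemon entirely while the app is in
    // maintenance mode, and let the scheduler start a fresh one later.
    if (app()->isDownForMaintenance()) {
        return PseudoDaemonControl::STOP;
    }

    $forms = IntakeForms::readyForValidation()->get();

    foreach ($forms as $form) {
        // Validate the form...
    }
}
```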

I've also provided restartAfterNumberOfTimesRun() and restartAfterMinutes() methods that you can use to control max runtime if you please.
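I'm assuming here that each of those methods returns its threshold; under that assumption, overriding them might look something like:

```php
// Assumed semantics: restart after the process method has run 1,000
// times, or after 60 minutes, whichever comes first.
public function restartAfterNumberOfTimesRun()
{
    return 1000;
}

public function restartAfterMinutes()
{
    return 60;
}
```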

By default, the process method will only run once if your app is not in production, because running a command 1,000 times would make your tests take a very, very long time.

Sleeping For a Bit

If there is nothing for your command to do at a given moment, you don't necessarily want the process method to be invoked multiple times per second. To account for that, in between each invocation of the process method, the trait will sleep for the number of seconds returned by pseudoDaemonSleepSeconds(), which is 7 seconds by default.

If you want to temporarily disable the sleeping, you can return PseudoDaemonControl::DONT_SLEEP from the process method at any time. (PS: DO_SLEEP also exists, but since the default is sleeping, I only included that constant to potentially make your code more readable.)
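Putting the two together, a sketch: the backlog check is my own illustration, reusing the readyForValidation scope from the earlier example.

```php
// Sleep for 30 seconds between invocations instead of the default 7.
public function pseudoDaemonSleepSeconds()
{
    return 30;
}

public function process()
{
    $forms = IntakeForms::readyForValidation()->get();

    foreach ($forms as $form) {
        // Validate the form...
    }

    // Hypothetical: if there's still a backlog, loop again
    // immediately instead of sleeping.
    if (IntakeForms::readyForValidation()->exists()) {
        return PseudoDaemonControl::DONT_SLEEP;
    }
}
```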

Pseudo-Conclusion

We've been using this little helper for several months now at Resolute and it's really nice to have these processes running and not have to worry about them. We use them to handle intake forms (obviously) and various kinds of data imports.

Hopefully this has been helpful for you, feel free to hit me up on Twitter if you have any questions or suggestions!

Me

Thanks for reading! My name is Aaron and I write, make videos, and generally try really hard.

If you ever have any questions or want to chat, I'm always on Twitter.

You can find me on YouTube on my personal channel or my behind the scenes channel.

If you love podcasts, I got you covered. You can listen to me on Mostly Technical.