Rationing Laravel's queue workers' memory & CPU consumption

Mar 10, 2021 · 2 min read

If you're running your queue workers on a server with limited resources, or on a server that also serves HTTP requests and handles other tasks, it's important to ration the resources those workers consume.

Workers Memory Consumption

Over time, while processing your Laravel jobs, some references pile up in memory that PHP's garbage collector can't reclaim, and they will eventually exhaust the server's memory and cause it to crash.

The solution is simple, though: restart the workers more often.

php artisan queue:work --max-jobs=1000 --max-time=3600

You can use the --max-jobs and --max-time options on the php artisan queue:work command to limit the number of jobs the worker may process or the time it should stay up. Once the limit is reached, the worker process will exit and your process manager will start a fresh instance.
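
If you manage your workers with a process manager like Supervisor, a minimal program block could look like the sketch below; the program name and artisan path are placeholders, and autorestart=true is what brings up a fresh worker after the old one exits:

[program:laravel-worker]
command=php /var/www/app/artisan queue:work --max-jobs=1000 --max-time=3600
autostart=true
autorestart=true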

If you happen to know that a certain job requires allocating a lot of memory and you want to ensure the worker restarts after finishing such a job, you may add this to the end of the handle() method:

public function handle()
{
    // Signal the worker to exit after this job; the process manager will start a fresh one.
    app('queue.worker')->shouldQuit = true;
}

Queue Memory Consumption

If you're using Redis to store your queued jobs, you should know that Redis is an in-memory store. The payload of each of your jobs is kept in the Redis server's memory until the job is processed and deleted. The longer the jobs stay in the queue, the more memory you'll need to allocate.
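
To keep an eye on that, you can check Redis' memory usage and the length of the pending jobs list from the CLI; the key name below assumes the default queue on the default Redis connection:

redis-cli info memory | grep used_memory_human
redis-cli llen queues:default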

To limit the memory used by the queue store, make sure the jobs are processed as fast as possible. You can do that by starting more workers to process jobs in parallel.
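
With Supervisor, for example, running more workers in parallel is a matter of raising numprocs in a program block like the one sketched above; when numprocs is greater than one, each process also needs a unique name:

numprocs=4
process_name=%(program_name)s_%(process_num)02d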

You also need to make sure the job payload only includes the minimum amount of data needed by the job instance. Don't pass large objects or data structures to the job's constructor.
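
As an illustration, here's a sketch of a job that receives only an identifier and loads the data it needs inside handle(); the GenerateReport job and Order model are made up for the example. (If you pass an Eloquent model to a job that uses the SerializesModels trait, Laravel already stores only the model's key in the payload, but plain arrays and collections are serialized in full.)

<?php

namespace App\Jobs;

use App\Models\Order;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateReport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $orderId;

    // Only the order ID ends up in the queue payload.
    public function __construct($orderId)
    {
        $this->orderId = $orderId;
    }

    public function handle()
    {
        // Load the heavy data here, inside the worker, instead of serializing it into the payload.
        $order = Order::findOrFail($this->orderId);

        // ... generate the report for $order
    }
}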

Workers CPU Consumption

If your server has 2 CPU cores, it can only run 2 tasks truly in parallel. That doesn't mean you can only run 2 workers on that server, though. When there are more processes than cores, the OS's context-switching mechanism kicks in and lets the processes share the available CPU: it rapidly switches between tasks, so fast that the switching is imperceptible to humans.

Now that we know how the OS rations the available CPU resources, we can see the importance of prioritizing processes. If your server runs a couple of workers and serves HTTP requests at the same time, you may want to give the worker processes a lower priority so the OS allocates more CPU time to the processes serving HTTP requests.

You can do that by starting the worker process with a high nice value:

nice -n 10 php artisan queue:work

A high nice value translates to a low priority. The OS will let this worker process wait a little while it gives more CPU time to more important processes like php-fpm or nginx.

Nice values range from -20 (highest priority) to 19 (lowest priority); without root privileges you may only assign values between 0 and 19.
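
And if the workers are already running, you can lower their priority without restarting them; the pgrep pattern below assumes the workers were started with artisan queue:work:

renice -n 10 -p $(pgrep -f "artisan queue:work")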

Another thing you can do is have the worker rest between jobs and sleep when it finds the queue empty:

php artisan queue:work --rest=0.5 --sleep=5

This worker process will wait half a second between jobs and wait 5 seconds if it finds the queue empty. Resting workers are idle, which gives the OS the chance to allocate more CPU power to other processes.

If you want to learn more about Laravel's queue system, make sure to check out Laravel Queues in Action! I've put everything I know about the queue system into an eBook, along with several real-life use cases. Check it out for a crash course, a cookbook, a guide, and a reference.



