Modern Node.js apps often need to perform background jobs. Offloading to a job queue is a great way to preserve web performance when faced with sections of code that are too slow or resource-intensive to handle during an HTTP request. If your app needs to send emails, generate PDFs, process images, or aggregate data, you probably need background jobs.
Offloading these jobs (sometimes called tasks) to a job queue ensures your web process remains responsive and keeps latency down. A typical setup is to have your web processes enqueue jobs to an external system, and one or more worker processes consume and execute those jobs asynchronously.
This works well for keeping your web processes free and performant.
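In miniature, the pattern looks like this. This is a toy in-memory sketch just to show the shape of the enqueue/consume split; real job queues keep jobs in a shared store outside the web process (like Redis) so they survive restarts and can be consumed by separate worker processes:

```javascript
// Toy in-memory queue: illustrates the enqueue/consume split only.
// Real queues live outside the web process (e.g. in Redis) for durability.
const queue = [];

// Web process side: enqueueing is cheap, so the HTTP response isn't blocked.
function enqueue(name, payload) {
  queue.push({ name, payload });
}

// Worker process side: pull jobs off the queue and run them one by one.
async function drain(handlers) {
  while (queue.length > 0) {
    const { name, payload } = queue.shift();
    await handlers[name](payload);
  }
}

// During a request, the web process just enqueues and returns immediately:
enqueue('sendEmail', { to: 'user@example.com' });
```

A worker process would then run `drain({ sendEmail: ... })` on its own schedule. Every library below follows this same producer/consumer shape; they differ mainly in where jobs are stored and what extras (retries, scheduling, priorities) come built in.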
So you’ve got a Node.js app, and you know what needs to be passed off to a job queue. But do you know what job queueing system to use?
If you’re looking for a quick answer, I won’t make you wait. BullMQ is right most of the time. But let’s take a look at our options!
Bull and BullMQ for job queues
BullMQ is definitely the most popular Node.js job queue (especially if you also consider Bull).
It’s a powerful queue library backed by Redis, known for its high performance and rich feature set. It can process a large volume of jobs quickly by leveraging Redis and an efficient implementation under the hood.
👀 Note
Understanding Bull vs BullMQ: One really important thing to note is that Bull’s original library is now in maintenance mode. The authors have moved efforts to BullMQ, a modern TypeScript rewrite that will receive new features going forward.
Jobs are persisted in Redis, so they won’t be lost if a worker crashes. BullMQ provides job persistence, automatic retries, error handling, and priority queues. Together, these give you a strong baseline of reliability.
BullMQ also supports multiple workers consuming the same queue, and you can configure concurrency (the number of jobs a single worker can process in parallel). This horizontal scaling ability means BullMQ can handle a lot of load and is also perfect for autoscaling, which we’ll get into later.
BullMQ is essentially a new (major) version of Bull, with mostly the same API and using Redis, but with improved internals. If you’re already using Bull, that’s fine. But if you’re starting fresh, consider BullMQ so you get long-term support and benefit from the improvements.
Since they’re Redis-based, Bull and BullMQ are naturally suited for modern web apps that may run across multiple processes. It’s no surprise Ruby’s Sidekiq uses Redis too. All workers connect to the same Redis instance, so adding more worker processes (whether permanently or by autoscaling) increases the throughput of job processing. Jobs will be pulled by any available worker.
BullMQ includes mechanisms to detect stalled jobs and requeue them so they aren’t silently lost. For most web applications, a single Redis-backed queue can coordinate dozens of workers reliably. If your app already uses Redis, BullMQ fits in nicely. If not, you’ll need to introduce Redis just for the queue, which is probably a worthwhile tradeoff for the reliability it provides in most cases.
Bee-Queue for job queues
Bee-Queue is another popular Redis-backed job queue for Node. It’s designed with a focus on simplicity and speed, inspired by the shortcomings of older libraries. Like BullMQ, Bee-Queue requires a Redis instance to operate, a common theme we’ll continue to see.
Bee-Queue intentionally has a smaller feature set than BullMQ, trading breadth of features for low complexity and high performance. It gives us all of the core job queueing capabilities, but leaves out some of the advanced features of BullMQ.
This tradeoff is right for some people, as it’s notably easier to get started.
The library’s API is relatively straightforward. You create a queue, define a job processor function, and enqueue jobs. My time reading Bee-Queue’s examples and documentation has been stress-free as they’re very easy to understand. This can translate to faster initial setup and less overhead in learning the tool, something that’s really underrated in medium-sized software projects.
Despite being lightweight, Bee-Queue does include essentials for production. You get persistence in Redis, job completion callbacks, and even rate limiting and retry logic. It supports job timeouts, retry attempts, and will handle “stalled job” detection.
What it lacks is some features of Bull and BullMQ, like built-in priority levels or repeatable (scheduled) jobs.
Multiple Bee-Queue worker processes can consume from the same queue even if they’re on different machines, making scaling as simple as running more workers. This makes it a great fit for autoscaling scenarios.
In practice, you’d run one or more worker processes with Bee-Queue. If you need more throughput, just increase the number of workers, and jobs will be distributed across them. If you’re okay with using Redis (and most Node apps can add Redis via a managed service fairly easily), Bee-Queue provides a nice balance of simplicity and performance.
Still, it’s been 2 years since the last release of Bee-Queue, and the lack of recent maintenance/development may put off a lot of developers.
Agenda for job queues
Agenda is a different breed of job queue for Node when compared to BullMQ and Bee-Queue. It is primarily a job scheduler built on MongoDB, not Redis! It focuses on scheduling jobs (think cron jobs and delayed jobs), but it also supports immediate job queueing with concurrency control.
Agenda is a popular choice, especially for teams already using MongoDB, since it uses your MongoDB database to store job information. If I were in a project not already using MongoDB, this wouldn’t be my first choice.
Agenda’s features overlap with BullMQ and Bee-Queue in some areas, but it has its own philosophy. Agenda stores jobs in a MongoDB collection, so if your application already uses MongoDB, you don’t need an extra infrastructure component for the queue. Jobs are persisted to the database, which ensures durability.
Agenda can also work with other databases (it supports a few Mongo-like interfaces), giving some flexibility in persistence. Still, it shines in scheduling future or recurring jobs. It offers a human-readable syntax (but still supports cron syntax) and the ability to schedule jobs at specific dates or intervals.
For example, you can schedule a job to run every day at 8 am, or run once a week, all using cron patterns or (close to) plain English. This makes Agenda ideal for background jobs that need to run on a schedule.
Agenda runs as a single-process scheduler: it pulls jobs from Mongo and processes them in the same process. It does support concurrency (multiple jobs at once in one process) and can be scaled to multiple processes using MongoDB’s locking mechanism (to ensure two processes don’t run the same job).
However, scaling horizontally with Agenda is not as straightforward as with Redis queues. Agenda is generally single-master, meaning one instance should own scheduling to avoid duplicate runs of recurring jobs, though multiple workers can cooperate on different jobs. Scaling out isn’t impossible, of course, but the path is bumpier.
Agenda is probably best suited for applications that need cron-like scheduling and already use MongoDB. If you have a Node app in production that’s already using Mongo, you can use Agenda to schedule jobs without introducing Redis. It’s great for things like daily reports, periodic cleanup jobs, or any job that must run X times a day/week without needing to support another infrastructure piece.
Using a message broker like RabbitMQ
Instead of using a Node-specific library, you can opt for a message broker service such as RabbitMQ, Amazon SQS, or Google Cloud Tasks. These are not Node.js libraries. They’re external systems that Node can interface with through their APIs or client libraries.
For example, RabbitMQ is a robust open-source message queue that many large systems use. In a Node app, you might use a package to publish and consume messages from RabbitMQ.
The advantage of brokers like RabbitMQ is primarily reliability and advanced messaging patterns like acknowledgments and dead-letter queues.
Similarly, cloud services like AWS SQS or even Google Cloud Tasks are fully managed queues. They remove the need to run Redis or RabbitMQ yourself, which is attractive to a lot of people. These can scale virtually indefinitely and handle autoscaling scenarios by design.
The trade-off with using external cloud queues is that you’ll have to implement some features in your application code, like deciding how to schedule jobs or doing retries. Also, there’s a bit more latency as calls go over the network. Developer experience might not be as seamless as using a Node library, but if you prefer not to manage any infrastructure, they are a very reasonable option.
Autoscaling your workers
Scaling Node job queues is a necessary part of running them in production. Offloading intensive jobs to a queue protects your web processes, but it doesn’t make the jobs themselves run any faster; the work still has to happen somewhere, and your workers need enough capacity to keep up.
There are two big levers you can pull to scale your Node job queues. Vertical scaling means using more powerful workers with more threads/processes. Meanwhile, horizontal scaling increases the number of worker processes or machines. Comprehensive solutions require attention to both.
As we talked about above, the major Node job queues support horizontal scaling without too much hassle, so it’s worth putting some effort into. You can do this manually, but it’s best practice to set up an autoscaler.
This lets you keep your hands off: the autoscaler adds worker processes when your existing ones can’t keep up with demand and removes them when demand drops, which saves you money. Still, most autoscalers leave much to be desired. Heroku’s autoscaler doesn’t work for workers at all, and other major platforms that do support workers use CPU as the autoscaling metric, which is not a good way to measure demand on asynchronous worker processes.
Judoscale is a powerful autoscaler that you can add to most any hosting setup. The autoscaling algorithm scales based on queue latency, which is a much better indicator of queue well-being than CPU usage. If you’re running a Node app in production, try Judoscale’s free plan to see if it’s right for you.
Comparing Node job queue options and making a decision
My opinion here is somewhat controversial in that I think you should value developer experience a lot in your decision-making. That means using BullMQ unless you really need a ton of extra features, in which case use a message broker like RabbitMQ.
If your app environment already includes a certain datastore, leaning into that can simplify setup. For instance, if you use Redis, Bull or BullMQ will be straightforward to add. If you use MongoDB, Agenda might integrate more naturally. A solution that fits your existing stack usually means less friction for you, which I think you should place a premium on.