Queue the haters: the work queue in Redis

If you are reading this, you probably work in B2B SaaS, as no one else would ever choose to put a work queue in their software. Alas, most of us work on less-than-sexy features in seemingly boring jobs. But the reality is that those boring jobs are the ones that pay the bills. Businesses want to automate away their problems without throwing more people at them, and so the work queue was born.
If you are looking for the code real quick, it's right here.
I’ve worked at a handful of companies over the years, and almost all of them were in some way a B2B SaaS company. All of them had either built a work queue or had one on their roadmap.
What is a work queue? Good question. Well, you didn’t ask it, but I’m going to tell you because you are reading this article. A work queue is a list of items that need to be completed by a group of people. To give you an example, let’s look at a restaurant.
There are multiple queues inside of a restaurant, and operating a restaurant is all about moving people through these queues as fast as you can. The more tables you turn over, the more money you make, and the more money you make the better chance you have at funding that midlife crisis full of sports cars and “bitches” or “bros” depending on which form of sexism you prefer.
Anyway, you have 4 main queues when you look at the journey of the customer.
- Seating queue
- Ordering queue
- Paying queue
- GTFO queue
I don’t know if there is a real need for the 4th, but if I were turning tables, that would absolutely be a metric for me.
The roles in a restaurant are clear; you have:
- Host(ess)
- Server
- Cook
- Busser
And each of them is really concerned with filling and emptying those queues.
If we were to represent this in software, we would have a role or permission for each of these personas, and a queue for each of the steps above.
Your first Work Queue
This is where the fun begins. I chose Redis, aka localStorage for servers, because it solves backend problems well and, to be honest, it’s the one that seemed to fit best. We will be using Sigue, a simple npm package for GraphQL. GraphQL is like the SOAP API standard, except, unlike SOAP, there are still people who like it.
A Quick Note
I did evaluate other technologies for this, at least as alternatives to Redis. Azure Service Bus was one of them. It has a cool Peek & Lock feature that would have worked very well here, but the same connection, yes, connection, has to be used to abandon or complete the message. That's limiting when your backend runs as multiple instances, whether in Kubernetes or as serverless functions. If you don't have that requirement and run only one web server, Azure Service Bus could be a nice tool for this.
Kafka also got a look, but since it mostly offers plain queues, I stuck with Redis: something I know, and something with more data types, like sorted sets.
The Pseudo Code
Let’s be real, I don’t care about having all 4 queues represented, so we aren’t going to build each of them. We are only going to make the reservations queue. That queue is what the hostess uses to keep track of who has and hasn't been seated. They pull a name and go look for the party with the reservation; if the party isn't found after 2 minutes, they are placed back in the queue or removed.
Let's start with a reservation by making its data model. It needs a name, a phone number, and the number of people in the group. We will store that in the database, not in the queue. Sigue uses Sequelize, so let's make that model.
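A sketch of what that model could look like. The field names (name, phone, partySize) are assumptions; check the repo for the real ones. It's written as a factory so it works with whatever Sequelize instance Sigue hands you.

```javascript
// Sketch of the reservation model. Field names are assumed, not
// taken from the repo. `sequelize` is the Sequelize instance and
// `DataTypes` is Sequelize's type namespace.
function defineReservation(sequelize, DataTypes) {
  return sequelize.define('Reservation', {
    name: { type: DataTypes.STRING, allowNull: false },
    phone: { type: DataTypes.STRING, allowNull: false },
    partySize: { type: DataTypes.INTEGER, allowNull: false },
  });
}
```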

Next we need to make the queue. I am going to use the npm package ioredis to make connections with Redis and handle the queues. Redis has a handful of data types; one is a list, which most people would call a queue. It simply takes in strings and moves them in or out at the front or the back of the list. One of its limitations is that you can't easily reference individual items in the list to update, reprioritize, or delete them. Once they are in the list, they kind of just sit there until you pop them.
Enter sorted sets. A sorted set allows for the same concept as a queue, but based on a score. That score determines the item's priority, and it means you can insert items wherever you want in the queue, at any time. This fits our needs a little better.
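To make the difference concrete, here is a toy in-memory version of a sorted set, just enough to show the semantics. The class and member names are made up; in Redis, these operations are the ZADD, ZPOPMIN, and ZREM commands.

```javascript
// Toy in-memory sorted set: members are unique, ordered by score,
// and the lowest score pops out first.
class ToySortedSet {
  constructor() { this.scores = new Map(); } // member -> score
  zadd(score, member) { this.scores.set(member, score); } // insert OR re-score
  zpopmin() {
    let best = null;
    for (const [member, score] of this.scores) {
      if (best === null || score < best[1]) best = [member, score];
    }
    if (best) this.scores.delete(best[0]);
    return best; // [member, score], or null when empty
  }
  zrem(member) { return this.scores.delete(member); }
}

// Unlike a list, we can drop an item in at any position by picking
// its score, and we can re-score or remove it later.
const q = new ToySortedSet();
q.zadd(30, 'walk-in');
q.zadd(10, 'early-reservation');
q.zadd(20, 'late-reservation');
q.zadd(5, 'walk-in'); // bump the walk-in to the front by re-scoring
```

Calling `q.zpopmin()` now hands back the walk-in, score 5, even though it went in last, which is exactly the flexibility a plain list denies us.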
This is a make-believe example, but in the real world a work queue generally has something someone has to work on, and if they don't get it done, it needs to be released so someone else can work on it. Let's add that requirement to this queue.
So now we will have a sorted set named reservations. To mimic the Peek & Lock feature from Azure Service Bus, we will add another sorted set named reservations_delayed. The Redis code that writes to these queues as needed lives over in the project repo.
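Here's a sketch of the same idea with the Redis client injected, so any object that speaks zadd/zpopmin/zrem will do: an ioredis instance (`new Redis()`) in production, or a fake in tests. The function names and the two-minute lock window are assumptions drawn from the flow described above, not the repo's exact code.

```javascript
// Queue helper sketch. `redis` can be an ioredis client or any
// stand-in exposing zadd/zpopmin/zrem. Key names match the two
// sorted sets described above.
const QUEUE = 'reservations';
const DELAYED = 'reservations_delayed';
const LOCK_MS = 2 * 60 * 1000; // two minutes to find the party

// Score is an epoch timestamp: earlier reservations sort first.
async function addToQueue(redis, id, now = Date.now()) {
  await redis.zadd(QUEUE, now, String(id));
}

// Peek & Lock: pop the next reservation and park it in the delayed
// set, scored by when its lock expires.
async function getNext(redis, now = Date.now()) {
  const popped = await redis.zpopmin(QUEUE); // ioredis returns [member, score]
  if (!popped || popped.length === 0) return null;
  const [id] = popped;
  await redis.zadd(DELAYED, now + LOCK_MS, id);
  return id;
}

// The party was seated: drop them from the delayed set for good.
async function complete(redis, id) {
  await redis.zrem(DELAYED, String(id));
}
```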

Each reservation will be represented in the sorted set by its id (primary key) and a score, which in this case will be an epoch-based date. Basically, it's a date represented by a number, so the sooner you made your reservation, the sooner you get seated.
Now we just need to hook it up to the API and add some testing. Remember, I am using Sigue right now, which helps get the server up and running. Using Sigue, we will add 2 custom resolvers: one to get the next item, and one to complete the item once the server has found the party.
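Sigue's resolver wiring is its own thing, so here is just the shape of those two resolvers as plain functions. The `queue` argument is assumed to be anything with the getNext/complete shape, and `Reservation` is the Sequelize model; the resolver names follow the queries used later in this post.

```javascript
// The two custom resolvers, sketched with the queue helper and
// model injected so they are easy to test.
function makeReservationResolvers(queue, Reservation) {
  return {
    Query: {
      // Pop the next reservation id from the queue and load the row
      // so the hostess has a name to call out.
      getNext: async () => {
        const id = await queue.getNext();
        return id ? Reservation.findByPk(id) : null;
      },
    },
    Mutation: {
      // The party was found and seated; clear them out for good.
      complete: async (_parent, { id }) => {
        await queue.complete(id);
        return true;
      },
    },
  };
}
```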
We still need to add a party to the queue once their reservation is created, so let's add the addToQueue call to the afterSave hook on the model.
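The hook registration could look like this; Sequelize's addHook takes the hook name and a callback that receives the saved instance. The wrapper function is an assumption for illustration.

```javascript
// Register the afterSave hook so every saved reservation lands in
// the queue. `Reservation` is the Sequelize model; `addToQueue` is
// the queue helper described earlier.
function registerQueueHook(Reservation, addToQueue) {
  Reservation.addHook('afterSave', async (reservation) => {
    await addToQueue(reservation.id);
  });
}
```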

The GraphQL queries
To write the GraphQL queries, let's use Apollo's Sandbox. We will need to start the server so that Apollo Sandbox can see what is in the API. Check out my blog next week where we cover Docker, but for now just run `docker compose up -d` to get Redis up and running. Then we can start the Node API server by running `npm start`.

Once it's up and running, you can enter http://127.0.0.1:4000/graphql as your server, and if it is all set up correctly, you should see the GraphQL endpoints for your server, including the types they return. Sandbox is meant for exploring an API, so feel free to poke around.
Create a reservation
We will attempt to create a reservation, as if the hostess had just taken one down.
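The mutation looks something like this; the exact operation and field names depend on what Sigue generated for you, so treat these as assumptions:

```graphql
mutation CreateReservation {
  createReservation(name: "Pat Smith", phone: "555-0123", partySize: 4) {
    id
    name
    partySize
  }
}
```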

You can follow along by clicking here, or just set up your own query for CreateReservation. Once created, you should be able to run it and see the result:

So that seems to have added it to the SQLite database we are running locally. And since we added the addToQueue function to the afterSave Sequelize lifecycle hook, we should be able to see the data in Redis as well. I like to use TablePlus for my data work; it is the best SQL client I have found on the Mac, and it also has a Redis viewer. Let's check to make sure the data is there.

Sure enough, it's there, along with the one test reservation I ran before this.
A table is ready
Now let's pretend we called the group up for their reservation. We will use the getNext GraphQL query to do this. It places them in the delayed queue until they are seated. Let's run that query and then check on it in Redis.
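The query itself is small; the selected fields here are assumptions, so match them to your schema:

```graphql
query {
  getNext {
    id
    name
    partySize
  }
}
```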


Seating the party at the restaurant
OK, as the hostess, let's pretend we seated the group, and now we need to remove them from the queue. We will complete record 11, which represents the party that was just seated, and record 10 too. Then let's double-check the delayed queue after we complete the records.
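The completion call is just a mutation with the record's id; the name `complete` here is an assumption, so use whatever your schema exposes:

```graphql
mutation {
  complete(id: 11)
}
```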


Ensuring stability with testing
This is all well and good, but checking every part of this process by hand is tedious. Automating the tests is the right thing to do if you want to ensure your code continues to run in the future, so we will run the tests, which were written in Mocha. I prefer to run SOME integration tests over many unit tests. These tests cover the flow above.
To run the tests, type `npm run test` in the terminal and watch the output.

Conclusion
This could be a really good way to handle work queues, especially for large organizations with a lot of data. Most of the time, though, it's probably overkill; the same thing can be implemented with a database table and something like a status column.
If you are a startup, please don't use Redis for this. Get your work queue out the door, and know you can implement this in the future.
I included all of the steps because that is what a real feature looks like: concept to automated test. As long as your CI/CD is set up, you can trust this won't quietly regress in the future.
And as always, get it out the door, but don't skip any steps.