
Fireworq is a lightweight, high-performance, language-independent job queue system with the following abilities.

  • Portability - It is available from any programming language that can talk HTTP. It runs as a single binary without external dependencies.

  • Reliability - It is built on top of RDBMS (MySQL), so that jobs won't be lost even if the job queue process dies. You can apply an ordinary replication scheme to the underlying DB for the reliability of the DB itself.

  • Availability - It supports primary/backup nodes. Only one node is active as the primary at a time, and the others stand by as backups. A backup node automatically takes over when the primary node dies.

  • Scalability - It always works with a single dispatcher per queue, which concurrently dispatches jobs to workers via HTTP. Scalability of the workers themselves should be maintained by a load balancer in the ordinary way. This means that adding a worker never harms the performance of grabbing jobs from a queue.

  • Flexibility - It supports the following features.

    • Multiple queues - You can define multiple queues and use them in different ways: for example, a low priority queue served by a limited number of high latency workers, and a high priority queue served by a large number of low latency workers.
    • Delayed jobs - You can specify a delay for each job, so that the job is dispatched only after the delay has passed.
    • Job retrying - You can specify the maximum number of retries for each job.
  • Maintainability - It can be managed on a Web UI. It also provides metrics suitable for monitoring.

Run the following command and you will get the whole system working all at once. Make sure you have Docker installed before running it.

$ docker run -p 8080:8080 fireworq/fireworq --driver=in-memory --queue-default=default

Pressing Ctrl+C will gracefully shut it down.

Note that the in-memory driver is not for production use. It is intended for trying Fireworq out without a storage middleware.

Preparing a Worker

First of all, you need a Web server which does the actual work for a job. We call it a 'worker'.

A worker must accept a POST request with a body, which is typically a JSON value, and respond with a JSON result. For example, if you have a worker at localhost:3000, it must handle a request like the following.

POST /work HTTP/1.1
Host: localhost:3000
Content-Type: application/json

{"id":12345}

HTTP/1.1 200 OK
Content-Type: application/json

{"status":"success","message":"It's working!"}

The response JSON must have a status field, which describes whether the job has succeeded. It must be one of the following values.

Value               Meaning
"success"           The job succeeded.
"failure"           The job failed and it can be retried.
"permanent-failure" The job failed and it cannot be retried.

Any other values are regarded as "failure". The HTTP status code is always ignored.
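A minimal worker along these lines can be sketched in Python (using the stdlib http.server; the port 3000 and the /work path follow the example above, and the actual work is stubbed out):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_job(payload):
    # Stub for the actual work. The return value must carry the fields
    # Fireworq expects: a "status" of "success", "failure" or
    # "permanent-failure", and optionally a "message".
    return {"status": "success", "message": "It's working!"}

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_job(payload)).encode("utf-8")
        self.send_response(200)  # Fireworq ignores the status code anyway
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the worker at localhost:3000:
#   HTTPServer(("", 3000), WorkerHandler).serve_forever()
```

Only the status field matters to Fireworq; everything else in the response is up to you.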

Enqueuing a Job to Fireworq

Let's make the job asynchronous using Fireworq. All you have to do is make a POST request to Fireworq with a worker URL and a job payload. If you have a Fireworq docker instance and the worker from the previous example is reachable from the container at <docker-host-ip>:3000, then requesting something like the following will enqueue exactly the same job as in the previous example.

$ curl -XPOST -d '{"url":"http://<docker-host-ip>:3000/work","payload":{"id":12345}}' http://localhost:8080/job/foo

When Fireworq gets ready to grab this job, it will POST the payload to the given url. When the job completes on the worker, the log output of Fireworq should say something like this.

fireworq_1  | {"level":"info","time":1507128673123,"tag":"","action":"complete","queue":"default","category":"foo","id":2,"status":"completed","created_at":1507128673025,"elapsed":98,"url":"","payload":"{\"id\":12345}","next_try":1507128673025,"retry_count":0,"retry_delay":0,"fail_count":0,"timeout":0,"message":"It's working!"}
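Since enqueuing is just an HTTP POST, any language works. As a sketch in Python (the localhost:8080 address and the job category foo follow the curl example above; <docker-host-ip> is a placeholder for your worker's host):

```python
import json
from urllib.request import Request, urlopen

def build_job(url, payload):
    # Body of POST /job/{category}: the worker URL to dispatch to,
    # and the payload that will be POSTed to that URL.
    return json.dumps({"url": url, "payload": payload})

def enqueue(fireworq, category, url, payload):
    req = Request(
        "%s/job/%s" % (fireworq, category),
        data=build_job(url, payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as res:
        return json.loads(res.read())

# enqueue("http://localhost:8080", "foo",
#         "http://<docker-host-ip>:3000/work", {"id": 12345})
```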

Further Reading

See the full list of API endpoints for the details of the API.

Fireworq itself provides only a set of API endpoints for inspecting running queues. They are useful for machine monitoring but not intended for human use.

Web UI

Instead, use Fireworqonsole, a powerful Web UI which enables monitoring stats of queues, inspecting running or failed jobs, and defining queues and routings.

Configuration

You can configure Fireworq by providing environment variables when starting the daemon. There are many of them but only the important ones are described here. See the full list for the other variables.


    Specifies a data source name for the job queue and the repository database, in the form user:password@tcp(mysql_host:mysql_port)/database?options. This is for a manual setup and is mandatory there.


    Specifies the name of a default queue. A job whose category is not defined via the routing API will be delivered to this queue. If no default queue name is specified, pushing a job with an unknown category will fail.

    If you already have a queue with the specified name in the job queue database, that one is used. Otherwise, a new queue is created automatically.


    Specifies the default interval, in milliseconds, at which Fireworq checks the arrival of new jobs, used when polling_interval in the queue API is omitted. The default value is 200.


    Specifies the default maximum number of jobs that are processed simultaneously in a queue, used when max_workers in the queue API is omitted. The default value is 20.
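Putting the variables above together, a manual setup against MySQL might be launched like this (a sketch assuming the variable names FIREWORQ_MYSQL_DSN and FIREWORQ_QUEUE_DEFAULT; verify them against the full list, and replace the DSN with your own):

```shell
$ docker run -p 8080:8080 \
    -e FIREWORQ_MYSQL_DSN='user:password@tcp(mysql_host:3306)/fireworq' \
    -e FIREWORQ_QUEUE_DEFAULT=default \
    fireworq/fireworq
```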

  • Copyright (c) 2017 The Fireworq Authors. All rights reserved.
  • Fireworq is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.