Fireworq is a lightweight, high-performance job queue system with the following abilities.

  • Portability - It is available from ANY programming language which can talk HTTP. It works with a single binary without external dependencies.

  • Reliability - It is built on top of RDBMS (MySQL), so that jobs won't be lost even if the job queue process dies. You can apply an ordinary replication scheme to the underlying DB for the reliability of the DB itself.

  • Availability - It supports primary/backup nodes. Only one node is primary at a time; the others stand by as backups. When the primary node dies, a backup node automatically takes over.

  • Scalability - It always works with a single dispatcher per queue which can concurrently dispatch jobs to workers via HTTP. Scalability of workers themselves should be maintained by a load balancer in the ordinary way. This means that adding a worker will never harm performance of grabbing jobs from a queue.

  • Flexibility - It supports the following features.

    • Multiple queues - You can define multiple queues and use them in different ways: for example, one for a low priority queue for a limited number of high latency workers and another one for a high priority queue for a large number of low latency workers.
    • Delayed jobs - You can specify a delay for each job; the job will not be dispatched until the delay has passed.
    • Job retrying - You can specify the maximum number of retries for each job.
  • Maintainability - It can be managed on a Web UI. It also provides metrics suitable for monitoring.

Getting Started

Run the following commands and you will get the whole system working all at once. Make sure you have Docker installed before running these commands.

$ git clone
$ cd fireworq
$ script/docker/compose up

Once Fireworq is ready, it listens on localhost:8080 (on the host machine). Specify the FIREWORQ_PORT environment variable if you want Fireworq to listen on a different port.

$ FIREWORQ_PORT=1234 script/docker/compose up

Pressing Ctrl+C will gracefully shut it down.

Behind HTTP proxy

If you are behind an HTTP proxy, script/docker/compose up will fail. Add the following configuration to script/docker/docker-compose.yml:

+    build:
+      args:
+        - http_proxy=http://.../
+        - https_proxy=http://.../

Using the API

Preparing a Worker

First of all, you need a Web server which does the actual work for a job. We call it a 'worker'.

A worker must accept a POST request with a body, which is typically a JSON value, and respond with a JSON result. For example, if you have a worker at localhost:3000, it must handle a request like the following.

POST /work HTTP/1.1
Host: localhost:3000

HTTP/1.1 200 OK

{"status":"success","message":"It's working!"}

The response JSON must have a status field, which describes whether the job succeeded. It must be one of the following values.

Value                 Meaning
"success"             The job succeeded.
"failure"             The job failed and it can be retried.
"permanent-failure"   The job failed and it cannot be retried.

Any other value is regarded as "failure". The HTTP status code is always ignored.

Enqueuing a Job to Fireworq

Let's make the job asynchronous using Fireworq. All you have to do is to make a POST request to Fireworq with a worker URL and a job payload. If you have a docker-composed Fireworq instance and you know your docker host IP (from the container's point of view), then a request like the following will enqueue exactly the same job as in the previous example.

$ curl -XPOST -d '{"url":"","payload":{"id":12345}}' http://localhost:8080/job/foo

When Fireworq is ready to grab this job, it POSTs the payload to the specified url. When the job completes on the worker, the log output of Fireworq should say something like this.

fireworq_1  | {"level":"info","time":1507128673123,"tag":"","action":"complete","queue":"default","category":"foo","id":2,"status":"completed","created_at":1507128673025,"elapsed":98,"url":"","payload":"{\"id\":12345}","next_try":1507128673025,"retry_count":0,"retry_delay":0,"fail_count":0,"timeout":0,"message":"It's working!"}

Further Reading

See the full list of API endpoints for the details of the API.

Inspecting Running Queues

Fireworq itself provides only a minimal set of API endpoints for inspecting running queues. They are useful for machine monitoring but are not intended for human use.

Instead, use Fireworqonsole, a powerful Web UI which enables monitoring stats of queues, inspecting running or failed jobs and defining queues and routings.

Web UI


Configuration

You can configure Fireworq by providing environment variables when starting a daemon. There are many of them, but only the important ones are described here. See the full list for the other variables.


    Specifies a data source name for the job queue and the repository database, in the form user:password@tcp(mysql_host:mysql_port)/database?options. This is for a manual setup and is mandatory for it.


    Specifies the name of a default queue. A job whose category is not defined via the routing API will be delivered to this queue. If no default queue name is specified, pushing a job with an unknown category will fail for a manual setup. A docker-composed instance uses default as a default value.

    If you already have a queue with the specified name in the job queue database, that one is used. Or otherwise a new queue is created automatically.


    Specifies the default interval, in milliseconds, at which Fireworq checks the arrival of new jobs, used when polling_interval in the queue API is omitted. The default value is 200.


    Specifies the default maximum number of jobs that are processed simultaneously in a queue, used when max_workers in the queue API is omitted. The default value is 20.

Other Topics


License

  • Copyright (c) 2017 The Fireworq Authors. All rights reserved.
  • Fireworq is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.