What is Tarantool

Tarantool is a Lua application server integrated with a database management system. It has a "fiber" model, which means that many Tarantool applications can run simultaneously on a single thread, while the Tarantool server itself can run multiple threads for input-output and background maintenance. It incorporates LuaJIT, a "Just In Time" Lua compiler, Lua libraries for the most common applications, and the Tarantool Database Server, an established NoSQL DBMS. Thus Tarantool serves all the purposes that have made Node.js and Twisted popular, and additionally supports data persistence.

The database API allows for permanently storing Lua objects, managing object collections, creating or dropping secondary keys, making changes atomically, configuring and monitoring replication, performing controlled fail-over, and executing Lua code triggered by database events. Remote database instances are accessible transparently via a remote-procedure-invocation API.

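As an illustrative sketch of that API (the space and index names here are made up for the example, not part of this image), a Lua snippet run inside Tarantool might look like:

```lua
-- Run inside a Tarantool instance; the global 'box' is provided by the server.
box.cfg{listen = 3301}

-- Create an object collection ("space") with a primary and a secondary key.
local users = box.schema.space.create('users', {if_not_exists = true})
users:create_index('primary', {parts = {1, 'unsigned'}, if_not_exists = true})
users:create_index('by_name', {parts = {2, 'string'}, if_not_exists = true})

-- Atomic insert; the tuple is stored persistently.
users:insert{1, 'alice'}

-- Execute Lua code triggered by database events.
users:on_replace(function(old, new)
    print('users changed:', new and new[2])
end)
```
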
For more information, visit the official Tarantool website.
If you just want to quickly try out Tarantool, run this command:

$ docker run --rm -t -i tarantool/tarantool

This will create a one-off Tarantool instance and open an interactive console. From there you can either type tutorial() or follow the official documentation.

About this image

This image is a bundle containing Tarantool itself and a combination of Lua modules and utilities often used in production. It is designed to be a building block for modern services and, as such, makes a few design choices that set it apart from a systemd-controlled Tarantool.

First, if you pin a specific version of this image, you can rely on never receiving updates with incompatible modules: we only do major module updates when changing the image version.

The entry-point script provided by this image uses environment variables to configure "external" aspects of the instance, such as replication sources and memory limits. If specified, they override the settings provided in your code. This way you can use docker-compose or other orchestration and deployment tools to set those options.

There are a few convenience tools that make use of the fact that there is only one Tarantool instance running in the container.

What's on board

  • avro-schema: Apache Avro schemas for your data
  • expirationd: Automatically delete tuples based on expiration time
  • queue: Priority queues with TTL and confirmations
  • connpool: Keep a pool of connections to other Tarantool instances
  • shard: Automatically distribute data across multiple instances
  • http: Embedded HTTP server with Flask-style routing support
  • curl: HTTP client based on libcurl
  • pg: Query PostgreSQL right from Tarantool
  • mysql: Query MySQL right from Tarantool
  • memcached: Access Tarantool as if it were a Memcached instance
  • metrics: Metric collection library for Tarantool
  • prometheus: Instrument code and export metrics to Prometheus monitoring
  • mqtt: Client for MQTT message brokers
  • gis: Store and query geospatial data
  • gperftools: Collect CPU profiles to find bottlenecks in your code

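For instance, the queue module listed above can be used like this (a minimal sketch; the tube name and payload are illustrative):

```lua
local queue = require('queue')  -- shipped in this image

-- Create a FIFO tube with per-task TTL support.
local tube = queue.create_tube('tasks', 'fifottl', {if_not_exists = true})

-- Producer: put a task that expires in 60 seconds unless taken.
tube:put({job = 'send_email', to = 'user@example.com'}, {ttl = 60})

-- Consumer: take a task, process it, then confirm completion.
local task = tube:take(1)      -- wait up to 1 second for a task
if task then
    -- ... process task[3] (the payload) ...
    tube:ack(task[1])          -- task[1] is the task id
end
```
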
If the module you need is not listed here, there is a good chance we may add it. Open an issue on our GitHub.

Data directories

  • /var/lib/tarantool is a volume containing operational data (snapshots, xlogs, and vinyl runs)

  • /opt/tarantool is the place where users should put their Lua application code

Convenience utilities

  • console: run it without any arguments to open an administrative console to the running Tarantool instance

  • tarantool_is_up: returns 0 if Tarantool has been initialized and is operating normally

  • tarantool_set_config.lua: allows you to dynamically change certain settings without recreating the container

How to use this image

Start a Tarantool instance

$ docker run --name mytarantool -p3301:3301 -d tarantool/tarantool

This will start an instance of Tarantool and expose it on port 3301. Note that by default there is no password protection, so don't expose this instance to the outside world.

In this case, since no Lua code is provided, the entry-point script initializes the database using a sane set of defaults. Some of them can be tuned with environment variables (see below).

Start a secure Tarantool instance

$ docker run --name mytarantool -p3301:3301 -e TARANTOOL_USER_NAME=myusername -e TARANTOOL_USER_PASSWORD=mysecretpassword -d tarantool/tarantool

This starts an instance of Tarantool, disables guest login, and creates a user named myusername with admin privileges and the password mysecretpassword.

As in the previous example, the database is initialized automatically.

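Conceptually, the secure-instance setup above boils down to the entry-point script running something like this inside Tarantool (a hedged sketch, not the image's literal script):

```lua
-- Illustrative only: roughly what happens when TARANTOOL_USER_NAME and
-- TARANTOOL_USER_PASSWORD are set. Assumes box.cfg{} has already been called.
local user = os.getenv('TARANTOOL_USER_NAME') or 'guest'
local pass = os.getenv('TARANTOOL_USER_PASSWORD')

box.once('create_user', function()
    if user ~= 'guest' then
        box.schema.user.create(user, {password = pass, if_not_exists = true})
        box.schema.user.grant(user, 'read,write,execute', 'universe',
                              nil, {if_not_exists = true})
        -- the real script also disables guest login at this point
    end
end)
```
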
Connect to a running Tarantool instance

$ docker exec -t -i mytarantool console

This will open an interactive admin console on the running instance named mytarantool. You may safely detach from it at any time; the server will continue running.

This console doesn't require authentication, because it uses a local Unix socket inside the container to connect to Tarantool. It does, however, require direct access to the container.

If you need a remote console over TCP/IP, use the tarantoolctl utility as explained in the official documentation.

Start a master-master replica set

You can start a replica set with Docker alone, but it's more convenient to use docker-compose. Here's a simplified docker-compose.yml for starting a master-master replica set:

version: '2'

services:
  tarantool1:
    image: tarantool/tarantool:1.10.2
    environment:
      TARANTOOL_REPLICATION: "tarantool1,tarantool2"
    networks:
      - mynet
    ports:
      - "3301:3301"

  tarantool2:
    image: tarantool/tarantool:1.10.2
    environment:
      TARANTOOL_REPLICATION: "tarantool1,tarantool2"
    networks:
      - mynet
    ports:
      - "3302:3301"

networks:
  mynet:
    driver: bridge

Start it like this:

$ docker-compose up

Adding application code with a volume mount

The simplest way to provide application code is to mount your code directory to /opt/tarantool:

$ docker run --name mytarantool -p3301:3301 -d -v /path/to/my/app:/opt/tarantool tarantool/tarantool tarantool /opt/tarantool/app.lua

Here /path/to/my/app is a host directory containing Lua code. Note that for your code to actually run, you must execute the main script explicitly; hence the trailing tarantool /opt/tarantool/app.lua, assuming that your app's entry point is called app.lua.

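A minimal app.lua for the command above could look like this (the space and index names are illustrative):

```lua
-- Entry point run as: tarantool /opt/tarantool/app.lua
-- Plain box.cfg call; when run under this image's entry-point wrapper,
-- environment-derived settings may be applied on top of it.
box.cfg{listen = 3301}

-- box.once guards the bootstrap so restarts don't recreate the schema.
box.once('bootstrap', function()
    local s = box.schema.space.create('kv')
    s:create_index('primary', {parts = {1, 'string'}})
end)
```
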
Adding application code using container inheritance

If you want to pack and distribute an image with your code, you may create your own Dockerfile as follows:

FROM tarantool/tarantool:1.10.2
COPY app.lua /opt/tarantool
CMD ["tarantool", "/opt/tarantool/app.lua"]

Please pay attention to the format of CMD: unless it is specified in square brackets (exec form), the "wrapper" entry point provided by our Docker image will not be called, making it impossible to configure your instance using environment variables.

Environment Variables

When you run this image, you can adjust some of Tarantool settings. Most of them either control memory/disk limits or specify external connectivity parameters.

If you need to fine-tune specific settings not described here, you can always inherit this container and call box.cfg{} yourself. See official documentation on box.cfg for details.

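For example, an inherited image's init script could combine environment-driven settings with fine-grained ones (the values below are illustrative, not defaults of this image):

```lua
-- Called from your own init script in a derived image; settings not exposed
-- as environment variables can be passed to box.cfg directly.
box.cfg{
    listen              = os.getenv('TARANTOOL_PORT') or 3301,
    memtx_memory        = 512 * 1024 * 1024,  -- 512 MB for in-memory tuples
    checkpoint_interval = 1800,               -- snapshot every 30 minutes
    log_level           = 5,                  -- INFO
}
```
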
TARANTOOL_USER_NAME

Sets the name of the user used for remote connections. Default is 'guest'. Since the guest user in Tarantool cannot have a password, it is highly recommended that you change it.

TARANTOOL_USER_PASSWORD

Sets the password of the user above. For security reasons, it is recommended that you never leave this variable unset. In the example above, it is set to "mysecretpassword".

TARANTOOL_PORT

Optional. Tells Tarantool to listen for incoming connections on the specified port. Default is 3301.

TARANTOOL_PROMETHEUS_DEFAULT_METRICS_PORT

Optional. If specified, Tarantool will start an HTTP server on the given port and expose a Prometheus metrics endpoint with common metrics (fibers, memory, network, replication, etc.).

TARANTOOL_REPLICATION

Optional. Comma-separated list of URIs to treat as replication sources. On startup, Tarantool will attempt to connect to those instances, fetch the data snapshot, and start replicating transaction logs; in other words, it will become a replica. For a multi-master configuration, the other participating Tarantool instances should be started with the same TARANTOOL_REPLICATION. (NB: applicable only to Tarantool >= 1.7.)



TARANTOOL_MEMTX_MEMORY

Optional. Specifies how much memory Tarantool allocates for actually storing tuples, in bytes. When the limit is reached, INSERT and UPDATE requests start failing. Default is 268435456 (256 megabytes).

TARANTOOL_SLAB_ALLOC_FACTOR

Optional. The multiplier used for computing the sizes of the memory chunks that tuples are stored in. A lower value may result in less wasted memory, depending on the total amount of memory available and the distribution of tuple sizes. Default is 1.05.

TARANTOOL_MEMTX_MAX_TUPLE_SIZE

Optional. Size of the largest allocation unit, in bytes. It can be increased if you need to store large tuples. Default is 1048576 (1 megabyte).

TARANTOOL_MEMTX_MIN_TUPLE_SIZE

Optional. Size of the smallest allocation unit, in bytes. It can be decreased if most of your tuples are very small. Default is 16.

TARANTOOL_CHECKPOINT_INTERVAL

Optional. Specifies how often snapshots are made, in seconds. Default is 3600 (once an hour).

TARANTOOL_FORCE_RECOVERY

Optional. When set to "true", Tarantool tries to continue if there is an error while reading a snapshot file or a write-ahead log file: it skips invalid records, reads as much data as possible, prints a warning to the console, and starts the database.

Reporting problems and getting help

You can report problems and request features on our GitHub.

Alternatively you may get help on our Telegram channel.


How to contribute

Open a pull request to the master branch. A maintainer is responsible for merging the PR.

How to check

Say we have updated 'dockerfiles/alpine_3.9' and want to check it:

$ TAG=2 OS=alpine DIST=3.9 VER=2.x PORT=5200 make -f build
$ docker run -it tarantool/tarantool:2
...perform a test...

Build pipelines

Fixed versions:

Docker tag        Dockerfile
1.10.0 .. 1.10.3  dockerfile/alpine_3.5
1.10.4 .. 1.10.9  dockerfile/alpine_3.9
2.1.0 .. 2.1.2    dockerfile/alpine_3.5
2.1.3             dockerfile/alpine_3.9
2.2.0 .. 2.2.1    dockerfile/alpine_3.5
2.2.2 .. 2.2.3    dockerfile/alpine_3.9
2.3.0             dockerfile/alpine_3.5
2.3.1 .. 2.3.3    dockerfile/alpine_3.9
2.4.0 .. 2.4.3    dockerfile/alpine_3.9
2.5.0 .. 2.5.3    dockerfile/alpine_3.9
2.6.0 .. 2.6.3    dockerfile/alpine_3.9
2.7.0 .. 2.7.2    dockerfile/alpine_3.9
2.8.0 .. 2.8.1    dockerfile/alpine_3.9
2.9.0             dockerfile/alpine_3.9

Rolling versions:

Docker tag  Dockerfile
1           dockerfile/alpine_3.9
2.1 .. 2.8  dockerfile/alpine_3.9
2, latest   dockerfile/alpine_3.9

Special builds:

Docker tag   Dockerfile
1.x-centos7  dockerfile/centos_7
2.x-centos7  dockerfile/centos_7

Release policy

All images are pushed to Docker Hub.

Fixed version tags (x.y.z) are frozen: we never update them.

Example of minor versions timeline:

  • x.y.0 - Alpha
  • x.y.1 - Beta
  • x.y.2 - Stable
  • x.y.3 - Stable

Rolling versions are updated to the last stable fixed version tags:

  • x.y == x.y.<last-z> (== means 'points to the same image')
  • 1 == 1.<last-y>.2
  • 2 == 2.<last-y>.2
  • latest == 2

Special stable builds (CentOS) are updated with the same policy as rolling versions:

  • the 1.x-centos7 image tracks the latest 1.<last-y>.2 release
  • the 2.x-centos7 image tracks the latest 2.<last-y>.2 release

Exceptional cases

As an exception, we can deliver an important update for an existing Tarantool release within x.y.z-r1, x.y.z-r2, ... tags.

When x.y.z-r<N> is released, the corresponding rolling releases (x.y, x and latest if x == 2) should be updated to point to the same image.

There is no strict policy on which updates should be considered important. We decide on demand and will define the policy later.

TBD: How to notify users about the exceptional updates?

How to push an image (for maintainers)


$ export TAG=2
$ export OS=alpine DIST=3.9 VER=2.x  # double check the values!
$ PORT=5200 make -f build
$ docker push tarantool/tarantool:${TAG}

