Tansu

Tansu is a drop-in replacement for Apache Kafka with PostgreSQL, S3 or memory storage engines, without the cost of broker-replicated storage for durability. Licensed under the GNU AGPL. Written in 100% safe 🦺 async 🚀 Rust 🦀

Features:

  • Kafka API compatible
  • Elastic stateless brokers: no more planning and reassigning partitions to a broker
  • Consensus free, without the overhead of Raft or ZooKeeper
  • All brokers are the leader and ISR of any topic partition
  • All brokers are the transaction and group coordinator
  • No network replication or duplicate data storage charges
  • Spin up a broker for the duration of a Kafka API request: no more idle brokers
  • Available with PostgreSQL, S3 or memory storage engines

For data durability, Tansu relies on the storage engine:

S3

Tansu requires that the underlying S3 service supports conditional PUT requests. While AWS S3 now supports conditional writes, that support is limited to not overwriting an existing object. Stateless brokers need a compare-and-set operation, which is not currently available in AWS S3.
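To make the distinction concrete, here is a local simulation of the compare-and-set semantics the broker needs. This is an illustration only, not Tansu's implementation: a temporary file stands in for the S3 object and an md5 hash for its etag, where a real store would express the same check as a conditional PUT (e.g. an If-Match precondition).

```shell
# Simulated object store: one object, its "etag" is the md5 of its content.
store=$(mktemp)
printf 'v1' > "$store"

etag_of() { md5sum "$1" | cut -d' ' -f1; }

# cas_put: write only if the object's etag still matches what we read.
cas_put() {
  expected="$1"; content="$2"
  if [ "$(etag_of "$store")" = "$expected" ]; then
    printf '%s' "$content" > "$store"
    echo "200 OK"
  else
    echo "412 Precondition Failed"   # another writer got in first
  fi
}

etag=$(etag_of "$store")
cas_put "$etag" 'v2'   # succeeds: the object is unchanged since we read it
cas_put "$etag" 'v3'   # fails: our etag is now stale
```

AWS S3's current conditional write (create-only, refuse if the object exists) covers the first write of an object, but not the "replace only if unchanged" step that stateless brokers need for every subsequent update.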

Much like the Kafka protocol, the S3 protocol allows vendors to differentiate, offering different levels of service while retaining compatibility with the underlying API. You can use MinIO, Cloudflare R2 or Tigris, among a number of other vendors that support conditional PUT.

Tansu uses object_store, a crate providing a multi-cloud API for storage. Alternatively, a DynamoDB-based commit protocol can be used to provide conditional write support on AWS S3.

Configuration

The storage-engine parameter is an S3 URL that specifies the bucket to be used. The following configures an S3 storage engine using the "tansu" bucket (full context is in compose.yaml and .env):

Edit .env so that STORAGE_ENGINE is defined as:

STORAGE_ENGINE="s3://tansu/"
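The same URL scheme and environment variables apply to any other provider that supports conditional PUT. For example, a hypothetical Cloudflare R2 configuration would look like the following, where the account ID and credentials are placeholders you would substitute with your own:

```
# Hypothetical R2 configuration -- account ID and keys are placeholders:
STORAGE_ENGINE="s3://tansu/"
AWS_ENDPOINT="https://<account-id>.r2.cloudflarestorage.com"
AWS_ACCESS_KEY_ID="<r2-access-key>"
AWS_SECRET_ACCESS_KEY="<r2-secret>"
```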

On first startup, you'll need to create a bucket, an access key and a secret in MinIO.

Bring MinIO up on its own, without Tansu:

docker compose up -d minio

The MinIO console should now be running on http://localhost:9001; log in with the default credentials (user "minioadmin", password "minioadmin"). Follow the bucket creation instructions to create a bucket called "tansu", and then create an access key and secret. Use the newly created access key and secret to update the following environment variables in .env:

# Your AWS access key:
AWS_ACCESS_KEY_ID="access key"

# Your AWS secret:
AWS_SECRET_ACCESS_KEY="secret"

# The endpoint URL of the S3 service:
AWS_ENDPOINT="http://localhost:9000"

# Allow HTTP requests to the S3 service:
AWS_ALLOW_HTTP="true"

Once this is done, you can start tansu with:

docker compose up -d tansu

Using the regular Apache Kafka CLI, you can create topics, produce and consume messages with Tansu:

kafka-topics \
  --bootstrap-server localhost:9092 \
  --partitions=3 \
  --replication-factor=1 \
  --create --topic test

Producer:

echo "hello world" | kafka-console-producer \
    --bootstrap-server localhost:9092 \
    --topic test

Consumer:

kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic test \
  --from-beginning \
  --property print.timestamp=true \
  --property print.key=true \
  --property print.offset=true \
  --property print.partition=true \
  --property print.headers=true \
  --property print.value=true

List the consumer groups:

kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --list

PostgreSQL

To switch between the MinIO and PostgreSQL examples, first shut down Tansu:

docker compose down tansu

Switch to the PostgreSQL storage engine by updating .env:

# minio storage engine
# STORAGE_ENGINE="s3://tansu/"

# PostgreSQL storage engine -- NB: @db and NOT @localhost :)
STORAGE_ENGINE="postgres://postgres:postgres@db"

Bring Tansu back up:

docker compose up -d tansu

As before, using the regular Apache Kafka CLI, you can create topics, produce and consume messages with Tansu:

kafka-topics \
  --bootstrap-server localhost:9092 \
  --partitions=3 \
  --replication-factor=1 \
  --create --topic test

Producer:

echo "hello world" | kafka-console-producer \
    --bootstrap-server localhost:9092 \
    --topic test

Consumer:

kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic test \
  --from-beginning \
  --property print.timestamp=true \
  --property print.key=true \
  --property print.offset=true \
  --property print.partition=true \
  --property print.headers=true \
  --property print.value=true

Or use librdkafka's example client to produce:

echo "Lorem ipsum dolor..." | \
  ./examples/rdkafka_example -P \
  -t test -p 1 \
  -b localhost:9092 \
  -z gzip

Consumer:

./examples/rdkafka_example \
  -C \
  -t test -p 1 \
  -b localhost:9092

Feedback

Please raise an issue if you encounter a problem.

License

Tansu is licensed under the GNU AGPL.
