

Nakadi Event Broker


Nakadi is a distributed event bus that implements a RESTful API abstraction on top of Kafka-like queues. It can be used to send, receive, and analyze streaming data in real time, in a reliable and highly available manner.

One of the most prominent use cases of Nakadi is decoupling microservices by building data streams between producers and consumers.

The main users of Nakadi are developers and analysts. Nakadi provides features such as REST-based integration, multiple consumers, ordered delivery, an interactive UI, fully managed operation, security, data quality assurance, abstraction of big data technology, and push-based consumption.

Nakadi is under active development and runs in production at Zalando as the backbone of our microservices, handling millions of events daily with a throughput of hundreds of gigabytes per second. In one line: Nakadi is a high-scalability data-stream broker for enterprise engineering teams.

Nakadi Deployment Diagram

More detailed information can be found on our website.

Project goal

The goal of Nakadi (ნაკადი means stream in Georgian) is to provide an event broker infrastructure to:

  • Abstract event delivery via a secured RESTful API.

    This allows microservice teams to maintain service boundaries and avoid a direct dependency on any specific message broker technology. Access can be managed individually for every queue and secured using OAuth and custom authorization plugins.

  • Enable convenient development of event-driven applications and asynchronous microservices.

    Event types can be defined with event type schemas and managed via a registry. All events are validated against the schema before publishing, which guarantees data quality and consistency for consumers.

  • Provide efficient, low-latency event delivery.

    Once a publisher sends an event via a simple HTTP POST, events are pushed to consumers over a streaming HTTP connection, allowing near real-time event processing. The consumer connection has keepalive controls and supports managing stream offsets via subscriptions.
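The schema validation guarantee above can be illustrated with a local sketch. This is not Nakadi's actual validator, and the schema and events below are hypothetical; it only shows the idea of checking every event against the event type's JSON Schema before it is accepted for publishing:

```shell
# Hypothetical event type schema requiring a string "order_number".
# Nakadi performs full JSON Schema validation server-side; this is a
# simplified local illustration of the same idea.
python3 - <<'EOF'
import json

schema = json.loads(
    '{"properties": {"order_number": {"type": "string"}},'
    ' "required": ["order_number"]}')

def valid(event):
    # Minimal check of the two constraints above.
    return (all(k in event for k in schema["required"])
            and isinstance(event.get("order_number"), str))

print(valid({"order_number": "ORDER_001"}))  # conforming event -> True
print(valid({"order": 1}))                   # missing field -> False
EOF
```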

Development status

  • Nakadi is ready for high-load production use.
  • Zalando uses Nakadi as its central Event Bus Service.
  • Nakadi reliably handles traffic from thousands of event types with a throughput of hundreds of gigabytes per second.
  • The project is in active development.

Presentations

Features

  • Stream:
    • REST abstraction over Kafka-like queues.
    • CRUD for event types.
    • Event batch publishing.
    • Low-level interface (deprecated).
      • manual client-side partition management is required
      • no support for commits
    • High-level interface (Subscription API).
      • automatic redistribution of partitions between consuming clients
      • commits must be issued to move server-side cursors
  • Schema:
    • Schema registry.
    • Several event type categories (Undefined, Business, Data Change).
    • Several partitioning strategies (Random, Hash, User defined).
    • Event enrichment strategies.
    • Schema evolution.
    • Events validation using an event type schema.
  • Security:
    • OAuth2 authentication.
    • Per-event type authorization.
    • Blacklist of users and applications.
  • Operations:
    • STUPS platform compatible.
    • ZMON monitoring compatible.
    • SLO monitoring.
    • Timelines.
      • This allows transparently switching production and consumption to a different cluster (tier, region, AZ) without moving the actual data and without any service degradation.
      • It also opens the possibility of implementing other streaming technologies and engines besides Kafka (such as AWS Kinesis, Google Pub/Sub, etc.)

Read more about the latest development on the releases page.

Additional features that we plan to cover in the future are:

  • Support for different streaming technologies and engines. Nakadi currently uses Apache Kafka as its broker, but other providers (such as Kinesis) will be possible.
  • Filtering of events for subscribing consumers.
  • Store old published events indefinitely using a transparent fallback to backup storage such as AWS S3.
  • Separate the internal schema registry into a standalone service.
  • Support additional schema formats and protocols such as Avro, Protobuf, and others.

Related projects

The zalando-nakadi organisation contains many useful related projects.

How to contribute to Nakadi

Read our contribution guidelines on how to submit issues and pull requests, then get Nakadi up and running locally using Docker:

Dependencies

The Nakadi server is a Java 8 Spring Boot application. It uses Kafka 1.1.1 as its broker and PostgreSQL 9.5 as its supporting database.

Nakadi requires recent versions of Docker and Docker Compose; in particular, docker-compose >= v1.7.0 is required. See Install Docker Compose for information on installing the most recent docker-compose version.

The project is built with Gradle. The ./gradlew wrapper script will bootstrap the right Gradle version if it's not already installed.

Install

To get the source, clone the git repository.

git clone https://github.com/zalando/nakadi.git

Building

The Gradle setup is fairly standard; the main tasks are:

  • ./gradlew build: run a build and test
  • ./gradlew clean: clean the build

Some other useful tasks are:

  • ./gradlew acceptanceTest: run the ATs
  • ./gradlew fullAcceptanceTest: run the ATs in the context of Docker
  • ./gradlew startNakadi: build Nakadi and start docker-compose services: nakadi, postgresql, zookeeper and kafka
  • ./gradlew stopNakadi: shutdown docker-compose services
  • ./gradlew startStorages: start docker-compose services: postgres, zookeeper and kafka (useful for development purposes)
  • ./gradlew stopStorages: shutdown docker-compose services

For working with an IDE, the eclipse IDE task is available, and you can import build.gradle into IntelliJ IDEA directly.

Running a Server

From the project's home directory you can start Nakadi via Gradle:

./gradlew startNakadi

This will build the project and run docker-compose with four services:

  • Nakadi (8080)
  • PostgreSQL (5432)
  • Kafka (9092)
  • Zookeeper (2181)

To stop the running Nakadi server:

./gradlew stopNakadi

Using Nakadi and its API

Please read the manual for the full API usage details.

Creating Event Types

The Nakadi API allows publishing and consuming events over HTTP. To do this, the producer must register an event type with the Nakadi schema registry.

This example shows a minimal undefined-category event type with a wildcard schema:

curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
  "name": "order.ORDER_RECEIVED",
  "owning_application": "order-service",
  "category": "undefined", 
  "schema": {
    "type": "json_schema",
    "schema": "{ \"additionalProperties\": true }"
  }
}'

Note: this category and schema are not recommended; use them only for testing.
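Because the schema field is itself a JSON-encoded string, a malformed embedded schema is an easy mistake to make. A quick local check that the embedded schema parses, before posting (a sketch using python3 on the event type definition from the example above):

```shell
# The "schema" field holds a JSON Schema as an embedded string; verify
# locally that the embedded string parses before sending the request.
python3 - <<'EOF'
import json

event_type = {
    "name": "order.ORDER_RECEIVED",
    "owning_application": "order-service",
    "category": "undefined",
    "schema": {
        "type": "json_schema",
        "schema": "{ \"additionalProperties\": true }"
    }
}

json.loads(event_type["schema"]["schema"])  # raises an error if invalid
print("schema OK")
EOF
```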

Read more in the manual.

Consuming Events

You can open a stream for an Event Type via the events sub-resource:

curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events 
    

HTTP/1.1 200 OK

{"cursor":{"partition":"0","offset":"82376-000087231"},"events":[{"order_number": "ORDER_001"}]}
{"cursor":{"partition":"0","offset":"82376-000087232"}}
{"cursor":{"partition":"0","offset":"82376-000087232"},"events":[{"order_number": "ORDER_002"}]}
{"cursor":{"partition":"0","offset":"82376-000087233"},"events":[{"order_number": "ORDER_003"}]}

You will see events appear in the stream when you publish them, for example from another console. Records without an events field are keep-alive messages.
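Each line of the stream is a standalone JSON batch object, so consumers can process it line by line. A minimal sketch (using python3 on the sample output above) that extracts events and drops the keep-alives:

```shell
# Feed the sample batches from above through a line-by-line consumer;
# records without an "events" field are keep-alives and are skipped.
cat <<'EOF' | python3 -c '
import json, sys
for line in sys.stdin:
    batch = json.loads(line)
    for event in batch.get("events", []):
        print(event["order_number"])
'
{"cursor":{"partition":"0","offset":"82376-000087231"},"events":[{"order_number": "ORDER_001"}]}
{"cursor":{"partition":"0","offset":"82376-000087232"}}
{"cursor":{"partition":"0","offset":"82376-000087232"},"events":[{"order_number": "ORDER_002"}]}
EOF
```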

Note: this is the low-level API and should be used only for debugging. It is not recommended for production systems; for those, please use the Subscription API.

Publishing Events

Events for an event type can be published by posting to its "events" collection:

curl -v -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
 -H "Content-type: application/json" \
 -d '[{
    "order_number": "24873243241"
  }, {
    "order_number": "24873243242"
  }]'


HTTP/1.1 200 OK  
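Note that the request body is always a JSON array of events, even when publishing a single event. For larger batches the payload can be generated programmatically; a sketch (with hypothetical order numbers) that builds a batch with python3, which would then be posted with the same curl call as above:

```shell
# Build a batch of three hypothetical events; Nakadi's publish endpoint
# always expects a JSON array, even for a single event.
payload=$(python3 -c '
import json
events = [{"order_number": str(24873243241 + i)} for i in range(3)]
print(json.dumps(events))
')
echo "$payload"

# The batch would then be posted exactly as in the example above:
#   curl -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
#        -H "Content-type: application/json" -d "$payload"
```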

Read more in the manual

Contributing

Nakadi accepts contributions from the open-source community.

Please read CONTRIBUTING.md.

Please also note our CODE_OF_CONDUCT.md.

Contact

This email address serves as the main contact address for this project.

Bug reports and feature requests are more likely to be addressed if posted as issues here on GitHub.

License

Please read the full LICENSE

The MIT License (MIT) Copyright © 2015 Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.