This repository contains the source for the collection of Stream Processing Use Cases with ksqlDB.
Landing page: https://developer.confluent.io/ksqldb-recipes/
Recipes: https://confluentinc.github.io/ksqldb-recipes/
Goals of the project:
- Provide short, concrete descriptions of how ksqlDB is used in the real world—including SQL code.
- Make it easy to replicate that code end-to-end, with a 1-click experience to populate the code into the ksqlDB editor in Confluent Cloud Console.
We welcome all contributions, thank you!
Contributing an idea? Submit a GitHub issue.
Contributing a full recipe to be published?
- Self-assign a recipe idea from the list in GitHub issues.
- Create a new branch (based off `main`) for the new recipe.
- Create a new subfolder for the new recipe, e.g. `docs/<industry>/<new-recipe-name>`. Note: `<new-recipe-name>` is the slug in Confluent Cloud. Use hyphens, not underscores.
- The recipe should follow the structure of existing recipes. Copy the contents of an existing recipe (e.g. aviation) or the template directory as the basis for your new recipe.
    - `index.md`: explain the use case and why it matters; add a graphic if available
    - `source.json`: JSON configuration to create Confluent Cloud source connectors to pull from a real end system
    - `source.sql`: SQL equivalent of `source.json` (this file is not referenced today in `index.md`, but it anticipates the ksqlDB-connect integration)
    - `manual.sql`: SQL commands to insert mock data into Kafka topics, if a user does not have a real end system
    - `process.sql`: the core code of the recipe, the SQL commands that correspond to the event stream processing (see the sketch after this list)
    - `sink.json`: (optional) JSON configuration to create Confluent Cloud sink connectors to push results to a real end system
    - `sink.sql`: (optional unless `sink.json` is provided) SQL equivalent of `sink.json` (this file is not referenced today in `index.md`, but it anticipates the ksqlDB-connect integration)
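
To make these roles concrete, here is a minimal, hypothetical sketch of `process.sql` and `manual.sql`. The stream, topic, and column names are invented for illustration; real recipes are more substantial:

```sql
-- process.sql (sketch): register a stream over a Kafka topic and derive an aggregate.
CREATE STREAM orders (order_id INT, item VARCHAR, price DOUBLE)
  WITH (KAFKA_TOPIC = 'orders', VALUE_FORMAT = 'JSON', PARTITIONS = 6);

CREATE TABLE revenue_per_item AS
  SELECT item, COUNT(*) AS order_count, SUM(price) AS revenue
  FROM orders
  GROUP BY item;

-- manual.sql (sketch): mock data for users without a real end system;
-- these statements assume the stream above already exists.
INSERT INTO orders (order_id, item, price) VALUES (1, 'wrench', 19.99);
INSERT INTO orders (order_id, item, price) VALUES (2, 'hammer', 24.50);
```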
- Submit a GitHub Pull Request. Ensure the new recipe adheres to the checklist and then tag confluentinc/devx for review.
A recipe is more compelling if it uses Confluent Cloud fully managed connectors, especially once the ksqlDB-connect integration is ready. But what if the recipe you want to write does not have a connector available in Confluent Cloud? Some options for you to consider, in order of preference:

- Stick with the original recipe idea, but use another connector available in Confluent Cloud that still fits the use case.
- Pick a different recipe, perhaps in the same industry, that uses a connector available in Confluent Cloud. This maximizes the impact of your recipe contribution.
- Stick with your original recipe idea, and use a self-managed connector that runs locally. Follow the precedent steps in this recipe.
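
For reference, `source.sql` declares a fully managed connector with ksqlDB's `CREATE SOURCE CONNECTOR` syntax. Below is a minimal sketch using the Confluent Cloud Datagen source as a stand-in; the connector name, topic, and credential placeholders are illustrative, not prescriptive:

```sql
-- Hypothetical source.sql: a fully managed Datagen source connector.
-- Connector name, topic, and credential placeholders are illustrative only.
CREATE SOURCE CONNECTOR recipe_orders_source WITH (
  'connector.class'    = 'DatagenSource',
  'kafka.api.key'      = '<my-kafka-api-key>',
  'kafka.api.secret'   = '<my-kafka-api-secret>',
  'kafka.topic'        = 'orders',
  'quickstart'         = 'ORDERS',
  'output.data.format' = 'JSON',
  'tasks.max'          = '1'
);
```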
To view your new recipes locally, you can build a local version of the recipes site with `mkdocs`.
- Install `mkdocs` (https://www.mkdocs.org/). On macOS, you can use Homebrew:

  ```
  brew install mkdocs
  pip3 install mkdocs pymdown-extensions
  pip3 install mkdocs-material
  pip3 install mkdocs-exclude
  ```
- Build and serve a local version of the site. In this step, `mkdocs` will give you information if there are any errors in your new recipe file.

  ```
  python3 -m mkdocs serve
  ```

  (If this doesn't work, try `mkdocs serve` on its own.)

- Point a web browser to the local site at http://localhost:8000 and navigate to your new recipe.
If you are a Confluent employee, you can publish using the `mkdocs` GitHub integration. From the `main` branch (in the desired state):

- Run the provided script, `./release.sh`
- After a few minutes, the updated site will be available at https://confluentinc.github.io/ksqldb-recipes/