This is a simple project demonstrating a FastAPI server that acts as both a Kafka consumer and producer. The server listens for messages on a specific Kafka topic and writes them to a PostgreSQL database as they arrive, and it exposes a POST endpoint that lets the user publish messages to topics.
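The repository's own code may be organized differently; the following is only a minimal sketch of this architecture. It assumes the `aiokafka` and `psycopg2` libraries, a topic name and a `messages` table that are purely illustrative, and environment variables matching the ones described later in this README.

```python
# Minimal sketch: FastAPI app that consumes from Kafka and writes to PostgreSQL,
# plus a POST endpoint that produces to a topic. Illustrative only.
import asyncio
import os
from contextlib import asynccontextmanager

import psycopg2
from aiokafka import AIOKafkaConsumer, AIOKafkaProducer
from fastapi import FastAPI

KAFKA = os.getenv("KAFKA_HOSTNAME", "localhost") + ":9092"
TOPIC = os.getenv("TOPIC_NAME", "my-topic")  # hypothetical topic name

def db_conn():
    # Connection details mirror the environment variables used later in this README.
    return psycopg2.connect(
        host=os.getenv("DB_HOSTNAME", "localhost"),
        user=os.getenv("DB_USER", "postgres"),
        password=os.getenv("DB_PASSWORD", ""),
        dbname=os.getenv("DB_NAME", "postgres"),
    )

async def consume_forever():
    # Background task: read messages from the topic and insert them into the database.
    consumer = AIOKafkaConsumer(TOPIC, bootstrap_servers=KAFKA)
    await consumer.start()
    conn = db_conn()
    try:
        async for msg in consumer:
            with conn, conn.cursor() as cur:  # commits on successful exit
                cur.execute("INSERT INTO messages (content) VALUES (%s)",
                            (msg.value.decode(),))
    finally:
        conn.close()
        await consumer.stop()

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start the consumer when the server starts, stop it on shutdown.
    task = asyncio.create_task(consume_forever())
    yield
    task.cancel()

app = FastAPI(lifespan=lifespan)

@app.post("/producer/{topicname}")
async def produce(topicname: str, message: str):
    # "message" is taken as a query parameter here; the real app may use a request body.
    producer = AIOKafkaProducer(bootstrap_servers=KAFKA)
    await producer.start()
    try:
        await producer.send_and_wait(topicname, message.encode())
    finally:
        await producer.stop()
    return {"topic": topicname, "message": message}
```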
To get a local copy up and running, follow these simple steps.
The steps below list what you need to run the software and how to install it.
- Ensure Python is installed (version > 3.7)
  ```sh
  python --version
  Python 3.12.3
  ```
- Clone the repo
  ```sh
  git clone https://github.com/Leenoose/FastAPI-Kafka-SQL-Example.git
  cd FastAPI-Kafka-SQL-Example/
  ```
- Create a virtual environment and activate it for a clean starting environment
  ```sh
  python -m venv .venv
  source .venv/bin/activate
  ```
- Install required dependencies
  ```sh
  pip install -r requirements.txt
  ```
- Ensure you have a PostgreSQL instance running. If not, use Podman or Docker to run an image.
  ```sh
  podman run -d --name <db-name> -p 5432:5432 \
    -e POSTGRES_PASSWORD=<mypassword> \
    -v /db-init-script.sql:/docker-entrypoint-initdb.d/init.sql \
    postgres:latest
  ```
  The command works interchangeably with Docker. The `-v` flag mounts `db-init-script.sql` into the container so that it is run at initialization.
- Ensure that you have a Kafka ZooKeeper and Kafka server instance running, with a topic created (a quick connectivity check for both PostgreSQL and Kafka is sketched after this list).
  ```sh
  # On your local machine or a Kafka container
  bin/zookeeper-server-start.sh config/zookeeper.properties

  # In another terminal, on your local machine or another Kafka container
  bin/kafka-server-start.sh config/server.properties

  # In a third terminal, on your local machine or another Kafka container
  bin/kafka-topics.sh --create --topic <event-name> --bootstrap-server localhost:9092
  ```
- Run the FastAPI server
  ```sh
  uvicorn main:app
  ```
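Before starting the server, it can help to confirm that the database and the Kafka broker are reachable. The snippet below is only a sketch of such a check; it assumes the `psycopg2` and `kafka-python` packages, the default local ports, and placeholder names matching the commands above.

```python
# Quick local smoke test: verify PostgreSQL and Kafka are reachable.
# Assumes psycopg2 and kafka-python are installed; names and ports are illustrative.
import psycopg2
from kafka import KafkaProducer

# PostgreSQL: connect with the same credentials used when starting the container.
conn = psycopg2.connect(host="localhost", port=5432,
                        user="postgres", password="<mypassword>", dbname="postgres")
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print("PostgreSQL OK:", cur.fetchone()[0])
    # The project expects a table for incoming messages; a hypothetical schema:
    # CREATE TABLE IF NOT EXISTS messages (id SERIAL PRIMARY KEY, content TEXT);
conn.close()

# Kafka: send a test message to the topic created above.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("<event-name>", b"hello from the smoke test")
producer.flush()
print("Kafka OK: test message sent")
```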
Access the Swagger UI via localhost:8000/docs (or whichever port you are using for this project) and run the /producer/{topicname} POST request.
After doing so, check your database: the message sent in the POST request should appear as a new entry.
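The same request can also be made outside the Swagger UI. The example below is a sketch using the `requests` package; the parameter name and response shape depend on how the endpoint is defined in `main.py`, so treat them as assumptions.

```python
# Illustrative client call to the POST endpoint; the "message" parameter is assumed.
import requests

resp = requests.post(
    "http://localhost:8000/producer/<event-name>",   # substitute your topic name
    params={"message": "hello kafka"},               # may instead be a JSON body in the real app
)
print(resp.status_code, resp.text)
```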
To deploy on Red Hat OpenShift, follow these steps.
- Provisioning Kafka
To provision a Kafka cluster, install the operator AMQ Streams. After doing so, use the Provided APIs to create a Kafka cluster. Feel free to name the cluster however you wish.
- Provision a PostgreSQL instance
Simply add a Database via the Developer Catalog and select PostgreSQL. When doing so, make sure to set a PostgreSQL Connection Username, a PostgreSQL Connection Password, and a PostgreSQL Database Name; these will be used as environment variables later.
- Cloning the project
Add the project to your OpenShift console using the git repo url (https://github.com/Leenoose/FastAPI-Kafka-SQL-Example.git). Use the default import strategy (Dockerfile) for this.
Note that the build for the project will fail at first. This is because the environment variables have not been set yet, so the default values (localhost) are still being used even though they do not apply here. To remedy this, go to Builds, look for the name of this repo (if you did not change it when adding the project), and edit the build config. There should be a section to add environment variables; update the values as follows:
| Name | Value |
| --- | --- |
| KAFKA_HOSTNAME | `<kafka-cluster-url>` |
| DB_HOSTNAME | `<postgres-cluster-url>` |
| DB_USER | PostgreSQL Connection Username |
| DB_PASSWORD | PostgreSQL Connection Password |
| DB_NAME | PostgreSQL Database Name |
For the HOSTNAME variables, refer to the hostname you have under Administrator > Networking > Services. Click into the individual services and copy the value under Service routing.
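For reference, the application would typically read these variables with localhost fallbacks, which is why the build uses the defaults until they are set. The sketch below shows that pattern; the exact variable handling and the ":9092" port suffix are assumptions, not necessarily the repo's code.

```python
# Sketch: how the deployment values typically map onto connection settings.
import os

# Kafka bootstrap address: the Service hostname from Networking > Services plus the broker port
kafka_bootstrap = os.getenv("KAFKA_HOSTNAME", "localhost") + ":9092"

# PostgreSQL DSN assembled from the values entered in the Developer Catalog
dsn = (
    f"host={os.getenv('DB_HOSTNAME', 'localhost')} "
    f"user={os.getenv('DB_USER', 'postgres')} "
    f"password={os.getenv('DB_PASSWORD', '')} "
    f"dbname={os.getenv('DB_NAME', 'postgres')}"
)
```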
Distributed under the MIT License. See LICENSE.txt for more information.