It's a little study about Kafka and crew.
A simple data lake.
Generate accurate and efficient dataflow to streamline software and data integration.
Consumer Producer Components
Generate Apache Avro schemas for Pydantic data models.
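The idea of deriving Avro schemas from typed Python models can be sketched without any third-party dependency. The snippet below is a minimal, illustrative version that uses stdlib dataclasses in place of Pydantic models and covers only a few primitive types; a real generator would handle optionals, nesting, and logical types:

```python
import dataclasses
import json

# Minimal mapping from Python annotations to Avro primitive types.
# A real Pydantic-based generator would cover many more cases.
_AVRO_TYPES = {int: "long", float: "double", str: "string", bool: "boolean"}

def avro_schema(cls) -> dict:
    """Build an Avro record schema from a dataclass (illustrative only)."""
    fields = [
        {"name": f.name, "type": _AVRO_TYPES[f.type]}
        for f in dataclasses.fields(cls)
    ]
    return {"type": "record", "name": cls.__name__, "fields": fields}

@dataclasses.dataclass
class User:
    id: int
    name: str
    active: bool

print(json.dumps(avro_schema(User)))
```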
The project aims to create a real-time data application using Apache Kafka and Spark for stream processing, with data sourced from Kaggle and stored in Cassandra, prioritizing reliability, scalability, and security.
Define, govern, and model event data for warehouse-first product analytics.
Registry of schemas that use LinkML
Helper to create a compatibility layer between inputs in different formats and other parts of an application
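A compatibility layer like that can be as small as one normalizing function. This is a hedged sketch (the input shapes chosen here are assumptions, not the repo's actual API) that funnels a parsed dict, a JSON string, or a list of key/value pairs into one canonical record:

```python
import json
from typing import Any

def to_record(payload: Any) -> dict:
    """Normalize several input shapes into one canonical dict.

    Accepts an already-parsed dict, a JSON string, or a list of
    (key, value) pairs, so downstream code sees a single format.
    """
    if isinstance(payload, dict):
        return payload
    if isinstance(payload, str):
        return json.loads(payload)
    if isinstance(payload, list):
        return dict(payload)
    raise TypeError(f"unsupported input type: {type(payload).__name__}")

# All three calls yield the same record:
print(to_record({"id": 1}))
print(to_record('{"id": 1}'))
print(to_record([("id", 1)]))
```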
A collection of useful Kafka examples.
Project containing an example of using Kafka | Schema Registry | Avro | Docker | Kafdrop | Producer | Consumer.
Streaming event pipeline with Apache Kafka and its ecosystem to simulate and display the status of train lines in real time
Practice files for Kafka.
Simple JSON Schema Registry based on Django
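The core of such a registry, stripped of the web framework, is subjects mapping to ordered schema versions. A minimal in-memory sketch (names and behavior here are assumptions for illustration, not the Django project's API) that deduplicates byte-identical schemas, as real registries commonly do:

```python
import json

class SchemaRegistry:
    """Tiny in-memory registry: each subject holds ordered schema versions.

    Registering a schema identical to an existing version returns that
    version's number instead of creating a new one.
    """
    def __init__(self):
        self._subjects: dict[str, list[str]] = {}

    def register(self, subject: str, schema: dict) -> int:
        canonical = json.dumps(schema, sort_keys=True)
        versions = self._subjects.setdefault(subject, [])
        if canonical in versions:
            return versions.index(canonical) + 1
        versions.append(canonical)
        return len(versions)

    def latest(self, subject: str) -> dict:
        return json.loads(self._subjects[subject][-1])

reg = SchemaRegistry()
v1 = reg.register("user-value", {"type": "object", "required": ["id"]})
v2 = reg.register("user-value", {"type": "object", "required": ["id", "name"]})
print(v1, v2)  # prints "1 2"
```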
Stream Bitcoin price training data over Apache Kafka, using Schema Registry with the Avro format, and sink it to BigQuery.
Faust dockerized application
Example GitHub Actions for Apache Kafka client application development for local and Confluent Cloud
Set up a development environment with Kafka and Schema Registry with authentication, and add sample data to it, all with Docker.