This is the code repository for Apache Kafka 1.0 Cookbook, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.
Apache Kafka provides a unified, high-throughput, low-latency platform for handling real-time data feeds. This book will show you how to use Kafka efficiently, and contains practical solutions to the common problems that developers and administrators face while working with it.
This practical guide contains easy-to-follow recipes to help you set up, configure, and use Apache Kafka in the best possible manner. You will use Apache Kafka Consumers and Producers to build effective real-time streaming applications. The book covers the recently released Kafka version 1.0, the Confluent Platform, and Kafka Streams. The programming aspects covered in the book will teach you how to perform important tasks such as message validation, enrichment, and composition. Recipes focusing on optimizing the performance of your Kafka cluster, and on integrating Kafka with a variety of third-party tools such as Apache Hadoop, Apache Spark, and Elasticsearch, will greatly ease your day-to-day work with Kafka. Finally, we cover tasks related to monitoring and securing your Apache Kafka cluster using tools such as Ganglia and Graphite.
All of the code is organized into folders. Each folder is named with a number corresponding to its chapter, for example, Chapter02.
The code will look like the following:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic SNSBTopic
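Once a topic is created, the same script can confirm it exists and inspect its configuration. A minimal sketch, assuming a local ZooKeeper on the default port and the `SNSBTopic` topic from the command above:

```shell
# List all topics on the cluster; SNSBTopic should appear in the output
> bin/kafka-topics.sh --list --zookeeper localhost:2181

# Show the partition count, replication factor, and replica assignments
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic SNSBTopic
```

Both commands require a running ZooKeeper and Kafka broker, so run them from the Kafka installation directory after starting the cluster.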
The reader should have some experience in programming with Java and some experience with Linux/Unix operating systems. The minimum configuration needed to execute the recipes in this book is: an Intel® Core i3 processor, 4 GB of RAM, and 128 GB of disk space. It is recommended to use Linux or macOS; Windows is not fully supported.