Beast


Kafka to BigQuery Sink

Architecture

  • Consumer: Consumes messages from Kafka in batches and pushes these batches to the Read and Commit queues. These are blocking queues, i.e., no more messages are consumed when a queue is full (this is configurable based on the poll timeout).
  • BigQuery Worker: Polls batches from the read queue and pushes them to BigQuery. If the push succeeds, the BQ worker sends an acknowledgement to the Committer.
  • Committer: Receives the acknowledgements of successful pushes to BigQuery from the BQ workers and stores them in a set. The Committer polls the commit queue for message batches; if a batch is present in the set, i.e., it has been successfully pushed to BQ, the Committer commits the max offset for that batch back to Kafka and removes the batch from the commit queue and the set. A simplified sketch of this hand-off is shown below.
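The following is a minimal, illustrative sketch of the blocking-queue and acknowledgement flow described above; class, method, and variable names are assumptions for illustration, not Beast's actual internals.

import java.util.List;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: a batch is represented here as a List<String> of records.
class QueueFlowSketch {
    static final BlockingQueue<List<String>> readQueue = new LinkedBlockingQueue<>(10);
    static final BlockingQueue<List<String>> commitQueue = new LinkedBlockingQueue<>(10);
    // Acknowledgements of batches that were successfully pushed to BigQuery.
    static final Set<List<String>> acknowledgements = ConcurrentHashMap.newKeySet();

    // Consumer: put() blocks when a queue is full, so consumption pauses until workers catch up.
    static void onConsume(List<String> batch) throws InterruptedException {
        readQueue.put(batch);
        commitQueue.put(batch);
    }

    // BigQuery worker: push a batch to BigQuery, then acknowledge it to the committer.
    static void bqWorkerStep() throws InterruptedException {
        List<String> batch = readQueue.take();
        // pushToBigQuery(batch);  // streaming insert, omitted in this sketch
        acknowledgements.add(batch);
    }

    // Committer: commit the head of the commit queue only once it has been acknowledged.
    static void committerStep() {
        List<String> head = commitQueue.peek();
        if (head != null && acknowledgements.remove(head)) {
            commitQueue.poll();
            // commitMaxOffsetToKafka(head);  // e.g. commitSync with the batch's max offset
        }
    }
}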



Building & Running

Prerequisite

  • A Kafka cluster with messages pushed in proto format, which Beast can consume
  • A BigQuery project with streaming-insert permission
  • A BigQuery table created for the message proto
  • A configuration with the column mapping for the above table, set in the env file
  • The env file updated with BigQuery, Kafka, and application parameters

Run locally:

git clone https://github.com/gojekfarm/beast
export $(cat ./env/sample.properties | xargs -L1) && gradle clean runConsumer

Run with Docker

The image is available on the gojektech Docker Hub.

export TAG=80076c77dc8504e7c758865602aca1b05259e5d3
docker run --env-file beast.env -v ./local_dir/project-secret.json:/var/bq-secret.json -it gojektech/beast:$TAG
  • -v mounts the local secret file project-secret.json to the mentioned location inside the container; GOOGLE_CREDENTIALS should point to the same /var/bq-secret.json, which is used for BQ authentication.
  • TAG: you can update the tag if you want the latest image; the tag mentioned above is well tested.

Running on Kubernetes

Create a Beast deployment for a Kafka topic which needs to be pushed to BigQuery.

  • A deployment can have multiple instances of Beast
  • A Beast container consists of the following threads:
    • A Kafka consumer
    • Multiple BQ workers
    • A committer
  • The deployment also includes a telegraf container which pushes stats metrics. Follow the instructions in the chart for helm deployment

BQ Setup:

Given a TestMessage proto file, you can create a BigQuery table with its schema:

# create new table from schema
bq mk --table <project_name>:dataset_name.test_messages ./docs/test_messages.schema.json

# query total records
bq query --nouse_legacy_sql 'SELECT count(*) FROM `<project_name>.dataset_name.test_messages` LIMIT 10'

#  update bq schema from local schema json file
bq update --format=prettyjson <project_name>:dataset_name.test_messages  booking.schema

# dump the schema of table to file
bq show --schema --format=prettyjson <project_name>:dataset_name.test_messages > test_messages.schema.json
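
To sanity-check the table and the streaming-insert permission, you can stream a test row in with the google-cloud-bigquery Java client. This is only a minimal sketch; the dataset, table, and column names are placeholders for your own.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import java.util.HashMap;
import java.util.Map;

public class StreamingInsertCheck {
    public static void main(String[] args) {
        // Authenticates with the default credentials (e.g. the mounted service-account JSON).
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TableId table = TableId.of("dataset_name", "test_messages");

        // Column names are placeholders; use the columns from your table's schema.
        Map<String, Object> row = new HashMap<>();
        row.put("some_column", "some_value");

        InsertAllResponse response = bigquery.insertAll(
                InsertAllRequest.newBuilder(table).addRow(row).build());
        if (response.hasErrors()) {
            System.err.println("Insert errors: " + response.getInsertErrors());
        }
    }
}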

Produce messages to Kafka

You can generate messages defined by TestMessage.proto with sample-kafka-producer, which pushes N messages to a Kafka topic.
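
Alternatively, a minimal producer sketch is below. It assumes the Java class generated from TestMessage.proto is on the classpath; the broker address, topic name, and field setter are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProtoProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // TestMessage is the class generated from TestMessage.proto; the setter
            // below stands in for whatever fields the proto actually defines.
            byte[] payload = TestMessage.newBuilder()
                    .setSomeField("some-value")
                    .build()
                    .toByteArray();
            producer.send(new ProducerRecord<>("test-topic", payload)); // placeholder topic
        }
    }
}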

Running Stencil Server

  • Run the shell script ./run_descriptor_server.sh to build the descriptor in the build directory and serve it with a Python server on :8000
  • The stencil URL can then be configured to point to http://localhost:8000/messages.desc (a quick way to inspect what this endpoint serves is sketched below)
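
Assuming the served file is a standard protoc descriptor set, you can fetch and inspect it with the protobuf Java library; this is only an illustrative check, not part of Beast itself.

import com.google.protobuf.DescriptorProtos.FileDescriptorSet;
import java.io.InputStream;
import java.net.URL;

public class DescriptorCheck {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the descriptor set built by run_descriptor_server.sh.
        try (InputStream in = new URL("http://localhost:8000/messages.desc").openStream()) {
            FileDescriptorSet descriptors = FileDescriptorSet.parseFrom(in);
            descriptors.getFileList().forEach(file -> System.out.println(file.getName()));
        }
    }
}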

Contribution

  • You can raise issues or ask questions for clarification
  • You can raise a PR for any feature or issue. To run and test locally:
git clone https://github.com/gojekfarm/beast
export $(cat ./env/sample.properties | xargs -L1) && ./gradlew test
  • You can help us improve the documentation