Add Sink connector format option #12

Open
jelledv wants to merge 13 commits into master

Conversation

@jelledv commented Dec 21, 2020

Add an option to configure a formatter when writing data to RabbitMQ:

  • Added the sink connector option "rabbitmq.format".
  • Previously you had to use the ByteArrayConverter with the RabbitMQ sink connector. Now you can configure a formatter to write another format to your queue(s); implementations are currently provided for JSON and (non-Confluent) Avro.
  • This change is backwards compatible: when no formatter is provided, the default value "bytes" is used, which requires the "org.apache.kafka.connect.converters.ByteArrayConverter" converter as before (see the configuration sketch below).
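For illustration, a minimal sketch of how the new option could be wired into a sink connector configuration. Everything except `rabbitmq.format` (with its values `bytes`, `json`, `avro`) is an assumption based on a typical kafka-connect-rabbitmq setup; the connector class name and other property names may differ in this repository.

```properties
# Hypothetical sink connector config; only rabbitmq.format is taken from this PR
name=rabbitmq-sink
connector.class=com.github.jcustenborder.kafka.connect.rabbitmq.RabbitMQSinkConnector
topics=rabbitmq-test

# New option added by this PR: bytes (default), json or avro
rabbitmq.format=json

# With a formatter configured you are no longer forced to use the ByteArrayConverter;
# with rabbitmq.format=bytes, keep value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```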

jelledv and others added 13 commits December 18, 2020 10:00
- Move description to the correct property
- Add the two other currently allowed values for the "message.converter" property in the RabbitMQ source connector
- Previously you had to use the ByteArrayConverter with the RabbitMQ sink connector; now you can use other converters, such as the Avro converter, and provide a formatter to convert the Kafka Connect-internal format to, for example, JSON. This can be extended to Avro, Parquet, ... (a formatter sketch follows the commit list)
- Renamed the test topic in the script to "rabbitmq-test", since using dots in topic names is discouraged
- Re-add the config directory to provide a test config you can use when debugging locally with Docker Compose
- Update the README with the new option
- We have a requirement that our RabbitMQ consumers must not depend on our schema registry; instead we give them a schema they can use to deserialize the data written to a queue
- If you want Confluent Avro on the RabbitMQ queue, you can still use org.apache.kafka.connect.converters.ByteArrayConverter after putting Confluent Avro on your topic
- Added unit tests for all formatters
…n one mapping for queue to topic. Previously, only a many-(queues)-to-one-(topic) mapping was possible in the RabbitMQSourceConnector:

- Use a Map to represent the queue and topic config; the old config remains backwards compatible
- Add a queue variable in ConnectConsumer; it is the only way to know which queue it is consuming from. Use a new consumer per topic
…iple tasks. When the setting "rabbitmq.queue.topic.mapping" is not present, the previous implementation is used, where every task consumes all the queues (see the task-assignment sketch after the commit list).
…-single-connector

Add one on one queue topic mapping single connector
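
To make the formatter idea above concrete, here is a minimal sketch of what a "json" formatter could look like. The `RecordFormatter` interface and the class name are hypothetical (the PR's actual abstraction may differ); it only relies on Kafka Connect's built-in `JsonConverter` to turn the connector-internal Schema/Struct value into JSON bytes that a sink task could then publish as the RabbitMQ message body.

```java
import java.util.Collections;
import org.apache.kafka.connect.json.JsonConverter;
import org.apache.kafka.connect.sink.SinkRecord;

// Hypothetical formatter interface; the actual abstraction added by this PR may differ.
interface RecordFormatter {
  byte[] format(SinkRecord record);
}

// Sketch of a "json" formatter: converts the Kafka Connect-internal value
// (Schema + Struct) into plain JSON bytes for the RabbitMQ message body.
class JsonRecordFormatter implements RecordFormatter {

  private final JsonConverter converter = new JsonConverter();

  JsonRecordFormatter() {
    // Emit plain JSON without the schema envelope; "false" marks this as a value converter.
    converter.configure(Collections.singletonMap("schemas.enable", "false"), false);
  }

  @Override
  public byte[] format(SinkRecord record) {
    return converter.fromConnectData(record.topic(), record.valueSchema(), record.value());
  }
}
```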
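
Similarly, a rough sketch of how a one-to-one queue-to-topic mapping could be split across multiple source tasks, as described in the last commits. The "queue1:topic1,queue2:topic2" property syntax and the class/method names are assumptions for illustration only; `ConnectorUtils.groupPartitions` is the standard Kafka Connect helper for distributing work over tasks.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.util.ConnectorUtils;

// Illustration only: parse a hypothetical "queue1:topic1,queue2:topic2" mapping
// and assign queues to tasks, instead of every task consuming all queues.
class QueueTopicAssignment {

  static Map<String, String> parseMapping(String value) {
    Map<String, String> mapping = new LinkedHashMap<>();
    for (String pair : value.split(",")) {
      String[] parts = pair.trim().split(":");
      mapping.put(parts[0], parts[1]); // queue -> topic
    }
    return mapping;
  }

  static List<Map<String, String>> taskConfigs(Map<String, String> mapping, int maxTasks) {
    List<Map<String, String>> configs = new ArrayList<>();
    // Spread the queues evenly over the available tasks.
    for (List<String> queues : ConnectorUtils.groupPartitions(new ArrayList<>(mapping.keySet()), maxTasks)) {
      Map<String, String> taskConfig = new HashMap<>();
      for (String queue : queues) {
        taskConfig.put(queue, mapping.get(queue)); // each task only sees its own queue -> topic pairs
      }
      configs.add(taskConfig);
    }
    return configs;
  }
}
```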