This repository has been archived by the owner on Apr 22, 2022. It is now read-only.
Allow for multiple endpoints and many-to-many mappings of endpoints to sinks #115
Merged
Conversation
Changes include:

- A new configuration layout for supporting multiple sources, sinks and mappings.
- No more enqueuing delay. Queues tend to be either full or empty, and are really only effective at smoothing jitter. Spinning to delay doesn't really help, so we just drop the events immediately instead of blocking.
- No more session-binning strategy for HDFS. In practice we didn't use this, and it's complicated to do properly when multiple sources (each with their own idea of sessions) might be feeding into a sink.

For now, the code only supports the existing single browser-based source and HDFS/Kafka sinks. Configuration validation is also not yet implemented.
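As a rough illustration of the many-to-many layout described above, a configuration in this style could look something like the following HOCON sketch. The section names (`sources`, `mappings`, `sinks`) follow the shape of Divolte's `reference.conf`, but the entry names, the `sources`/`sinks` list syntax and all values here are illustrative assumptions, not taken from this PR:

```hocon
divolte {
  sources {
    // A single browser-based source (the only source type supported so far).
    browser { type = browser }
  }

  mappings {
    // One source can feed multiple mappings, and one mapping can feed
    // multiple sinks; this is where the many-to-many routing is expressed.
    clickstream {
      sources = [browser]
      sinks = [hdfs, kafka]
    }
  }

  sinks {
    hdfs  { type = hdfs }
    kafka { type = kafka }
  }
}
```

The key design point is that the routing lives on the mapping: a mapping declares which sources it consumes and which sinks it writes to, rather than sources and sinks referencing each other directly.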
…mapping exceptions and turn them into useful error messages, and leave the configuration invalid.
# Conflicts: # build.gradle
…operty is not optional. This is to make the constructor args match the reality of the config params, such that we can later use JavaDoc for the documentation. Default values are set using the defaultValue annotation, which is however not interpreted by the deserializer. We will later fix this using a custom deserializer in Jackson.
…or into dev/many-to-many
# Conflicts: # src/main/java/io/divolte/server/kafka/KafkaFlusher.java # src/main/java/io/divolte/server/kafka/KafkaFlushingPool.java # src/main/resources/reference.conf
All users of this class did this anyway.
- Eliminate a bunch of empty configuration files.
- Change the test layout to let tests run against either the default configuration with the test server, or a base convenience configuration.
The browser source requires this or the tracking script won't correctly locate the event end-point.
No trailing period (.) required.
It's invoked by the constructor, and constructors shouldn't normally invoke instance methods, because those methods might not expect the instance to be only partially constructed.
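The hazard mentioned here is a general Java one, not specific to this class: if a constructor calls an overridable instance method, the override runs before the subclass's field initializers have executed. A minimal standalone sketch (the class names are invented for illustration):

```java
// Why constructors should avoid calling overridable instance methods:
// the subclass override executes while the object is only partially
// constructed, so it observes fields in their default (null/zero) state.
class Base {
    Base() {
        init(); // dangerous: dynamically dispatches to the subclass override
    }

    void init() {}
}

class Child extends Base {
    private String name = "configured"; // initializer runs AFTER Base()
    private String seen;

    @Override
    void init() {
        seen = name; // runs during Base(): 'name' is still null here
    }

    String observed() {
        return seen;
    }
}

public class ConstructorHazard {
    public static void main(String[] args) {
        Child c = new Child();
        // 'seen' captured null, even though 'name' is now "configured".
        System.out.println(c.observed()); // prints "null"
    }
}
```

The usual remedies are to make the method `private` or `final`, or to move the call out of the constructor into a static factory method that invokes it on the fully constructed instance.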
… version of Kafka that we're using.
…appings in Divolte.
…ript is specified.
Changes include Sphinx markup, and updates to reflect that we now support more than a single mapping.
Plus a few typo corrections.
Good morning. I'm trying to send two custom events to Kafka; the idea is to send each event to its own topic, but at the moment I receive both events in both topics. I have configured two mappings with different schemas and two sinks (one for each mapping). I don't know what exactly is happening; do you have any idea? Thanks!
Did you find any solution for that? I'm also running into the same issue.
This is a substantial change that allows defining multiple endpoints as event sources, routing events through different mappings (using different schemas), and sending them to different sinks (multiple HDFS locations, multiple Kafka topics).