Kafka topic expiration and new consumers #60

Open
vincentbernat opened this issue Dec 8, 2017 · 1 comment

Comments

@vincentbernat
Contributor

Hey!

The documentation doesn't really explain how topic expiration should be handled in Kafka. For example, if all topics are set to expire after 24 hours, new consumers lack most of the information they need to bootstrap. A simple solution is to restart the collector every 24 hours. Maybe openbmpd could force a rolling disconnect instead? Or am I missing another solution?
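For illustration, the 24-hour expiry described above corresponds to a per-topic retention setting such as the following. The topic name is just an example, and the `--bootstrap-server` form applies to newer Kafka releases:

```shell
# Hypothetical example: set a 24-hour retention on an OpenBMP topic.
# 86400000 ms = 24 h; the topic name is illustrative.
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name openbmp.parsed.unicast_prefix \
  --add-config retention.ms=86400000
```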

@TimEvens
Contributor

This is a well-known issue with stateful data and time-series storage (Kafka). Some folks are trying to use InfluxDB or other TSDBs to store BGP data, but they of course hit the same problem when data expires because it wasn't refreshed.

The recommendation is definitely not to restart the collector, router, or peers. That would directly affect the network.

The recommendation is to have a fault-tolerant and correctly in-sync consumer that is always running. This consumer can then sync other consumers with the RIB table for whichever peers are of interest to them. The new consumer can get the offset from the root consumer if it wants to consume live updates after the sync.

The MySQL consumer can be used for this.
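The root-consumer pattern described above can be sketched in a few lines. This is a minimal, in-memory illustration (no actual Kafka client); the class and function names are hypothetical, and a real implementation would consume from Kafka and serve snapshots over some RPC or database (e.g. the MySQL consumer):

```python
# Sketch of the "root consumer" sync pattern: a long-running consumer
# maintains the full RIB state; a new consumer bootstraps from that
# snapshot plus the offset at which it was taken, then applies only
# live updates past that offset.

class RootConsumer:
    """Always-running consumer that keeps the RIB table in sync."""

    def __init__(self):
        self.rib = {}    # prefix -> attributes
        self.offset = 0  # last applied Kafka offset

    def apply(self, offset, prefix, attrs):
        # Withdrawals carry attrs=None; announcements replace the entry.
        if attrs is None:
            self.rib.pop(prefix, None)
        else:
            self.rib[prefix] = attrs
        self.offset = offset

    def snapshot(self):
        # Snapshot + offset lets a new consumer resume without gaps.
        return dict(self.rib), self.offset


def bootstrap_new_consumer(root, live_log):
    """Sync from the root consumer, then replay live updates newer than
    the snapshot offset (live_log: iterable of (offset, prefix, attrs))."""
    rib, offset = root.snapshot()
    for off, prefix, attrs in live_log:
        if off <= offset:
            continue  # already covered by the snapshot
        if attrs is None:
            rib.pop(prefix, None)
        else:
            rib[prefix] = attrs
    return rib
```

The key point is that the snapshot and the offset are taken atomically, so the new consumer neither misses updates nor double-applies ones already reflected in the snapshot.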
