Add documentation #4
Conversation
docs/conf.py
Outdated
Seems like these versions mismatch? major.minor should match, the remaining .patch-SNAPSHOT could possibly differ.
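One common way to keep the two Sphinx values from drifting, as this comment asks for, is to derive the short ``major.minor`` version from the full release string in ``docs/conf.py``. A minimal sketch (the release string below is a placeholder, not this project's actual version):

```python
# Hypothetical docs/conf.py fragment: derive the short X.Y "version"
# from the full "release" string so major.minor can never mismatch.
release = "3.0.1-SNAPSHOT"  # placeholder; real value would come from the build
version = ".".join(release.split(".")[:2])
```

With this pattern only ``release`` is ever edited by hand, and the trailing ``.patch-SNAPSHOT`` part is free to differ.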
docs/elasticsearch_connector.rst
Outdated
    * **Batching and Pipelining**: The connector supports batching and pipelined writing to Elasticsearch.
      It accumulates messages in batches and allows concurrent processing of multiple batches.

    * **Delivery Ordering**: When pipelining is turned off, the connector supports ordering of delivery
Why does this matter? Does Elastic maintain some kind of insert-order on docs? When would ordering of messages be useful? Perhaps you have an example you can illustrate with?
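The trade-off the quoted passage describes can be illustrated with a rough sketch. All names here are illustrative, not the connector's actual API: with pipelining on, multiple batches may be in flight at once; with pipelining off, a new batch is only sent after the previous one is acknowledged, which preserves delivery order.

```python
from collections import deque

class BatchingWriter:
    """Illustrative sketch of batched, optionally pipelined writes."""

    def __init__(self, batch_size, pipelined=True):
        self.batch_size = batch_size
        self.pipelined = pipelined
        self.pending = deque()  # batches sent but not yet acknowledged
        self.current = []       # batch currently being filled
        self.flushed = []       # stand-in for batches sent to Elasticsearch

    def add(self, record):
        self.current.append(record)
        if len(self.current) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.current:
            return
        if not self.pipelined:
            # Ordered mode: drain in-flight batches before sending another,
            # so batches reach Elasticsearch strictly in accumulation order.
            while self.pending:
                self.ack()
        self.pending.append(list(self.current))
        self.flushed.append(list(self.current))
        self.current = []

    def ack(self):
        # Simulate Elasticsearch acknowledging the oldest in-flight batch.
        return self.pending.popleft()
```

Ordering matters, for example, when later messages are updates that overwrite earlier documents with the same key: if batches land out of order, a stale value could overwrite a newer one.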
@ewencp Added more docs. PTAL. Thanks!
docs/elasticsearch_connector.rst
Outdated
    (`use cases <https://www.elastic.co/blog/found-uses-of-elasticsearch>`_). The connector covers
    both the analytics and key-value store use cases. For the analytics use case,
    each message in Kafka is treated as an event and the connector uses ``topic+partition+offset``
    as unique identifiers for events, which are then converted to unique documents in Elasticsearch.
as unique identifiers -> as a unique identifier
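The key scheme in the quoted passage can be shown with a short sketch. The field order and the ``+`` separator follow the ``topic+partition+offset`` notation in the docs; the function name is hypothetical:

```python
def event_doc_id(topic, partition, offset):
    """Build a document id from a Kafka record's coordinates.

    Since (topic, partition, offset) uniquely identifies a message in
    Kafka, the resulting id is unique per event, so replayed or retried
    writes overwrite the same Elasticsearch document instead of
    duplicating it.
    """
    return f"{topic}+{partition}+{offset}"
```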
docs/elasticsearch_connector.rst
Outdated
    1. The connector job that ingests data to the old indices continues writing to the old indices.
    2. Create a new connector job that writes to new indices. This will copy both some old data and
       new data to the new indices as long as the data is in Kafka.
    4. Once the data in the old indices is moved to the new indices by the reindexing process, we
Missing # 3.
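The reindexing process the quoted steps refer to is typically driven by Elasticsearch's reindex API (``POST /_reindex``). A sketch that only constructs the request rather than sending it, since sending requires a running cluster; the index names and URL are placeholders:

```python
import json
import urllib.request

# Placeholder index names; substitute the actual old/new indices.
OLD_INDEX = "events_v1"
NEW_INDEX = "events_v2"

# Body for Elasticsearch's reindex API, which copies documents from the
# old index into the new one server-side.
reindex_body = {
    "source": {"index": OLD_INDEX},
    "dest": {"index": NEW_INDEX},
}

def reindex_request(es_url):
    """Build (but do not send) the HTTP request for the reindex step."""
    return urllib.request.Request(
        f"{es_url}/_reindex",
        data=json.dumps(reindex_body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```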
@Ishiihara Looks good; there are a few cleanups, but LGTM.