From 0a92dfaeaa0abb7a65d831aeae8dab73da3ad6ee Mon Sep 17 00:00:00 2001
From: Dan Swain
Date: Mon, 7 Oct 2019 22:04:01 -0400
Subject: [PATCH] Try to fix markdown issues

---
 README.md  | 64 +++++++++++++++++++++++++++---------------------------
 kayrock.md |  8 +++----
 new_api.md | 28 ++++++++++++------------
 3 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/README.md b/README.md
index b611ea5c..66ad7e1b 100644
--- a/README.md
+++ b/README.md
@@ -19,13 +19,13 @@ documentation, KakfaEx supports the following Kafka features:
 
-* Broker and Topic Metadata
-* Produce Messages
-* Fetch Messages
-* Message Compression with Snappy and gzip
-* Offset Management (fetch / commit / autocommit)
-* Consumer Groups
-* Topics Management (create / delete)
+* Broker and Topic Metadata
+* Produce Messages
+* Fetch Messages
+* Message Compression with Snappy and gzip
+* Offset Management (fetch / commit / autocommit)
+* Consumer Groups
+* Topics Management (create / delete)
 
 See [Kafka Protocol Documentation](http://kafka.apache.org/protocol.html) and
 [A Guide to the Kafka Protocol](https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol)
@@ -35,14 +35,14 @@ for details of these features.
 
 TL;DR:
 
-* This is new implementation and we need people to test it!
-* Set `kafka_version: "kayrock"` to use the new client implementation.
-* The new client should be compatible with existing code when used this way.
-* Many functions now suppoert an `api_version` parameter, see below for details,
-  e.g., how to store offsets in Kafka instead of Zookeeper.
-* Version 1.0 of KafkaEx will be based on Kayrock and have a cleaner API - you
-  can start testing this API by using modules from the `KafkaEx.New` namespace.
-  See below for details.
+* This is a new implementation and we need people to test it!
+* Set `kafka_version: "kayrock"` to use the new client implementation.
+* The new client should be compatible with existing code when used this way.
+* Many functions now support an `api_version` parameter, see below for details,
+  e.g., how to store offsets in Kafka instead of Zookeeper.
+* Version 1.0 of KafkaEx will be based on Kayrock and have a cleaner API - you
+  can start testing this API by using modules from the `KafkaEx.New` namespace.
+  See below for details.
 
 To support some oft-requested features (offset storage in Kafka, message
 timestamps), we have integrated KafkaEx with
@@ -57,17 +57,17 @@ should have a new and cleaner API.
 
 The path we have planned to get to v1.0 is:
 
-1. Add a Kayrock compatibility layer for the existing KafkaEx API (DONE, not released).
-2. Expose Kayrock's API versioning through a select handful of KafkaEx API
-   functions so that users can get access to the most-requested features (e.g.,
-   offset storage in Kafka and message timestamps) (DONE, not released).
-3. Begin designing and implementing the new API in parallel in the `KafkaEx.New`
-   namespace (EARLY PROGRESS).
-4. Incrementally release the new API alongside the legacy API so that early
-   adopters can test it.
-5. Once the new API is complete and stable, move it to the `KafkaEx` namespace
-   (i.e., drop the `New` part) and it will replace the legacy API. This will be
-   released as v1.0.
+1. Add a Kayrock compatibility layer for the existing KafkaEx API (DONE, not released).
+2. Expose Kayrock's API versioning through a select handful of KafkaEx API
+   functions so that users can get access to the most-requested features (e.g.,
+   offset storage in Kafka and message timestamps) (DONE, not released).
+3. Begin designing and implementing the new API in parallel in the `KafkaEx.New`
+   namespace (EARLY PROGRESS).
+4. Incrementally release the new API alongside the legacy API so that early
+   adopters can test it.
+5. Once the new API is complete and stable, move it to the `KafkaEx` namespace
+   (i.e., drop the `New` part) and it will replace the legacy API. This will be
+   released as v1.0.
 
 Users of KafkaEx can help a lot by testing the new code. At first, we need
 people to test the Kayrock-based client using compatibility mode. You can do
@@ -80,14 +80,14 @@ test out the new API as it becomes available.
 
 For more information on using the Kayrock-based client, see
 
-* Github: [kayrock.md](https://github.com/kafka_ex/kafkaex/blob/master/kayrock.md)
-* HexDocs: [kayrock-based client](kayrock.html)
-
+* Github: [kayrock.md](https://github.com/kafka_ex/kafkaex/blob/master/kayrock.md)
+* HexDocs: [kayrock-based client](kayrock.html)
+
 For more information on the v1.0 API, see
 
-* Github:
-  [new_api.md](https://github.com/kafka_ex/kafkaex/blob/master/new_api.md)
-* HexDocs: [New API](new_api.html)
+* Github:
+  [new_api.md](https://github.com/kafka_ex/kafkaex/blob/master/new_api.md)
+* HexDocs: [New API](new_api.html)
 
 ## Using KafkaEx in an Elixir project
 
diff --git a/kayrock.md b/kayrock.md
index a8476001..d7e9e52e 100644
--- a/kayrock.md
+++ b/kayrock.md
@@ -13,10 +13,10 @@ the desired outcomes. The new API will be designed to handle newer versions.
 
 Contents:
 
-* [Using the New Client](#using-the-new-client)
-* [Common Use Case - Store Offsets In
-  Kafka](#common-use-case-store-offsets-in-kafka)
-* [Common Use Case - Message Timestamps / New Storage Format](#common-use-case-message-timestamps-new-storage-format)
+* [Using the New Client](#using-the-new-client)
+* [Common Use Case - Store Offsets In
+  Kafka](#common-use-case-store-offsets-in-kafka)
+* [Common Use Case - Message Timestamps / New Storage Format](#common-use-case-message-timestamps-new-storage-format)
 
 ## Using the New Client
 
diff --git a/new_api.md b/new_api.md
index 2bf4a035..ff7e5a02 100644
--- a/new_api.md
+++ b/new_api.md
@@ -19,28 +19,28 @@ this section up-to-date with respect to what features have been implemented.
 
 Features implemented:
 
-* Get latest offset for a partition as `{:ok, offset}` or `{:error, error_code}`
-  (no more fishing through the response structs).
-* Get metadata for an arbitrary list of topics
+* Get latest offset for a partition as `{:ok, offset}` or `{:error, error_code}`
+  (no more fishing through the response structs).
+* Get metadata for an arbitrary list of topics
 
 ## Major Differences from the Legacy API
 
-* There is currently no supervisor for clients. It is assumed that the user
-  will manage these when not used in a consumer group. (This does not apply to
-  clients started via the legacy `create_worker` API, which are started under the standard
-  supervision tree.)
-* The client does not automatically fetch metadata for all topics as this can
-  lead to timeouts on large clusters. There should be no observable impact here
-  because the client fetches metadata for specific topics on-demand.
-* A client is no longer "attached" to a specific consumer group. In the legacy
-  implementation this was a consequence of the way autocommit was handled.
+* There is currently no supervisor for clients. It is assumed that the user
+  will manage these when not used in a consumer group. (This does not apply to
+  clients started via the legacy `create_worker` API, which are started under the standard
+  supervision tree.)
+* The client does not automatically fetch metadata for all topics as this can
+  lead to timeouts on large clusters. There should be no observable impact here
+  because the client fetches metadata for specific topics on-demand.
+* A client is no longer "attached" to a specific consumer group. In the legacy
+  implementation this was a consequence of the way autocommit was handled.
 
 ## Design Philosophy
 
 Two main design principles in the new client are driven by factors that made
 maintenance of the legacy API difficult:
 
-1. Delegate and genericize API message version handling
+1. Delegate and genericize API message version handling
 
    Kafka API message serialization and deserialization has been externalized
   to a library ([Kayrock](https://github.com/dantswain/kayrock)) that can easily
 
@@ -49,7 +49,7 @@ maintenance of the legacy API difficult:
    to handle specific versions of specific messages at a low level in
    KafkaEx.
 
-2. Separation of connection state management and API logic
+2. Separation of connection state management and API logic
 
    As much as possible, we avoid putting API logic inside the client GenServer.
   Instead, we write functions that form Kayrock request structs based on user