Try to fix markdown issues
dantswain committed Oct 8, 2019
1 parent bef84f6 commit 0a92dfa
Showing 3 changed files with 50 additions and 50 deletions.
64 changes: 32 additions & 32 deletions README.md

KafkaEx supports the following Kafka features:

* Broker and Topic Metadata
* Produce Messages
* Fetch Messages
* Message Compression with Snappy and gzip
* Offset Management (fetch / commit / autocommit)
* Consumer Groups
* Topics Management (create / delete)

See [Kafka Protocol Documentation](http://kafka.apache.org/protocol.html) and
[A Guide to the Kafka Protocol](https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol)
for details of these features.

TL;DR:

* This is a new implementation and we need people to test it!
* Set `kafka_version: "kayrock"` to use the new client implementation.
* The new client should be compatible with existing code when used this way.
* Many functions now support an `api_version` parameter; see below for details,
  e.g., how to store offsets in Kafka instead of Zookeeper.
* Version 1.0 of KafkaEx will be based on Kayrock and have a cleaner API - you
can start testing this API by using modules from the `KafkaEx.New` namespace.
See below for details.
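
The opt-in flag above lives in the application config. A minimal sketch of `config/config.exs`, assuming a local broker on the default port (the broker list is a placeholder, not a prescription):

```elixir
# config/config.exs
import Config

config :kafka_ex,
  # Placeholder broker list; point this at your own cluster.
  brokers: [{"localhost", 9092}],
  # Opt in to the new Kayrock-based client implementation.
  kafka_version: "kayrock"
```

With this set, existing code that goes through the legacy `KafkaEx` API should keep working unchanged.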

To support some oft-requested features (offset storage in Kafka, message
timestamps), we have integrated KafkaEx with
should have a new and cleaner API.

The path we have planned to get to v1.0 is:

1. Add a Kayrock compatibility layer for the existing KafkaEx API (DONE, not released).
2. Expose Kayrock's API versioning through a select handful of KafkaEx API
functions so that users can get access to the most-requested features (e.g.,
offset storage in Kafka and message timestamps) (DONE, not released).
3. Begin designing and implementing the new API in parallel in the `KafkaEx.New`
namespace (EARLY PROGRESS).
4. Incrementally release the new API alongside the legacy API so that early
adopters can test it.
5. Once the new API is complete and stable, move it to the `KafkaEx` namespace
(i.e., drop the `New` part) and it will replace the legacy API. This will be
released as v1.0.

Users of KafkaEx can help a lot by testing the new code. At first, we need
people to test the Kayrock-based client using compatibility mode. You can do
test out the new API as it becomes available.

For more information on using the Kayrock-based client, see

* Github: [kayrock.md](https://github.com/kafka_ex/kafkaex/blob/master/kayrock.md)
* HexDocs: [kayrock-based client](kayrock.html)

For more information on the v1.0 API, see

* Github:
[new_api.md](https://github.com/kafka_ex/kafkaex/blob/master/new_api.md)
* HexDocs: [New API](new_api.html)

## Using KafkaEx in an Elixir project
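
As a sketch, pulling KafkaEx into a Mix project is an ordinary dependency declaration (the version requirement below is an assumption; check Hex for the current release):

```elixir
# mix.exs
defp deps do
  [
    # Version requirement is illustrative; check Hex for the current release.
    {:kafka_ex, "~> 0.10"}
  ]
end
```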

8 changes: 4 additions & 4 deletions kayrock.md
the desired outcomes. The new API will be designed to handle newer versions.

Contents:

* [Using the New Client](#using-the-new-client)
* [Common Use Case - Store Offsets In
Kafka](#common-use-case-store-offsets-in-kafka)
* [Common Use Case - Message Timestamps / New Storage Format](#common-use-case-message-timestamps-new-storage-format)

## Using the New Client
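
A hedged sketch of compatibility mode, assuming `kafka_version: "kayrock"` is set in the application config and that the legacy entry points behave as documented (the topic and worker names are placeholders):

```elixir
# Start a named worker backed by the Kayrock-based client.
{:ok, _pid} = KafkaEx.create_worker(:kayrock_worker)

# The legacy produce/fetch API should work unchanged in compatibility mode.
KafkaEx.produce("test_topic", 0, "hello", worker_name: :kayrock_worker)

responses = KafkaEx.fetch("test_topic", 0, offset: 0, worker_name: :kayrock_worker)
```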

28 changes: 14 additions & 14 deletions new_api.md
this section up-to-date with respect to what features have been implemented.

Features implemented:

* Get latest offset for a partition as `{:ok, offset}` or `{:error, error_code}`
(no more fishing through the response structs).
* Get metadata for an arbitrary list of topics
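
A sketch of the first feature. The module and function names here (`KafkaEx.New.Client`, `KafkaEx.New.KafkaExAPI.latest_offset/3`) are assumptions inferred from the `KafkaEx.New` namespace, so check the module docs before relying on them:

```elixir
{:ok, client} = KafkaEx.New.Client.start_link()

# Returns {:ok, offset} or {:error, error_code} directly --
# no fishing through response structs.
case KafkaEx.New.KafkaExAPI.latest_offset(client, "test_topic", 0) do
  {:ok, offset} -> IO.puts("latest offset: #{offset}")
  {:error, error_code} -> IO.inspect(error_code, label: "error")
end
```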

## Major Differences from the Legacy API

* There is currently no supervisor for clients. It is assumed that the user
will manage these when not used in a consumer group. (This does not apply to
clients started via the legacy `create_worker` API, which are started under the standard
supervision tree.)
* The client does not automatically fetch metadata for all topics as this can
lead to timeouts on large clusters. There should be no observable impact here
because the client fetches metadata for specific topics on-demand.
* A client is no longer "attached" to a specific consumer group. In the legacy
implementation this was a consequence of the way autocommit was handled.
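
Because there is no built-in supervisor for standalone clients, one option is to start the client from your own supervision tree. The child spec below is a hypothetical sketch; the actual `start_link` options for the new client are not shown in this document:

```elixir
# Hypothetical: supervise a new-API client in your own application.
children = [
  %{
    id: :my_kafka_client,
    start: {KafkaEx.New.Client, :start_link, [[name: :my_kafka_client]]}
  }
]

Supervisor.start_link(children, strategy: :one_for_one)
```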

## Design Philosophy

Two main design principles in the new client are driven by factors that made
maintenance of the legacy API difficult:

1. Delegate and genericize API message version handling

Kafka API message serialization and deserialization has been externalized to
a library ([Kayrock](https://github.com/dantswain/kayrock)) that can easily
to handle specific versions of specific messages at a low level in KafkaEx.


2. Separation of connection state management and API logic

As much as possible, we avoid putting API logic inside the client GenServer.
Instead, we write functions that form Kayrock request structs based on user
