
Restore stream-table duality description #8995

Merged
merged 4 commits into apache:trunk on Jul 9, 2020

Conversation

JimGalasyn
Contributor

The stream-table duality section was dropped inadvertently sometime after version 0.11.0, so this PR restores it.

In the <code>Kafka Streams DSL</code>, an input stream of an <code>aggregation</code> can be a KStream or a KTable, but the output stream will always be a KTable. This allows Kafka Streams to update an aggregate value upon the out-of-order arrival of further records after the value was produced and emitted. When such out-of-order arrival happens, the aggregating KStream or KTable emits a new aggregate value. Because the output is a KTable, the new value is considered to overwrite the old value with the same key in subsequent processing steps.
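The overwrite semantics described in the restored paragraph can be illustrated with a minimal, self-contained Python sketch (plain Python modeling the concept, not the Kafka Streams API — the function name `aggregate_count` and the sample records are made up for illustration): an aggregation over a keyed stream emits an update record per input, and a downstream table keeps only the latest value per key, so a late-arriving record simply produces a new aggregate that overwrites the old one.

```python
# Sketch of KTable update semantics: this is plain Python illustrating
# the concept described in the docs, NOT the Kafka Streams API.

def aggregate_count(records):
    """Consume a keyed record stream and emit (key, new_count) updates.

    Each emitted update carries the latest aggregate for its key; a
    downstream "table" interprets it as overwriting the previous value.
    """
    table = {}      # materialized view: latest aggregate value per key
    changelog = []  # stream of update records emitted downstream
    for key, _value in records:
        table[key] = table.get(key, 0) + 1
        changelog.append((key, table[key]))
    return table, changelog

# A further (possibly out-of-order) arrival for "alice" triggers another
# update; the table keeps only the newest value for that key.
stream = [("alice", "e1"), ("bob", "e2"), ("alice", "e3")]
table, changelog = aggregate_count(stream)
# table     -> {"alice": 2, "bob": 1}
# changelog -> [("alice", 1), ("bob", 1), ("alice", 2)]
```

This is the stream-table duality in miniature: the changelog is the stream view of the aggregation, and `table` is the table view obtained by replaying that stream and keeping the last value per key.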
Member

Can we avoid those super long lines? Similar below.

Contributor Author

Sure, but that's the style throughout the Kafka docs. :)

Member

Unfortunately, yes, and I try to get it into better shape incrementally (reading GitHub diffs with long lines is just a pain). It would be awesome if somebody (cough) could do a PR just fixing it throughout the docs -- the current lazy approach is somewhat tiring.

Contributor Author

kk cool, I'll open a ticket.

@mjsax mjsax merged commit 7e66848 into apache:trunk Jul 9, 2020
Kvicii pushed a commit to Kvicii/kafka that referenced this pull request Jul 10, 2020
* 'trunk' of github.com:apache/kafka: (24 commits)
  KAFKA-10249: don't try to read un-checkpointed offsets of in-memory stores (apache#8996)
  MINOR: Restore stream-table duality description (apache#8995)
  MINOR: Create ChannelBuilder for each connection in ConnectionStressWorker workload
  KAFKA-10179: Pass correct changelog topic to state serdes (apache#8902)
  KAFKA-10235 Fix flaky transactions_test.py (apache#8981)
  MINOR: Closing consumerGroupService resources in SaslClientsWithInvalidCredentialsTest (apache#8992)
  MINOR: Define the term tombstone, since it's used elsewhere in the docs (apache#3480)
  KAFKA-10109: Fix double AdminClient creation in AclCommand
  KAFKA-10220: Add null check for configurationKey in AdminManager.describeConfigs()
  KAFKA-10225 Increase default zk timeout for system tests (apache#8974)
  MINOR; alterReplicaLogDirs should not fail all the futures when only one call fails (apache#8985)
  KAFKA-10134: Use long poll if we do not have fetchable partitions (apache#8934)
  KAFKA-10191 fix flaky StreamsOptimizedTest (apache#8913)
  KAFKA-10243; ConcurrentModificationException while processing connection setup timeouts (apache#8990)
  KAFKA-10239: Make GroupInstanceId ignorable in DescribeGroups (apache#8989)
  KAFKA-9930: Adjust ReplicaFetcherThread logging when processing UNKNOWN_TOPIC_OR_PARTITION error (apache#8579)
  MINOR: document timestamped state stores (apache#8920)
  KAFKA-10166: checkpoint recycled standbys and ignore empty rocksdb base directory (apache#8962)
  MINOR: prune the metadata upgrade test matrix (apache#8971)
  KAFKA-10017: fix flaky EosBetaUpgradeIntegrationTest (apache#8963)
  ...