Conversation

@guozhangwang
Contributor

No description provided.

Contributor

Topic Subscriptions?

Contributor

Maybe we can replace 'must specify a consumer group' with 'must configure a consumer groupId'?
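The wording debate above concerns the consumer `group.id` config. As a minimal sketch (broker address, group name, and class name are hypothetical, not from the PR), these are the properties a consumer would configure; a `group.id` is needed for `subscribe()`-based group management, while manual `assign()` does not require one:

```java
import java.util.Properties;

public class ConsumerGroupConfigSketch {
    // Minimal consumer properties illustrating the review comment:
    // "group.id" is the consumer groupId being discussed.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("group.id", "my-group");                // needed for subscribe()
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("group.id"));
    }
}
```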

@guozhangwang
Copy link
Contributor Author

@hachikuji @jholoman Thanks for the comments, addressed.

@hachikuji
Copy link
Contributor

@guozhangwang LGTM. Did you want to document the timeout behavior of poll(timeout) here or in another issue?

@guozhangwang
Copy link
Contributor Author

I would prefer to hold off on documenting the timeout behavior, since this is an unsolved issue rather than a feature by design, and we are likely going to fix it.

@guozhangwang
Copy link
Contributor Author

@gwenshap could you take a look?

Contributor

I thought configuring consumer groups is not mandatory? (i.e. "consumer can configure a consumer group" but not "must").

@gwenshap
Copy link
Contributor

gwenshap commented Nov 6, 2015

Left a few comments @guozhangwang :)

@guozhangwang
Copy link
Contributor Author

Thanks @gwenshap , incorporated.

@gwenshap
Copy link
Contributor

gwenshap commented Nov 6, 2015

LGTM

@asfgit asfgit closed this in 76bcccd Nov 6, 2015
@guozhangwang guozhangwang deleted the K2745 branch October 7, 2016 21:46
efeg added a commit to efeg/kafka that referenced this pull request Jan 29, 2020
wyuka pushed a commit to wyuka/kafka that referenced this pull request Jan 21, 2023
…ller node (apache#425)

TICKET = LIKAFKA-49304
LI_DESCRIPTION =
Per the slack discussions
https://linkedin-randd.slack.com/archives/C014EKBE170/p1669951054188429?thread_ts=1667860160.319959&cid=C014EKBE170,
the current implementation of broker-to-controller requests results in Unauthorized errors if the cached controller node has been migrated to a different cluster and is still alive.

The impact is that any broker-to-controller requests, including the AlterISR requests, will be blocked, resulting in the permanent inconsistency of ISR info. We should handle such migrated controllers gracefully and use the correct controller instead of the previously cached obsolete controller.

This PR has the following changes:

- Replace the "li.alter.isr.enable" config with the "li.deny.alter.isr" config, since the former is no longer needed and the latter is used to construct an integration test that reproduces the problem above.
- Change the logic in BrokerToControllerRequestThread so that we always query the latest controller node when a request is constructed. This should not degrade performance, since the ControllerNodeProvider is either a MetadataCacheControllerNodeProvider or a RaftControllerNodeProvider, both of which retrieve the controller from the local cache.
EXIT_CRITERIA = When this change is accepted upstream and pulled into this repo.
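The fix described above can be sketched as follows. The names below are illustrative stand-ins, not the actual upstream Kafka classes: instead of caching the controller node once, the request thread asks the provider each time a request is built, so a controller that has migrated is picked up as soon as the local metadata reflects it.

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicReference;

public class ControllerLookupSketch {
    // Hypothetical stand-in for Kafka's ControllerNodeProvider; the real
    // providers (MetadataCacheControllerNodeProvider, RaftControllerNodeProvider)
    // read the controller from a local cache, so querying per request is cheap.
    interface ControllerNodeProvider {
        Optional<String> controllerNode();
    }

    // Illustrative sketch of the fixed request thread: it re-queries the
    // provider for every request instead of reusing a cached controller node.
    static class BrokerToControllerRequestThread {
        private final ControllerNodeProvider provider;

        BrokerToControllerRequestThread(ControllerNodeProvider provider) {
            this.provider = provider;
        }

        String targetForNextRequest() {
            return provider.controllerNode()
                    .orElseThrow(() -> new IllegalStateException("no controller known"));
        }
    }

    public static void main(String[] args) {
        AtomicReference<String> controller = new AtomicReference<>("controller-1:9092");
        BrokerToControllerRequestThread thread =
                new BrokerToControllerRequestThread(() -> Optional.of(controller.get()));

        System.out.println(thread.targetForNextRequest()); // controller-1:9092
        controller.set("controller-2:9092"); // controller migrated to another node
        System.out.println(thread.targetForNextRequest()); // controller-2:9092
    }
}
```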
davide-armand pushed a commit to aiven/kafka that referenced this pull request Dec 1, 2025
The Infinispan cache cluster is in the availability zone. The cache should store only a single copy of each value in the cluster. The effect of this change is that on clusters with more than a single node in the availability zone, there will be an extra network hop to read a cached value from the owning node unless the current node is the owner. This leaves more room for caching recent values, since extra copies are not needed. The best case would be to align the owner node with the consumers consuming the topic-partition; this also requires consistent broker selection for consumers within the availability zone.
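The single-copy behavior described above corresponds to a distributed cache with one owner per entry. A hedged configuration sketch (cache and container names are hypothetical, not from the commit):

```xml
<!-- Sketch of an Infinispan distributed cache keeping a single copy of each
     entry in the cluster: owners="1" means only the owning node stores the
     value, so non-owner nodes pay one extra network hop on reads. -->
<cache-container name="fetch-cache">
    <distributed-cache name="cached-values" owners="1" mode="SYNC"/>
</cache-container>
```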