kafka --- Notes on a consumer consumption failure #97

AronChung opened this issue Jan 22, 2021 · 0 comments
Arriving at the office in the morning, I found nearly a hundred alert messages, which clearly indicated something abnormal. Checking the alert conditions showed that the data in the database had not been updated.
Since the database is kept in sync by inserting records after a consumer consumes them, I narrowed the problem down to the consumer.
Logging into Kafka Eagle, I saw messages piling up with no consumption happening.
[Screenshot: Kafka Eagle showing the message backlog with no active consumption]
Checking the logs revealed:

org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

Two parameters are involved:

  1. max.poll.interval.ms
  2. max.poll.records

When processing a batch of max.poll.records records takes longer than max.poll.interval.ms, the consumer is judged to have failed, a rebalance is triggered, and the subsequent commit then fails with the exception above.
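The timing rule above can be sketched without the Kafka client at all (a minimal simulation, not Kafka's actual coordinator code): if one poll() batch outlives max.poll.interval.ms, the group coordinator evicts the consumer and the next commit is rejected.

```python
# Defaults taken from the Kafka consumer configuration reference.
MAX_POLL_INTERVAL_MS = 300_000  # 5 minutes
MAX_POLL_RECORDS = 500

def batch_exceeds_interval(per_record_ms: float,
                           max_poll_records: int = MAX_POLL_RECORDS,
                           max_poll_interval_ms: int = MAX_POLL_INTERVAL_MS) -> bool:
    """True if processing a full batch would take longer than the poll
    interval, i.e. the condition that triggers the rebalance described above."""
    return per_record_ms * max_poll_records > max_poll_interval_ms

# 700 ms/record * 500 records = 350,000 ms > 300,000 ms -> rebalance risk
# 100 ms/record * 500 records =  50,000 ms              -> safe
```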

Troubleshooting approach: measure how long consuming a single record takes, estimate the total time for a batch of max.poll.records, and make sure max.poll.interval.ms is large enough to cover it, while also ensuring messages do not pile up and can be consumed in time.
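That sizing exercise can be written down as two small helper calculations (a sketch with an assumed safety factor, not a Kafka API): given a measured per-record latency, derive either a safe interval or a safe batch size.

```python
def safe_poll_interval_ms(per_record_ms: float, max_poll_records: int,
                          headroom: float = 2.0) -> int:
    """Suggested max.poll.interval.ms: worst-case batch time times a
    safety factor (headroom=2.0 is an arbitrary assumption here)."""
    return int(per_record_ms * max_poll_records * headroom)

def max_records_for_interval(per_record_ms: float, max_poll_interval_ms: int,
                             headroom: float = 2.0) -> int:
    """Largest max.poll.records that still fits inside a fixed interval,
    with the same safety factor applied."""
    return max(1, int(max_poll_interval_ms / (per_record_ms * headroom)))
```

For example, at 50 ms per record with the default 500-record batch, the interval should be at least 50,000 ms; conversely, keeping the default 300,000 ms interval allows up to 3,000 records per poll at that latency.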

Fix: reduce max.poll.records, or increase max.poll.interval.ms.
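In consumer configuration terms that looks like the fragment below; the values are purely illustrative and should be tuned to the per-record processing time measured above, not copied as-is.

```properties
# consumer.properties -- example values only
max.poll.records=100          # smaller batch returned by each poll()
max.poll.interval.ms=600000   # allow up to 10 minutes between poll() calls
```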
