consumers sometimes see message timestamp as -1 #96
Comments
Are you using LogAppendTime (broker timestamps) or LogCreateTime (producer timestamps)? |
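For context on the two timestamp modes: a sketch of how confluent-kafka-python reports them. `Message.timestamp()` returns a `(type, value)` tuple, and a value of -1 is the "not available" sentinel. The constant values below are assumed to mirror `confluent_kafka.TIMESTAMP_*`; the helper itself is illustrative, not part of either client.

```python
# Assumed values of confluent_kafka's timestamp-type constants.
TIMESTAMP_NOT_AVAILABLE = 0    # message carried no timestamp
TIMESTAMP_CREATE_TIME = 1      # set by the producer (LogCreateTime)
TIMESTAMP_LOG_APPEND_TIME = 2  # set by the broker (LogAppendTime)

def describe_timestamp(ts_type, ts_ms):
    """Human-readable description of a (type, value) timestamp pair."""
    if ts_type == TIMESTAMP_NOT_AVAILABLE or ts_ms == -1:
        # -1 means no timestamp was present, e.g. the producer fell
        # back to a pre-0.10 message format that has no timestamp field.
        return "no timestamp"
    kind = "create" if ts_type == TIMESTAMP_CREATE_TIME else "log-append"
    return "%s time: %d ms since epoch" % (kind, ts_ms)
```

With a real consumer you would call `ts_type, ts_ms = msg.timestamp()` on each polled message and feed the pair to a helper like this.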
I use |
Also I don't use compression |
Make sure to set |
Yep, it's already turned on. |
Have you verified the consumer side with any other client? |
Nope. Well, I'm going to try again to switch this producer to confluent-kafka-python then. |
I thought the producer was already on confluent-kafka-python, but the consumer was on kafka-python? |
Yes. |
Ah, sorry, yeah, I meant "switch this consumer to confluent-kafka-python" |
Did you try switching the consumer to confluent-kafka-python, and if so, did it fix the receive timestamps? |
@fillest if it is a problem with But definitely first verify using another client, such as the confluent-kafka-python consumer |
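A minimal sketch of the suggested cross-check with a confluent-kafka-python consumer. The bootstrap address, group id, and topic name are placeholders, and the loop is shown in comments since it needs a live broker; only the config dict is concrete.

```python
# Placeholder connection settings for the verification consumer.
conf = {
    'bootstrap.servers': 'localhost:9092',  # assumption: local broker
    'group.id': 'timestamp-check',          # hypothetical group id
    'auto.offset.reset': 'earliest',
}

# With a broker available, the check would look like:
#   from confluent_kafka import Consumer
#   c = Consumer(conf)
#   c.subscribe(['my-topic'])               # hypothetical topic name
#   while True:
#       msg = c.poll(1.0)
#       if msg is None or msg.error():
#           continue
#       ts_type, ts = msg.timestamp()       # ts == -1 means not available
#       print(ts_type, ts)
```

If this consumer sees valid timestamps where kafka-python reports -1, the problem is on the consuming side rather than in the produced messages.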
Closing due to inactivity. |
I am facing the same issue. We are using
Temporarily fixed the issue by using a different |
What's the broker version? (2.11 is the scala version) |
2.11 is the Kafka version, this one |
So that's Kafka 2.2.0 (built for Scala 2.11). On repeated connection failures the producer can downgrade to an older protocol feature set (see https://github.com/edenhill/librdkafka/wiki/Broker-version-compatibility). To circumvent this behaviour, set the following additional config params: 'api.version.fallback.ms': 0, 'broker.version.fallback': '2.2.0' |
Thanks. Shall try that. |
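The two fallback parameters from the comment above, dropped into a producer config. This is a sketch: the bootstrap address is a placeholder, and `broker.version.fallback` should match your actual broker release.

```python
# Producer settings that pin the negotiated protocol version
# (values taken from the maintainer's comment in this thread).
conf = {
    'bootstrap.servers': 'localhost:9092',  # placeholder address
    'api.version.fallback.ms': 0,           # never time out into the legacy fallback
    'broker.version.fallback': '2.2.0',     # assume at least this broker version
}

# With confluent-kafka-python you would then create the producer with:
#   from confluent_kafka import Producer
#   producer = Producer(conf)
```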
Hi @edenhill, we are using the following We are using a Java streams consumer. Would the above fix work for us as well? Should it be added both in the producer (Python) and the consumer (Java)? I assume the producer already has it as a default? |
This configuration does not apply to the Java consumer. But with confluent-kafka-python it really shouldn't be needed; it already uses those settings by default. |
No. We are not using a custom timestamp to produce. It works fine for most of the time and then suddenly starts publishing these messages with negative timestamps. |
Do you know if the sudden change follows a reconnect to the brokers or any other trackable event? |
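One way to make reconnects trackable is librdkafka's statistics callback, which both clients expose. A hedged sketch: the field layout follows the librdkafka statistics JSON (per-broker `connects` counters), and the producer wiring is shown only in comments since it needs a broker.

```python
import json

def count_connects(stats_json):
    """Sum the per-broker connection counters from a librdkafka stats blob."""
    stats = json.loads(stats_json)
    return sum(b.get('connects', 0) for b in stats.get('brokers', {}).values())

# With confluent-kafka-python you would enable it roughly like this:
#   def on_stats(stats_json):
#       print('total connects so far:', count_connects(stats_json))
#   p = Producer({'bootstrap.servers': 'localhost:9092',
#                 'statistics.interval.ms': 5000,
#                 'stats_cb': on_stats})
```

A jump in the counter around the time the -1 timestamps appear would support the reconnect-triggered-downgrade theory.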
Could possibly be. Didn't monitor it very closely. Had one of the zookeeper nodes restart and one of the brokers wasn't recognized. |
The only reason a client will stop sending timestamps is if it downgrades the connection to an older protocol version, and this shouldn't happen with v1.0.0. |
How did you set the timestamp type configuration? It's not one of the configs on topic level. |
confluent-kafka==0.9.2, librdkafka 0.9.2, Python 2.7.6, Ubuntu 14.04, kafka 0.10.1.0
All producers run the same code and run on similar hosts. The consumer uses kafka-python==1.3.1 (instead of confluent-kafka), and record.timestamp sometimes is OK and sometimes (quite often) it is -1.
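Since the reporter's consumer is kafka-python, note that it encodes timestamps differently from confluent-kafka-python: a `ConsumerRecord` carries `timestamp` (ms) and `timestamp_type`, where, as I understand it, 0 means CreateTime and 1 means LogAppendTime (there is no separate "not available" type; -1 in the value marks a missing timestamp). A small illustrative helper:

```python
# Assumed kafka-python timestamp_type values (they differ from
# confluent-kafka-python's TIMESTAMP_* constants).
CREATE_TIME = 0
LOG_APPEND_TIME = 1

def classify_record_timestamp(timestamp, timestamp_type):
    """Classify a kafka-python ConsumerRecord's timestamp fields."""
    if timestamp is None or timestamp == -1:
        # The -1 sentinel this issue is about: no timestamp in the message.
        return "missing"
    return "create" if timestamp_type == CREATE_TIME else "log-append"
```

In a consumer loop this would be called as `classify_record_timestamp(record.timestamp, record.timestamp_type)` to separate the healthy records from the -1 ones.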