
high rate producer error #959

Closed

lghinet opened this issue Oct 3, 2017 · 9 comments

Comments

lghinet commented Oct 3, 2017

Versions

Please specify real version numbers or git SHAs, not just "Latest" since that changes fairly regularly.
Sarama Version: latest
Kafka Version: 0.11
Go Version: 1.9 / windows

Configuration

What configuration values are you using for Sarama and Kafka?

(The Sarama configuration is shown in the snippet under Problem Description; the broker is on defaults, per the comments below.)

Logs
2017/10/03 18:07:34 client/brokers registered new broker #11 at 10.1.3.90:9094
2017/10/03 18:07:34 client/brokers registered new broker #10 at 10.1.3.86:9094
2017/10/03 18:07:34 client/brokers registered new broker #9 at 10.1.3.88:9094
2017/10/03 18:07:34 Successfully initialized new client
2017/10/03 18:07:34 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2017/10/03 18:07:34 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2017/10/03 18:07:34 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2017/10/03 18:07:34 producer/broker/11 starting up
2017/10/03 18:07:34 producer/broker/10 starting up
2017/10/03 18:07:34 producer/broker/9 starting up
2017/10/03 18:07:34 producer/broker/11 state change to [open] on go2_teskbb/0
2017/10/03 18:07:34 producer/broker/11 state change to [open] on go2_teskbb/9
2017/10/03 18:07:34 producer/broker/11 state change to [open] on go2_teskbb/6
2017/10/03 18:07:34 producer/broker/11 state change to [open] on go2_teskbb/3
2017/10/03 18:07:34 producer/broker/11 state change to [open] on go2_teskbb/12
2017/10/03 18:07:34 producer/broker/9 state change to [open] on go2_teskbb/4
2017/10/03 18:07:34 producer/broker/10 state change to [open] on go2_teskbb/14
2017/10/03 18:07:34 producer/broker/9 state change to [open] on go2_teskbb/1
2017/10/03 18:07:34 producer/broker/10 state change to [open] on go2_teskbb/8
2017/10/03 18:07:34 producer/broker/10 state change to [open] on go2_teskbb/11
2017/10/03 18:07:34 producer/broker/10 state change to [open] on go2_teskbb/2
2017/10/03 18:07:34 producer/broker/9 state change to [open] on go2_teskbb/10
2017/10/03 18:07:34 producer/broker/10 state change to [open] on go2_teskbb/5
2017/10/03 18:07:34 producer/broker/9 state change to [open] on go2_teskbb/13
2017/10/03 18:07:34 producer/broker/9 state change to [open] on go2_teskbb/7
2017/10/03 18:07:34 Connected to broker at 10.1.3.90:9094 (registered as #11)
2017/10/03 18:07:34 Connected to broker at 10.1.3.86:9094 (registered as #10)
2017/10/03 18:07:34 Connected to broker at 10.1.3.88:9094 (registered as #9)
ERROR: Failed to produce message: kafka server: Message was too large, server rejected it to avoid allocation error.
......

Problem Description

Running the simple code below, where I try to produce 10 million very small messages, I get the following error, and memory usage goes through the roof at around 4 GB.
I tried #805, but without much success.

ERROR: Failed to produce message: kafka server: Message was too large, server rejected it to avoid allocation error.

config := sarama.NewConfig()
config.Producer.RequiredAcks = sarama.WaitForLocal
config.Producer.Return.Successes = false
config.Metadata.Retry.Max = 10
config.Metadata.Retry.Backoff = time.Second
config.Producer.Partitioner = sarama.NewHashPartitioner
//config.Producer.Flush.Bytes = 50 * 1024 * 1024
//sarama.MaxRequestSize = 50 * 1024 * 1024
//metrics.UseNilMetrics = true

producer := startKafkaChannel(config)

for i := 0; i < 10000000; i++ {
	message := &sarama.ProducerMessage{Topic: topic, Value: sarama.StringEncoder("100"), Partition: int32(partition)}
	producer.Input() <- message

	select {
	case msg := <-producer.Errors():
		printError("Failed to produce message: %s", msg.Err)
	default:
	}
}
eapache (Contributor) commented Oct 3, 2017

What SHA of sarama are you using?

Have you changed any configuration values (especially around maximum message sizes) on the broker?

Do you have compression enabled on the broker? You may be running into https://issues.apache.org/jira/browse/KAFKA-1718.

memory usage goes through the roof at around 4 GB

This is not very surprising since you're generating 10 million ProducerMessage objects in a very short period of time.
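
As an aside, one way to keep a repro loop like this honest is to drain Errors() in a dedicated goroutine instead of a select/default, so the hot loop never stalls and no error is silently skipped. A minimal sketch, not the reporter's exact code; the broker address and topic here are placeholders:

package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.RequiredAcks = sarama.WaitForLocal

	// "localhost:9092" is a placeholder for the reporter's broker list.
	producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	// Close flushes any buffered messages before returning.
	defer producer.Close()

	// Drain Errors() concurrently so errors are reported as they happen
	// rather than sampled opportunistically inside the produce loop.
	go func() {
		for e := range producer.Errors() {
			log.Printf("Failed to produce message: %s", e.Err)
		}
	}()

	for i := 0; i < 10000000; i++ {
		producer.Input() <- &sarama.ProducerMessage{
			Topic: "go2_teskbb",
			Value: sarama.StringEncoder("100"),
		}
	}
}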

lghinet (Author) commented Oct 3, 2017

Thanks for the answer.

Default config on the broker. I played with MaxRequestSize and flush bytes on the client, as you can see in the comments.

No compression.

I don't know the Sarama version; I did a go get -t github.com/Shopify/sarama two days ago.

lghinet (Author) commented Oct 3, 2017

I don't see how the message can be too large; the message is always the same: "100".

lghinet (Author) commented Oct 3, 2017

From the 0.11 upgrade notes:
The broker configuration max.message.bytes now applies to the total size of a batch of messages. Previously the setting applied to batches of compressed messages, or to non-compressed messages individually.

hmmm
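
If that reading is right, the arithmetic explains the error: every "100" message passes Sarama's per-message check (Producer.MaxMessageBytes defaults to 1000000), but at this rate thousands of them get packed into a single batch bounded only by sarama.MaxRequestSize (100 * 1024 * 1024 by default), and a 0.11 broker now applies its ~1 MB max.message.bytes to that whole batch. One client-side mitigation sketch (Flush.Bytes is a real Sarama setting; the 900 KB threshold is illustrative, not a tested recommendation):

config := sarama.NewConfig()
// Best-effort: flush a batch well before it approaches the broker's
// ~1 MB max.message.bytes, which as of Kafka 0.11 covers the whole batch.
config.Producer.Flush.Bytes = 900 * 1024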

eapache (Contributor) commented Oct 3, 2017

Yes, that sounds like it might be the issue.

lghinet (Author) commented Oct 4, 2017

In Kafka, max.message.bytes is 1000012, and in Sarama it is 1000000, so we should be good: they say it should be less, and it is less.

eapache (Contributor) commented Oct 4, 2017

Yes, but Sarama does not apply that value to the total size of the batch, while Kafka (as of 0.11) does. If you set sarama.MaxRequestSize = 1000000 that should solve your problem.
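
Concretely, that suggestion is a one-line, process-wide setting; sarama.MaxRequestSize is a package-level variable, so the cap applies to every request the client sends:

import "github.com/Shopify/sarama"

func init() {
	// Keep each produce request at or below the broker's per-batch limit
	// so a 0.11 broker never sees a batch larger than max.message.bytes.
	sarama.MaxRequestSize = 1000000
}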

lghinet (Author) commented Oct 4, 2017

Is that OK? To reduce sarama.MaxRequestSize from 100 * 1024 * 1024 to just 1 million?

eapache (Contributor) commented Oct 4, 2017

That's basically what Kafka did with their change in 0.11 - producing is the only time client requests get that large.
