Need additional Kafka Producer/ConsumerConfig parameters exposed #587

Closed

kdunn926 opened this issue Jul 11, 2016 · 0 comments
kdunn926 commented Jul 11, 2016

I'm building a custom splitter processor module with reasonably large records and am hitting this exception:

org.apache.kafka.common.errors.RecordTooLargeException: The message is 1125040 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

I've adjusted the corresponding size limit in the Kafka server configuration, but max.request.size is a client-side producer setting, and the Kafka binder does not expose a way to change it. In my stdout logs below, I see max.request.size = 1048576 regardless of the server setting.

2016-07-11 16:17:21.077  INFO 83422 --- [           main] o.s.i.kafka.support.ProducerFactoryBean  : Using producer properties => {bootstrap.servers=localhost:9092, linger.ms=0, acks=1, compression.type=none, batch.size=16384}
2016-07-11 16:17:21.085  INFO 83422 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
2016-07-11 16:17:21.085  INFO 83422 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        acks = 1
        batch.size = 16384
        reconnect.backoff.ms = 10
        bootstrap.servers = [localhost:9092]
        receive.buffer.bytes = 32768
        retry.backoff.ms = 100
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        retries = 0
        max.request.size = 1048576
        block.on.buffer.full = true
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        metrics.sample.window.ms = 30000
        send.buffer.bytes = 131072
        max.in.flight.requests.per.connection = 5
        metrics.num.samples = 2
        linger.ms = 0
        client.id =
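
For reference, the underlying kafka-clients producer accepts this setting directly when constructed by hand; the binder just never passes it through. Here is a minimal sketch, where the broker address, topic name, and the 2 MiB value are illustrative, not recommendations:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LargeRecordProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        // Raise the client-side cap above the 1048576-byte default shown in the
        // log above; 2097152 (2 MiB) here is an arbitrary example value.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2097152);

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // A ~1.1 MB payload like the one in the exception above now fits.
            producer.send(new ProducerRecord<>("large-records", new byte[1125040]));
        }
    }
}
```

Note that the record must also clear the broker's message.max.bytes limit and, on the consuming side, the consumer's fetch size, or it will still be rejected or unreadable end to end.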

Any chance we can expose this parameter to the Producer and Consumer clients, similar to this:
spring.cloud.stream.kafka.binder.fetchSize
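
For illustration only, the exposed option might take a form like the following; the property name maxRequestSize is hypothetical and did not exist at the time this issue was filed:

spring.cloud.stream.kafka.binder.maxRequestSize=2097152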

Thanks!

-Kyle

@sabbyanandan sabbyanandan added this to the 1.1.0.M1 milestone Jul 11, 2016
@sabbyanandan sabbyanandan modified the milestones: 1.0.3.RELEASE, 1.1.0.M1 Jul 11, 2016
@sabbyanandan sabbyanandan modified the milestones: 1.1.0.M1, 1.0.3.RELEASE Jul 11, 2016
sobychacko added a commit to sobychacko/spring-cloud-stream that referenced this issue Feb 24, 2022
Avoid unnecessary assertions on listened partitions when
auto-rebalancing is enabled but no listened partitions are found.

Polish concurrency assignment when listened partitions are empty

Polish the provisioner

Resolves spring-cloud#512
Resolves spring-cloud#587