Avoid unnecessary assertions on listened partitions when autorebalancing is enabled but no listened partitions are found.
Polish concurrency assignment when listened partitions are empty.
Polish the provisioner.
Resolves spring-cloud#512, resolves spring-cloud#587
I'm building a custom splitter processor module with reasonably large records and am hitting this exception:

org.apache.kafka.common.errors.RecordTooLargeException: The message is 1125040 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
I've adjusted the max.request.size parameter on the Kafka server configuration, although it looks like the Kafka Binder can override this setting on the client side. In my stdout logs below, I see max.request.size = 1048576 regardless of the server setting.

Any chance we can expose this parameter to the Producer and Consumer clients, similar to this:
spring.cloud.stream.kafka.binder.fetchSize
Thanks!
-Kyle
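
For anyone hitting the same RecordTooLargeException: max.request.size is a client-side producer property, so raising limits in the broker configuration alone does not change the 1048576-byte (1 MB) default that the producer logs at startup. Below is a minimal plain-Kafka sketch of the client-side setting the binder would need to expose; the broker address and the 2 MB figure are illustrative, not values taken from this issue.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LargeRecordProducerSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);

        // max.request.size defaults to 1048576 bytes; raising it on the client
        // lets the producer accept serialized records larger than 1 MB.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2097152);

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // ... build and send ProducerRecords as usual.
        }
    }
}

On the Spring Cloud Stream side, later binder versions accept a map of arbitrary Kafka client properties, which would cover this request; whether a given binder release honors max.request.size through that map should be verified against the release in use. A hedged sketch of what the configuration might look like:

spring.cloud.stream.kafka.binder.configuration.max.request.size=2097152

Note that the broker enforces its own, separate size limits (message.max.bytes per broker, max.message.bytes per topic), so both client and broker settings may need raising for records of this size.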