Kafka starts with 50 partitions bogging down system resources #502

@McSneaky

Description

For some reason Kafka starts with 50 partitions, which takes unnecessary system resources

kafka-topics --describe --topic __consumer_offsets --zookeeper zookeeper:2181

Topic:__consumer_offsets    PartitionCount:50    ReplicationFactor:1    Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
    Topic: __consumer_offsets    Partition: 0    Leader: 1001    Replicas: 1001    Isr: 1001
    Topic: __consumer_offsets    Partition: 1    Leader: 1001    Replicas: 1001    Isr: 1001
...
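A quick way to count how many partitions a topic actually has is to pipe the `--describe` output through `grep -c` (shown here against a saved sample of the output above; in practice you would pipe the live command instead):

```shell
# Sample of the `kafka-topics --describe` output pasted above.
describe_output='Topic: __consumer_offsets    Partition: 0    Leader: 1001    Replicas: 1001    Isr: 1001
Topic: __consumer_offsets    Partition: 1    Leader: 1001    Replicas: 1001    Isr: 1001'

# Each partition is one "Partition:" line, so counting them gives the total.
printf '%s\n' "$describe_output" | grep -c 'Partition: '   # prints 2
```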

50 partitions is extremely high. Some systems allocate 1 partition per 1k req/sec as their default, which is a reasonable ratio. I doubt anyone receiving 50k error requests per second is running Sentry off docker-compose.

Since with docker-compose all partitions run on the same host, running this many partitions on a single machine is just a waste of system resources.

It would be nice to scale the default number of partitions down. Or is there an exposed setting to start with fewer of them? (I didn't find one anywhere in the docs.)
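For what it's worth, the 50 here is Kafka's own broker default for the internal offsets topic (`offsets.topic.num.partitions`), so one way to lower it is via the broker config. A minimal sketch, assuming the compose file uses the `confluentinc/cp-kafka` image (which maps `KAFKA_*` environment variables onto broker settings); note this only takes effect before `__consumer_offsets` is first created, since Kafka cannot shrink an existing topic's partition count, so the Kafka/Zookeeper volumes would need to be wiped first:

```yaml
# Sketch, not tested against this repo's docker-compose.yml.
services:
  kafka:
    environment:
      # Becomes offsets.topic.num.partitions=1 in the broker config.
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
```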

More reading: https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/

I'm running it on an EC2 instance with double the minimum requirements, receiving < 7k errors per day (evenly spread), which works out to about 0.08 req/sec. Yet it constantly allocates 100% of RAM and CPU, which makes the server unresponsive and causes it to crash constantly.
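The quoted rate checks out: 7,000 events spread evenly over the 86,400 seconds in a day gives roughly 0.08 req/sec:

```shell
# 7000 events/day divided by 86400 seconds/day.
awk 'BEGIN { printf "%.3f\n", 7000 / 86400 }'   # prints 0.081
```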
