dcache:improve documentation for kafka properties
Motivation

We have noticed a serious performance issue when the Kafka broker was down or not available.

To fix this, max.block.ms should be set to a value lower than the default, which is 60 seconds.

The more files that are going to be transferred, the lower this number should be.

Acked-by: Tigran
Target: master, 8.0, 7.2, 7.1, 7.0, 6.2
Require-book: yes
mksahakyan committed May 9, 2022
1 parent 93d7afa commit 3fec613
Showing 1 changed file with 9 additions and 0 deletions.
9 changes: 9 additions & 0 deletions skel/share/defaults/kafka.properties
Expand Up @@ -36,6 +36,15 @@ dcache.kafka.topic = billing
# You can set the property value for other services with the help of prefix, which is explained in the next section.
# It is important to understand that Kafka Producer takes MILLISECONDS as value.

# If metadata is not available, the Kafka producer is designed to block in send() for
# up to max.block.ms, which means that this method is not fully asynchronous.
# A proposal to improve this (KIP-286,
# https://cwiki.apache.org/confluence/display/KAFKA/KIP-286%3A+producer.send%28%29+should+not+block+on+metadata+update )
# was dropped, stating that `... the benefit of not having to wait for metadata is probably not worth the complexity added in producer.`
# So if you do not have a reliable Kafka cluster, or you have only one broker, it is
# strongly recommended to set max.block.ms to a value lower than the default of 60000,
# especially when a large number of files is being transferred. If you want a completely
# non-blocking producer.send(), you can set max.block.ms to 0.

dcache.kafka.maximum-block = 60
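On the client side, the dCache setting above corresponds to the standard Kafka producer property `max.block.ms`. The sketch below, a hypothetical illustration rather than dCache's actual code, shows how that property would be placed into a producer configuration; the broker address is an assumption, and constructing a real `KafkaProducer` from these properties would additionally require the kafka-clients jar.

```java
import java.util.Properties;

public class MaxBlockExample {
    // Build a producer configuration with an explicit upper bound on how long
    // send() may block waiting for metadata or buffer space.
    // Kafka's default for max.block.ms is 60000 ms (60 s).
    static Properties producerConfig(long maxBlockMs) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("max.block.ms", Long.toString(maxBlockMs));
        return props;
    }

    public static void main(String[] args) {
        // Lower the blocking bound to 5 seconds, as suggested for setups
        // without a reliable Kafka cluster.
        Properties p = producerConfig(5000L);
        System.out.println(p.getProperty("max.block.ms"));
    }
}
```

With `max.block.ms` set to 0, send() fails immediately instead of blocking when no metadata is available, which trades blocking for errors the caller must handle.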

(one-of?MILLISECONDS|SECONDS|MINUTES|HOURS|DAYS)\
Expand Down
