We performed some basic tests of the consumer, but with a large number of records (>11M):
```swift
var ctr = 0
for try await record in consumer.messages {
    let i = record.offset  // offset of the most recently read record
    ctr += 1
    if ctr % 1000 == 0 {
        print("read up to \(i), ctr: \(ctr)")
    }
}
```
Unfortunately, without back pressure this leads to continuous memory growth. For simplicity, attaching a screenshot:
At some point, once all entries have been read from Kafka, memory usage declines, but it would be nice if it were possible to limit it via configuration (e.g. NIO's high/low watermark back-pressure strategy).
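For illustration, here is a minimal sketch of the NIO strategy mentioned above, wired into a `NIOAsyncSequenceProducer`. The delegate, element type, and watermark values are placeholders chosen for this example, not the library's actual consumer API:

```swift
import NIOCore

/// Placeholder delegate: in a real consumer this would pause/resume
/// polling the underlying Kafka client. Hypothetical, for illustration only.
struct PollingDelegate: NIOAsyncSequenceProducerDelegate {
    func produceMore() {
        // Buffer drained below the low watermark: resume polling.
    }

    func didTerminate() {
        // The async sequence terminated: stop polling for good.
    }
}

// High/low watermark strategy from NIOCore (assumed watermark values).
let strategy = NIOAsyncSequenceProducerBackPressureStrategies.HighLowWatermark(
    lowWatermark: 100,
    highWatermark: 1_000
)

// Back-pressured async sequence: `Int` stands in for the record type.
let producer = NIOAsyncSequenceProducer.makeSequence(
    elementType: Int.self,
    backPressureStrategy: strategy,
    delegate: PollingDelegate()
)

// `yield` reports `.stopProducing` once 1_000 elements are buffered;
// `produceMore()` fires once the consumer drains it below 100, which
// is what bounds memory instead of buffering every polled record.
let verdict = producer.source.yield(contentsOf: [1, 2, 3])
```

With a strategy like this, the poll loop could stop fetching from Kafka whenever `yield` returns `.stopProducing` and resume on `produceMore()`, keeping the in-memory buffer within the configured watermarks.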
PS: it seems that back pressure was removed around here: #66