
Conversation

@lhoguin (Contributor) commented Apr 25, 2023

The v1 index is not optimised for reading messages except when the entire segment is read. So we always do that.
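A minimal sketch of the idea (an invented module name and a toy fixed-size on-disk format, not the actual rabbit_queue_index code): read the whole segment file in one pass and filter afterwards, instead of seeking to each requested entry.

```erlang
-module(segment_read_sketch).
-export([read_range/3]).

%% Sketch only; assumes a toy entry layout of <<SeqId:64, Payload:24/binary>>.
%% The point illustrated: one read for the entire segment, then filter,
%% rather than one inefficient read per requested message.
read_range(SegmentPath, SeqIdFrom, SeqIdTo) ->
    {ok, Bin} = file:read_file(SegmentPath),       %% read the entire segment at once
    [{SeqId, Payload} ||
        <<SeqId:64, Payload:24/binary>> <= Bin,    %% decode every entry in the segment
        SeqId >= SeqIdFrom, SeqId < SeqIdTo].      %% keep only the requested range
```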

This change was made because, when reads are inefficient and TTL is used, the queue can become unresponsive while expired messages are being dropped. In that case the queue may drop messages more slowly than they expire and, as a result, will not process any Erlang messages until it has dropped all messages in the queue.
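A rough sketch of why that stalls the process (assumed names and a plain queue, not the RabbitMQ queue process): if expired messages are dropped in a tight recursive loop and each drop is slow, the process never returns to its receive loop.

```erlang
-module(drop_expired_sketch).
-export([drop_expired/2]).

%% Illustrative only. If each drop is expensive (e.g. it triggers a slow
%% per-message index read) and messages expire faster than they can be
%% dropped, this loop never finishes catching up, so the process handles
%% no other Erlang messages in the meantime.
drop_expired(Q, Now) ->
    case queue:peek(Q) of
        {value, {Expiry, _Msg}} when Expiry =< Now ->
            {{value, _}, Q1} = queue:out(Q),   %% drop the expired head (slow in the bad case)
            drop_expired(Q1, Now);             %% keep looping; no receive between drops
        _ ->
            Q                                  %% head not expired (or queue empty): done
    end.
```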

Fix for #7939

@lhoguin (Contributor, Author) commented Apr 25, 2023

There's no need to document this in the 3.12 release notes because it is a fix for changes introduced in 3.12.

@michaelklishin michaelklishin merged commit 331a482 into main Apr 26, 2023
@michaelklishin michaelklishin deleted the lh-cqv1-ttl branch April 26, 2023 06:33
michaelklishin added a commit that referenced this pull request Apr 26, 2023
CQv1: Don't limit messages in memory based on consume rate (backport #7980)
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment