
Always compact nodes in stream listpacks #11885

Merged
merged 1 commit into from Mar 7, 2023

Conversation

madolson
Contributor

@madolson madolson commented Mar 7, 2023

This change attempts to alleviate a minor memory degradation in Redis 6.2 and onwards when storing rather large entries (~2 KB) in streams. Since #6281, we pre-allocate the head node of a stream at 4 KB to limit the number of unnecessary initial reallocations. However, if we only ever store one entry in a node because a second entry would exceed the max_stream_entry_size, we never shrink the node to fit that single entry. This can lead to a lot of fragmentation. For smaller entries this becomes less of an issue, as the wasted space per node decreases with entry size.
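The node-compaction idea can be sketched as follows. This is an illustrative model, not Redis's actual C implementation: the `Node` class, `xadd` helper, and the assumption that the full node is shrunk exactly to its used size are all simplifications for clarity.

```python
NODE_PREALLOC = 4096          # head nodes are pre-allocated at 4 KB (per the PR text)
STREAM_NODE_MAX_BYTES = 4096  # default stream-node-max-bytes (assumed here)

class Node:
    def __init__(self):
        self.used = 0                # bytes of entry data (analogous to lpBytes)
        self.alloc = NODE_PREALLOC   # bytes actually allocated

    def shrink_to_fit(self):
        # The fix: give the unused pre-allocated tail back to the allocator.
        self.alloc = self.used

def xadd(nodes, entry_size):
    if not nodes or nodes[-1].used + entry_size > STREAM_NODE_MAX_BYTES:
        if nodes:
            nodes[-1].shrink_to_fit()  # compact the node we are leaving behind
        nodes.append(Node())           # new pre-allocated head node
    nodes[-1].used += entry_size

# Three ~2500-byte entries: each forces a new node, as in the transcripts below.
nodes = []
for _ in range(3):
    xadd(nodes, 2500)

print([(n.used, n.alloc) for n in nodes])
# Only the active head keeps its 4 KB pre-allocation; without the fix,
# every node would hold on to the full 4096 bytes.
```

Without `shrink_to_fit`, the three nodes would pin 3 × 4096 = 12288 bytes for 7500 bytes of data; with it, only the current head node carries the pre-allocation slack.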

This commit also changes the MEMORY USAGE of streams, since it was reporting the lpBytes instead of the allocated size. This introduced an observability issue when diagnosing the memory problem: Redis reported the same number of used bytes pre- and post-change, even though the old implementation allocated more memory.

A note about #6281: I tried to find a discussed reason not to also compact in the size-exceeded case, but couldn't find one. I assume it was just a miss at the time; it looks like all the testing was done with very small entry sizes.

Before:

dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157345210-0"

(23-03-07 2:49:05) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157346184-0"

(23-03-07 2:49:06) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157346706-0"

(23-03-07 2:49:06) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli memory usage test                              
(integer) 14416

After:

dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157253636-0"

(23-03-07 2:47:33) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157254074-0"

(23-03-07 2:47:34) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli XADD test "*" foo `python -c "print('x'*2500)"`
"1678157254523-0"

(23-03-07 2:47:34) <0> [~]  
dev-dsk-matolson-2c-96119f9f % redis-cli memory usage test                              
(integer) 11344
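A quick sanity check on the transcript numbers. The per-node breakdown below is an inference, not stated in the PR: it assumes the saving comes from compacting the two full nodes, while the active head node keeps its 4 KB pre-allocation.

```python
# MEMORY USAGE results from the redis-cli transcripts above,
# after three XADDs of a ~2500-byte field value each.
before = 14416  # pre-change
after = 11344   # post-change

saved = before - after
print(saved)        # total bytes reclaimed
print(saved // 2)   # per compacted node, assuming two of the three
                    # nodes were shrunk (the head keeps its slack)
```

The reclaimed amount per node is below the full 4096 − 2500 gap, which is consistent with the allocator rounding the shrunken nodes up to a size class rather than to the exact byte count (again an assumption, not measured in the PR).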
Release Notes
Reclaim unused memory in stream nodes when a new node is created because an entry would exceed `stream-node-max-bytes`.

@madolson madolson added the release-notes indication that this issue needs to be mentioned in the release notes label Mar 7, 2023
@oranagra
Member

oranagra commented Mar 7, 2023

Not sure I would call this "fragmentation"; I would rather call it "unused memory allocation".
I.e., it's memory we over-allocated on purpose and forgot to shrink back.

BTW, don't we want to also backport this to 6.2?

@madolson madolson changed the title Reduce excessive node fragmentation in stream listpacks Compact Redis nodes in stream listpacks Mar 7, 2023
@madolson madolson merged commit 2bb29e4 into redis:unstable Mar 7, 2023
13 checks passed
@madolson madolson changed the title Compact Redis nodes in stream listpacks Always compact nodes in stream listpacks Mar 7, 2023
Contributor

@vitarb vitarb left a comment

LGTM

oranagra pushed a commit to oranagra/redis that referenced this pull request Mar 20, 2023
(cherry picked from commit 2bb29e4)
@oranagra oranagra mentioned this pull request Mar 20, 2023
oranagra pushed a commit that referenced this pull request Mar 20, 2023
(cherry picked from commit 2bb29e4)
Mixficsol pushed a commit to Mixficsol/redis that referenced this pull request Apr 12, 2023
oranagra pushed a commit to oranagra/redis that referenced this pull request Apr 17, 2023
(cherry picked from commit 2bb29e4)
(cherry picked from commit 1718151)
@oranagra oranagra mentioned this pull request Apr 17, 2023
oranagra pushed a commit that referenced this pull request Apr 17, 2023
(cherry picked from commit 2bb29e4)
(cherry picked from commit 1718151)
enjoy-binbin pushed a commit to enjoy-binbin/redis that referenced this pull request Jul 31, 2023