Why, in a Redis master-slave architecture, do the master node and the slave node have the same number of keys but different data sizes (used_memory), with the slave occupying more memory than the master? #12382
Comments
@klin111 I would like to confirm the following points:
few more things you can look into (in case it's not obvious from INFO output):
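One practical way to follow that advice is to diff the two INFO outputs field by field instead of eyeballing them. The sketch below is illustrative (the `parse_info`/`diff_info` helpers are mine, not Redis APIs; the sample values are taken from the dumps later in this thread):

```python
# Sketch: diff two "INFO memory"-style dumps field by field.
# parse_info/diff_info are illustrative helpers, not part of redis-cli.

def parse_info(text):
    """Parse 'key:value' lines into a dict, skipping blanks and '#' section headers."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

def diff_info(master_text, slave_text):
    """Return {field: (master_value, slave_value)} for fields that differ."""
    m, s = parse_info(master_text), parse_info(slave_text)
    return {k: (m[k], s[k]) for k in m if k in s and m[k] != s[k]}

master = "used_memory:11062165000\nmem_fragmentation_ratio:1.03"
slave = "used_memory:11674502872\nmem_fragmentation_ratio:1.02"

for field, (mv, sv) in diff_info(master, slave).items():
    print(f"{field}: master={mv} slave={sv}")
```

Fields that match on both sides (like `used_memory_overhead` here, which is nearly identical) drop out of the diff, which narrows the investigation to the dataset itself.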
The master and slave configuration files are the same. Both run redis_version:5.0.12; the Memory sections of their INFO output are below.
thank you @oranagra
looking at the info you provided, i don't see such a big difference (master uses 10.30GB and slave uses 10.87GB). considering the difference is relatively small, i doubt we'll be able to spot anything in MALLOC-STATS.
@oranagra The MALLOC-STATS output is very long; can I post just the important parts?
yes, i saw all that, and i commented that it's not a huge difference (500MB out of 10GB). in buggy scenarios, i've seen much more (like 200%). in any case, i don't know how to find the cause for this; this old version doesn't expose any other information, and it's also somewhat likely the problem has already been solved anyway.
@oranagra |
@klin111 It's hard to tell from the available information; it could be due to some bug. Do you still see the difference after a restart?
the above is inaccurate or even incorrect. the argument about rehashing is valid, but at least in this case, not for the main dict, whose overhead is also included in used_memory_overhead, which is similar on the master and slave.
Sorry for the wrong explanation. |
that's possible. not sure how common it is for a key to grow crossing the rehash limit and then become completely read-only. |
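The rehashing point can be made concrete with a toy model. Redis's dict keeps a power-of-two bucket array that grows on insert and only shrinks lazily, so two dicts holding the same final number of entries can retain differently sized arrays depending on their history. The sketch below is a simplification (one bucket modeled as one 8-byte pointer; the sizing rule and entry counts are illustrative, not Redis internals verbatim):

```python
# Sketch: same final key count, different historical peak, different
# retained bucket-array size. Simplified model of a power-of-two hash table.

POINTER_BYTES = 8  # one bucket ~= one pointer on a 64-bit build (simplified)

def next_power_of_two(n):
    """Smallest power-of-two table size >= n, starting from 4."""
    size = 4  # mirrors Redis's DICT_HT_INITIAL_SIZE
    while size < n:
        size *= 2
    return size

def bucket_bytes(peak_entries):
    """Bucket-array bytes for a table sized at its historical peak."""
    return next_power_of_two(peak_entries) * POINTER_BYTES

# Both dicts end up holding 100k keys, but one peaked at 150k first:
grew_then_shrank = bucket_bytes(150_000)  # crossed the 2^17 -> 2^18 boundary
steady = bucket_bytes(100_000)            # never crossed it
print(grew_then_shrank - steady)          # extra bytes the first dict retains
```

For one hash-encoded key the extra megabyte here is noise, but summed over many keys with divergent write histories on master vs replica, it is one plausible source of a used_memory gap.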
redis-5.0.12
slave
used_memory:11674502872
used_memory_human:10.87G
used_memory_rss:11916976128
used_memory_rss_human:11.10G
used_memory_peak:11674565992
used_memory_peak_human:10.87G
used_memory_peak_perc:100.00%
used_memory_overhead:42736840
used_memory_startup:1449864
used_memory_dataset:11631766032
used_memory_dataset_perc:99.65%
allocator_allocated:11674516248
allocator_active:11675152384
allocator_resident:11921485824
total_system_memory:17179869184
total_system_memory_human:16.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:12884901888
maxmemory_human:12.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.00
allocator_frag_bytes:636136
allocator_rss_ratio:1.02
allocator_rss_bytes:246333440
rss_overhead_ratio:1.00
rss_overhead_bytes:-4509696
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:242514280
mem_not_counted_for_evict:0
mem_replication_backlog:10485760
mem_clients_slaves:0
mem_clients_normal:66616
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
master
used_memory:11062165000
used_memory_human:10.30G
used_memory_rss:11444056064
used_memory_rss_human:10.66G
used_memory_peak:11076346928
used_memory_peak_human:10.32G
used_memory_peak_perc:99.87%
used_memory_overhead:43858826
used_memory_startup:1449864
used_memory_dataset:11018306174
used_memory_dataset_perc:99.62%
allocator_allocated:11062146648
allocator_active:11177414656
allocator_resident:11448696832
total_system_memory:17179869184
total_system_memory_human:16.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:12884901888
maxmemory_human:12.00G
maxmemory_policy:volatile-lru
allocator_frag_ratio:1.01
allocator_frag_bytes:115268008
allocator_rss_ratio:1.02
allocator_rss_bytes:271282176
rss_overhead_ratio:1.00
rss_overhead_bytes:-4640768
mem_fragmentation_ratio:1.03
mem_fragmentation_bytes:382010944
mem_not_counted_for_evict:0
mem_replication_backlog:10485760
mem_clients_slaves:16922
mem_clients_normal:1171712
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
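To put the two dumps in perspective, quick arithmetic on the used_memory values quoted verbatim above:

```python
# Gap between the slave's and master's used_memory, from the INFO dumps above.
slave_used = 11674502872   # slave used_memory
master_used = 11062165000  # master used_memory

diff = slave_used - master_used
print(diff)                                # bytes
print(round(diff / 2**30, 2))              # GiB
print(round(100 * diff / master_used, 1))  # percent of the master's used_memory
```

The gap works out to roughly 0.57 GiB, about 5.5% of the master's usage, which matches the "500MB out of 10GB" characterization earlier in the thread.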
@yossigo @oranagra @madolson
Sir, please help me