3.2.1 crash when slave node syncs with master #3343
Hello, unfortunately the binary is stripped, so there are no symbols to understand exactly where this happened. Could you please send the Redis binary you are using? Thanks. This bug could actually be in different places: the binary can give us some clue.
@luweijie007 good trick ;-) Thanks, apparently I was able to download it correctly.
Apparently the crash happened inside the
@luweijie007 I have a stress tester running. It would be great if you could run a serious memory testing suite on your slave. Or is it a VM instance?
@antirez it's not a VM instance, it's a physical machine.
You noticed the same crash on two machines? At the same time and with a similar stack trace (while the server was doing an operation on lists)? |
@luweijie007 Can you share the core dump file from when it crashed?
A core dump could also help, in order to run
@antirez looking at the assembly code, the difference between r8 and rdi is the value stored in ZIPLIST_TAIL_OFFSET, and this delta is exactly 2^32-17.
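The register values in the crash report further down the thread can be sanity-checked directly. This is a quick illustrative check (assuming rdi and r8 hold the two pointers being compared, as suggested above): the delta between them is just below 2^32, consistent with a 32-bit cached length/offset field having gone wrong. The exact constant depends on which header field is measured.

```python
# Register values copied verbatim from the crash report in this thread.
rdi = 0x00007fd7dc299d1e  # also the faulting "Accessing address"
r8  = 0x00007fd6dc299d30

delta = rdi - r8
# The two pointers differ by almost exactly 4 GiB (2**32 minus a handful
# of bytes), which points at a 32-bit size/offset field (such as the
# ziplist tail offset) being wrong, rather than random corruption.
assert 0 < 2**32 - delta < 32
print(hex(delta))  # 0xffffffee
```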
Thanks @oranagra, this is a very interesting hint... Also the OP reported that he saw this twice, on two different machines.
I put my money on
@luweijie007 please could you tell me all the list commands you use in the master in order to populate the list? Thanks. |
@luweijie007 UPDATE: I can crash the quicklist implementation, so I can confirm there are bugs. I need one more piece of information: do you have compression enabled in the quicklist? I'm referring to
Yet another UPDATE:
I'm working on a fix. It is possible that I'll discover more issues, since certain edge cases are hard to trigger and the quicklist code is not as tested and safe as other parts of the Redis data structure implementations.
The quicklist keeps a cached copy of the ziplist representation's size in bytes. The implementation must update this length every time the underlying ziplist changes. However, quicklistReplaceAtIndex() failed to fix the length, so during LSET calls the size of the ziplist blob and the cached size inside the quicklist diverged. Later, when this size is used in an authoritative way, for example during node splitting in order to copy the nodes, we end up with a duplicated node that may contain random garbage. This commit should fix issue #3343; however, several problems were found while reviewing the quicklist.c code in search of this bug that should be addressed sooner or later. For example:
1. Keeping a cached ziplist length is fragile, since failing to update it leads to this kind of issue.
2. The node splitting code needs auditing. For example, it works only thanks to a side effect of ziplistDeleteRange() being able to cope with a wrong count of elements to remove. The code inside quicklist.c assumes that -1 means "delete till the end", while it is actually just an unsigned count of how many elements to delete. So -1 gets converted into the maximum integer, and only by chance does the ziplist code stop deleting elements once there are no more to delete.
3. Node splitting is extremely inefficient: it copies the node and removes elements from both nodes even when only a single entry needs to move from one node to the other, or when the new resulting node is entirely empty, so there is nothing to copy and only a new node to create.
However, at least for Redis 3.2, introducing fresh code inside quicklist.c may be even more risky, so instead I'm writing a better fuzzy tester to stress the internals a bit more in order to anticipate other possible bugs. This bug was found using a fuzzy tester written after having some clue about where the bug could be. The tester eventually created a ~2000-command sequence able to always crash Redis.
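The class of bug described in the commit message can be modeled in a few lines. The following is an illustrative sketch in plain Python (not the actual quicklist.c code): a node caches its blob's byte size, a replace operation forgets to refresh the cache, and the cached and real sizes diverge. It also demonstrates the -1-as-unsigned-count pitfall from point 2.

```python
class Node:
    """Toy stand-in for a quicklist node that caches its blob's byte size."""
    def __init__(self, entries):
        self.entries = list(entries)
        self.blob = b"".join(self.entries)   # stand-in for the ziplist bytes
        self.cached_sz = len(self.blob)      # cached size, must track the blob

    def replace_buggy(self, index, new_value):
        # Rebuild the blob with one entry replaced, but "forget" to refresh
        # cached_sz -- modeling what quicklistReplaceAtIndex() missed.
        self.entries[index] = new_value
        self.blob = b"".join(self.entries)
        # missing: self.cached_sz = len(self.blob)

node = Node([b"aa", b"bb", b"cc"])
node.replace_buggy(1, b"a-much-longer-value")  # like an LSET call
# The cached size and the real size have now diverged; any later code that
# trusts cached_sz (e.g. node splitting, which copies the node's bytes)
# would copy the wrong number of bytes and produce a garbage node.
assert node.cached_sz != len(node.blob)

# Point 2: -1 passed where an unsigned 32-bit count is expected becomes
# the maximum integer, so deletion only stops by running out of elements.
assert -1 & 0xFFFFFFFF == 2**32 - 1
```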
I wrote a better version of the tester that automatically searched for the smallest sequence that could crash Redis. This smaller sequence was then minimized further by removing random commands until it still crashed the server, resulting in a sequence of 7 commands. With this small sequence it was just a matter of filling the code with enough printf() calls to understand enough state to fix the bug.
Dear @luweijie007, I hope I fixed the bug in commit 7041967. Please, if possible, apply it to your instances and report whether it fixes the issue. Thanks.
Note: it was verified that it can crash the test suite without the patch applied.
@antirez, I am sorry, I was on holiday these days, so I had no time to see your reply.
Thank you @luweijie007, we believe the problem is fixed now; no need for the dump, we mostly need to check whether our fix solves the problem for you. Within 24/48 hours I'm releasing Redis 3.2.2, which includes this fix. Thanks.
I'm sorry to hear this; I had to downgrade my Redis instances, about 30 of them, from v3.2.0 to v3.0.7.
Why downgrade to 3.0, and not upgrade to a version with the fix? |
@oranagra We need Redis in mass production; v3.0.7 (3.0.3) is a more stable release.
Recently we started using list-compress-depth in tests (it was completely untested until now). It turns out this triggered test failures in external mode, since the tests left the setting enabled and it was then used by other tests (specifically the fuzzer named "Stress tester for #3343-alike bugs"). This PR fixes the issue of the `recompress` flag being left set by mistake, which caused the code to later compress the head or tail nodes (which should never be compressed). The solution is to reset the recompress flag where it should have been reset (when it was decided not to compress). Additionally, we add some assertions and improve the tests in order to catch other similar bugs.
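The recompress fix can be illustrated with a tiny model. The following is a hedged sketch (plain Python, not the actual quicklist.c logic) of why failing to clear the flag when compression is skipped ends up compressing a head or tail node later.

```python
class ToyNode:
    """Toy model of a quicklist node and its recompress flag."""
    def __init__(self):
        self.is_head_or_tail = False  # within list-compress-depth, never compressed
        self.compressed = False
        self.recompress = False       # "compress me again after use"

def decompress_for_use(node):
    if node.compressed:
        node.compressed = False
        node.recompress = True

def try_compress(node, buggy=False):
    if node.is_head_or_tail:
        # Decided NOT to compress. The fix: clear recompress here as well.
        if not buggy:
            node.recompress = False
        return
    node.compressed = True
    node.recompress = False

def later_maintenance(node):
    # Some later code path trusts the flag blindly.
    if node.recompress:
        node.compressed = True

# A compressed middle node is decompressed for use...
n = ToyNode()
n.compressed = True
decompress_for_use(n)
n.is_head_or_tail = True      # ...then becomes the tail (e.g. after pops)
try_compress(n, buggy=True)   # skips compression but leaves the flag set
later_maintenance(n)
assert n.compressed           # BUG: a tail node ended up compressed

n2 = ToyNode()
n2.compressed = True
decompress_for_use(n2)
n2.is_head_or_tail = True
try_compress(n2, buggy=False) # fixed: flag reset when compression skipped
later_maintenance(n2)
assert not n2.compressed
```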
This PR follows #9779.
## Feature description
When the `list-compress-depth` configuration is enabled, the list compresses the ziplists between `[list-compress-depth, -list-compress-depth]`. When we need to use compressed data, we first decompress it, then use it, and finally compress it again. This is controlled by `quicklistNode->recompress`, which is designed to avoid re-traversing the entire quicklist for compression after each decompression; we only need to recompress the quicklistNode being used. To ensure the correctness of recompressing, quicklistDecompressNodeForUse and quicklistCompress should normally appear in pairs; otherwise the head and tail may end up compressed, or a middle ziplist may not be compressed correctly, which is exactly the problem this PR solves.
## Solution
1. Reset `quicklistIter` after insert and replace. The quicklist node is compressed in `quicklistInsertAfter`, `quicklistInsertBefore`, and `quicklistReplaceAtIndex`, so we can safely reset the quicklistIter to avoid it being used again.
2. `quicklistIndex` returns an iterator that can be used to recompress the current node after use.
## Test
1. In the `Stress Tester for #3343-Similar Errors` test, print the violating commands when the server crashes or a `valgrind`/`asan` error is detected.
2. Add a crash test for wrongly recompressing after `lrem`.
3. Remove `insert before with 0 elements` and `insert after with 0 elements`; we now forbid any operation on a NULL quicklistIter.
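The pairing and iterator-reset rules above can be modeled in miniature. This is a hedged sketch (names like `ToyIter` and `insert_after` are illustrative, not the real API) of why an iterator must be invalidated once an insert or replace has recompressed its node.

```python
class ToyIter:
    """Illustrative iterator that pins one node decompressed while in use."""
    def __init__(self, node):
        self.node = node
        if node is not None and node.get("compressed"):
            node["compressed"] = False
            node["recompress"] = True

    def release(self):
        # The paired "compress again after use" step.
        if self.node is not None and self.node.get("recompress"):
            self.node["compressed"] = True
            self.node["recompress"] = False
        self.node = None  # forbid any further use of this iterator

def insert_after(node, value):
    node.setdefault("values", []).append(value)
    # Insert recompresses the node itself and settles the flags...
    node["compressed"] = True
    node["recompress"] = False

node = {"compressed": True}
it = ToyIter(node)            # decompresses the node for use
insert_after(node, "x")
# ...so the iterator must now be reset rather than released normally;
# otherwise release() would meddle with flags that insert already settled.
it.node = None                # "reset quicklistIter after insert and replace"
it.release()                  # now a safe no-op
assert node["compressed"] and not node.get("recompress")
```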
The log is as follows:
=== REDIS BUG REPORT START: Cut & paste starting from here ===
27648:S 24 Jun 12:16:32.265 # Redis 3.2.1 crashed by signal: 11
27648:S 24 Jun 12:16:32.265 # Crashed running the instuction at: 0x431599
27648:S 24 Jun 12:16:32.265 # Accessing address: 0x7fd7dc299d1e
27648:S 24 Jun 12:16:32.265 # Failed assertion: (:0)
------ STACK TRACE ------
EIP:
./redis-server *:6400 [cluster][0x431599]
Backtrace:
./redis-server *:6400 cluster[0x45caa9]
./redis-server *:6400 cluster[0x45cf9a]
/lib64/libpthread.so.0(+0xf130)[0x7fd79b0c0130]
./redis-server *:6400 [cluster][0x431599]
./redis-server *:6400 cluster[0x420381]
./redis-server *:6400 cluster[0x4453e4]
./redis-server *:6400 cluster[0x426935]
./redis-server *:6400 cluster[0x429947]
./redis-server *:6400 cluster[0x4362b5]
./redis-server *:6400 cluster[0x4210b8]
./redis-server *:6400 cluster[0x42136b]
./redis-server *:6400 cluster[0x41e35f]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7fd79ad11af5]
./redis-server *:6400 [cluster][0x41e5e5]
------ INFO OUTPUT ------
Server
redis_version:3.2.1
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:42468865c010fa71
redis_mode:cluster
os:Linux 3.10.0-123.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.3
process_id:27648
run_id:61d307b257465f360b038ca9e7227729c8f7fc55
tcp_port:6400
uptime_in_seconds:329
uptime_in_days:0
hz:10
lru_clock:7124000
executable:/home/redis/redis-3.2.1/src/./redis-server
config_file:/home/redis/redis-3.2.1/redis.conf
Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:14962
blocked_clients:0
Memory
used_memory:2989199160
used_memory_human:2.78G
used_memory_rss:3073859584
used_memory_rss_human:2.86G
used_memory_peak:2989199160
used_memory_peak_human:2.78G
total_system_memory:16609554432
total_system_memory_human:15.47G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.03
mem_allocator:jemalloc-4.0.3
Persistence
loading:0
rdb_changes_since_last_save:14
rdb_bgsave_in_progress:0
rdb_last_save_time:1466741463
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
Stats
total_connections_received:1
total_commands_processed:18
instantaneous_ops_per_sec:0
total_net_input_bytes:1151760372
total_net_output_bytes:1040
instantaneous_input_kbps:49380.11
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
Replication
role:slave
master_host:10.15.144.103
master_port:6400
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:9950243389
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
CPU
used_cpu_sys:9.85
used_cpu_user:13.29
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
Commandstats
cmdstat_rpush:calls=2,usec=13,usec_per_call=6.50
cmdstat_lset:calls=12,usec=24,usec_per_call=2.00
cmdstat_select:calls=1,usec=2,usec_per_call=2.00
cmdstat_cluster:calls=3,usec=805,usec_per_call=268.33
Cluster
cluster_enabled:1
Keyspace
db0:keys=410635,expires=0,avg_ttl=0
hash_init_value: 1466527529
------ CLIENT LIST OUTPUT ------
id=2 addr=10.15.107.143:56641 fd=9 name= age=146 idle=64 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=cluster
id=3 addr=10.15.144.103:6400 fd=24 name= age=0 idle=0 flags=M db=0 sub=0 psub=0 multi=-1 qbuf=14962 qbuf-free=17806 obl=0 oll=0 omem=0 events=r cmd=lset
------ CURRENT CLIENT INFO ------
id=3 addr=10.15.144.103:6400 fd=24 name= age=0 idle=0 flags=M db=0 sub=0 psub=0 multi=-1 qbuf=14962 qbuf-free=17806 obl=0 oll=0 omem=0 events=r cmd=lset
argv[0]: 'LSET'
argv[1]: '660FXU'
argv[2]: '-1'
argv[3]: S N0'
27648:S 24 Jun 12:16:32.266 # key '660FXU' found in DB containing the following object:
27648:S 24 Jun 12:16:32.266 # Object type: 1
27648:S 24 Jun 12:16:32.266 # Object encoding: 9
27648:S 24 Jun 12:16:32.266 # Object refcount: 1
27648:S 24 Jun 12:16:32.266 # List length: 1
------ REGISTERS ------
27648:S 24 Jun 12:16:32.266 #
RAX:00000000000000ff RBX:000000000000001a
RCX:000000000000001a RDX:00007fd6dd090f93
RDI:00007fd7dc299d1e RSI:00007fd6dc299d3a
RBP:00007fd761292640 RSP:00007fffbe24d110
R8 :00007fd6dc299d30 R9 :00007fd79aa00180
R10:0000000000000022 R11:00007fd79420eea0
R12:0000000000000000 R13:000000000000001a
R14:00007fd6dc299d3a R15:000000000000000b
RIP:0000000000431599 EFL:0000000000010206
CSGSFS:cd00000000000033
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11f) -> 00007fd761292660
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11e) -> 000000000000001a
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11d) -> 000000000000000a
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11c) -> 00007fd7612934d8
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11b) -> 00007fffbe24d1d0
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d11a) -> 0000000000000001
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d119) -> 00000000075bcd15
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d118) -> 00007f0000000002
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d117) -> 0000000000000001
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d116) -> 0000000000000001
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d115) -> 00007fd6dd090f93
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d114) -> 00007f0000000002
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d113) -> 0000001a00000001
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d112) -> 0000000000000001
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d111) -> 0000001c00000028
27648:S 24 Jun 12:16:32.266 # (00007fffbe24d110) -> 00007fd79aa00080
------ FAST MEMORY TEST ------
27648:S 24 Jun 12:16:32.267 # Bio thread for job type #0 terminated
27648:S 24 Jun 12:16:32.267 # Bio thread for job type #1 terminated
*** Preparing to test memory region 722000 (114688 bytes)
*** Preparing to test memory region 1c75000 (135168 bytes)
*** Preparing to test memory region 7fd6dbe00000 (3072327680 bytes)
*** Preparing to test memory region 7fd7931ff000 (8388608 bytes)
*** Preparing to test memory region 7fd793a00000 (8388608 bytes)
*** Preparing to test memory region 7fd794200000 (2097152 bytes)
*** Preparing to test memory region 7fd79aa00000 (2097152 bytes)
*** Preparing to test memory region 7fd79b0ac000 (20480 bytes)
*** Preparing to test memory region 7fd79b2c9000 (16384 bytes)
*** Preparing to test memory region 7fd79b9c7000 (16384 bytes)
*** Preparing to test memory region 7fd79b9f1000 (4096 bytes)
*** Preparing to test memory region 7fd79b9f2000 (4096 bytes)
*** Preparing to test memory region 7fd79b9f5000 (4096 bytes)
.O.O.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Suspect RAM error? Use redis-server --test-memory to verify it.
thanks!