VM can't support Lists or Sets with many elements #1
Comments
Hello, can you please elaborate a bit? Like how many elements, how to replicate, and the error reported by Redis. Thanks!
When I use a command like `redis-cli -r 10000000 lpush mylist aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa` [1k content], Redis uses a lot of memory.
redis_version:1.3.3. This info suggests Redis can't swap out even one key with VM.
My redis config has vm-max-memory 0.
Hello, thanks for the info. Could you please join IRC (irc.freenode.org, #redis) for a chat?
Can't remember the details, but this issue is now closed. |
## Issues and solutions from #12817

1. Touching ProcessingEventsWhileBlocked and calling moduleCount() without the GIL in afterSleep()

- Introduced: Version: 7.0.0, PR: #9963
- Harm Level: Very High. If the module thread calls `RM_Yield()` before the main thread enters afterSleep() and increments `ProcessingEventsWhileBlocked`, the main thread will not wait for the GIL, which can lead to all kinds of unforeseen problems, including memory data corruption.
- Initial / Abandoned Solution:
  * Added the `__thread` specifier to ProcessingEventsWhileBlocked. `ProcessingEventsWhileBlocked` protects against nested event processing, but event processing in the main thread and module threads should be completely independent, so TLS is safer.
  * Added a cached module count that tracks the current number of modules, to avoid calling `dictSize()`.
- Related Warnings:

```
WARNING: ThreadSanitizer: data race (pid=1136)
Write of size 4 at 0x0001045990c0 by thread T4 (mutexes: write M0):
  #0 processEventsWhileBlocked networking.c:4135 (redis-server:arm64+0x10006d124)
  #1 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
  #2 bg_call_worker <null>:83232836 (blockedclient.so:arm64+0x16a8)

Previous read of size 4 at 0x0001045990c0 by main thread:
  #0 afterSleep server.c:1861 (redis-server:arm64+0x100024f98)
  #1 aeProcessEvents ae.c:408 (redis-server:arm64+0x10000fd64)
  #2 aeMain ae.c:496 (redis-server:arm64+0x100010f0c)
  #3 main server.c:7220 (redis-server:arm64+0x10003f38c)
```

2. aeApiPoll() is not thread-safe

When RM_Yield handles events in a module thread and the main thread has not yet entered `afterSleep()`, both the module thread and the main thread may touch `server.el` at the same time.

- Introduced: Version: 7.0.0, PR: #9963
- Old / Abandoned Solution: add a new mutex protecting the window between beforeSleep() and afterSleep(). Defect: if the main thread enters the ae loop with no IO events pending, it waits until the next timeout or the next event, and the module thread hangs until the main thread leaves the event loop.
- Related Warnings:

```
SUMMARY: ThreadSanitizer: data race ae_kqueue.c:55 in addEventMask
==================
==================
WARNING: ThreadSanitizer: data race (pid=14682)
Write of size 4 at 0x000100b54000 by thread T9 (mutexes: write M0):
  #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
  #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
  #2 processEventsWhileBlocked networking.c:4138 (redis-server:arm64+0x10006d3c4)
  #3 RM_Yield module.c:2410 (redis-server:arm64+0x10018b66c)
  #4 bg_call_worker <null>:16042052 (blockedclient.so:arm64+0x169c)

Previous write of size 4 at 0x000100b54000 by main thread:
  #0 aeApiPoll ae_kqueue.c:175 (redis-server:arm64+0x100010588)
  #1 aeProcessEvents ae.c:399 (redis-server:arm64+0x10000fb84)
  #2 aeMain ae.c:496 (redis-server:arm64+0x100010da8)
  #3 main server.c:7238 (redis-server:arm64+0x10003f51c)
```

## The final fix, as in the comments: #12817 (comment)

Optimized solution based on the above comment: we add `module_gil_acquring` to indicate whether the main thread is currently in the acquiring-GIL state. When the module thread starts to yield, there are two possibilities (we assume the caller holds the GIL):

1. The main thread is between beforeSleep() and afterSleep(), i.e. `module_gil_acquring` is not 1. The module thread wakes the main thread through the pipe and leaves the yield; by the next yield the main thread may already be in the acquiring-GIL state.
2. The main thread is in the acquiring-GIL state. The module thread releases the GIL, yielding the CPU to give the main thread an opportunity to start event processing, and then acquires the GIL again once the main thread releases it.

This is the direction mentioned in #12817 (comment).

---------

Co-authored-by: Oran Agra <oran@redislabs.com>
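To make the two-case protocol above concrete, here is a minimal standalone model (not the actual Redis patch): a pthread mutex stands in for the module GIL, an atomic flag stands in for `module_gil_acquring`, and `module_yield`/`wake_main` are hypothetical names.

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

/* 'gil' models the module GIL; 'main_acquiring' models module_gil_acquring,
 * set by the main thread just before it blocks on the GIL in afterSleep(). */
static pthread_mutex_t gil = PTHREAD_MUTEX_INITIALIZER;
static atomic_int main_acquiring;

/* Called from a module thread that currently holds 'gil'. */
static void module_yield(void (*wake_main)(void)) {
    if (!atomic_load(&main_acquiring)) {
        /* Case 1: the main thread sits between beforeSleep() and
         * afterSleep(). Wake it through the event pipe and return,
         * still holding the GIL; a later yield may find it acquiring. */
        wake_main();
    } else {
        /* Case 2: the main thread is blocked acquiring the GIL. Hand the
         * lock over, give up the CPU so the main thread can run one event
         * processing pass, then take the GIL back before returning. */
        pthread_mutex_unlock(&gil);
        sched_yield();
        pthread_mutex_lock(&gil);
    }
}
```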
Fix some issues found in the thread sanitizer report.

1. When the main thread is updating daylight_active, other threads (bio, module threads) may be writing logs at the same time.

```
WARNING: ThreadSanitizer: data race (pid=661064)
Read of size 4 at 0x55c9a4d11c70 by thread T2:
  #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)

Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):
  #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)
  #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)
  #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)
  #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)
  #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)
  #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
```

2. Thread leaks in module tests.

```
WARNING: ThreadSanitizer: thread leak (pid=668683)
Thread T13 (tid=670041, finished) created by main thread at:
  #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)
  #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)
  #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)
  #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)
  #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)
  #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
  #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)
  #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)
  #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)
```
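One way to resolve the first race is to publish the flag atomically; below is a minimal sketch, assuming relaxed ordering is sufficient because the value is purely advisory (the helper names are illustrative, not the actual patch):

```c
#include <stdatomic.h>
#include <time.h>

/* Make the cached DST flag atomic so the main thread's updates and
 * concurrent log writers (bio, module threads) cannot race on it. */
static _Atomic int daylight_active;

/* Main thread: refresh the cached DST flag. */
void updateDaylightActive(void) {
    time_t now = time(NULL);
    struct tm tm;
    localtime_r(&now, &tm);
    atomic_store_explicit(&daylight_active, tm.tm_isdst, memory_order_relaxed);
}

/* Any thread (e.g. a log writer): read the cached flag without a data race. */
int getDaylightActive(void) {
    return atomic_load_explicit(&daylight_active, memory_order_relaxed);
}
```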
…IREAT commands (#13310)

1. Don't allow the expire parameter of the HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands to be negative.
2. Remove dead code reported by Coverity: when `unit` is not `UNIT_SECONDS`, the second `if (expire > (long long) EB_EXPIRE_TIME_MAX)` is dead code.

```c
/* t_hash.c */
2988        /* Check expire overflow */
            cond_at_most: Condition expire > 281474976710655LL, taking false branch.
            Now the value of expire is at most 281474976710655.
2989        if (expire > (long long) EB_EXPIRE_TIME_MAX) {
2990            addReplyErrorExpireTime(c);
2991            return;
2992        }

2994        if (unit == UNIT_SECONDS) {
2995            if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
2996                addReplyErrorExpireTime(c);
2997                return;
2998            }
2999            expire *= 1000;
3000        } else {
            at_most: At condition expire > 281474976710655LL, the value of expire
            must be at most 281474976710655.
            dead_error_condition: The condition expire > 281474976710655LL cannot be true.
3001            if (expire > (long long) EB_EXPIRE_TIME_MAX) {
            CID 494223: (#1 of 1): Logically dead code (DEADCODE)
            dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
3002                addReplyErrorExpireTime(c);
3003                return;
3004            }
3005        }
```

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
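For reference, here is a compact sketch of the validation order the two fixes imply (reject negatives first, bound-check seconds before converting, a single live millisecond-range check); `EB_EXPIRE_TIME_MAX` mirrors the 48-bit value in the Coverity output, and `validateHashExpire` is a hypothetical helper rather than the Redis function:

```c
#include <stdbool.h>

/* 48-bit maximum expire time in milliseconds, as in the output above. */
#define EB_EXPIRE_TIME_MAX 0xFFFFFFFFFFFFLL /* 281474976710655 */

/* Validate an expire argument and normalize it to milliseconds.
 * Returns false for negative or out-of-range values. */
static bool validateHashExpire(long long expire, bool unit_seconds,
                               long long *expire_ms) {
    if (expire < 0) return false;               /* fix 1: reject negatives */
    if (unit_seconds) {
        /* Bound-check before converting so the multiply cannot overflow. */
        if (expire > EB_EXPIRE_TIME_MAX / 1000) return false;
        expire *= 1000;
    } else if (expire > EB_EXPIRE_TIME_MAX) {   /* fix 2: the only ms check */
        return false;
    }
    *expire_ms = expire;
    return true;
}
```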
…ytes struct to 64 Bytes): reduces LLC misses and Memory Loads (#13296)

This PR takes the getKeysResult struct from 33 cachelines (by default a 256-entry static buffer):

```
root@hpe10:~/redis# pahole -p ./src/server.o -C getKeysResult
typedef struct {
        keyReference               keysbuf[256];        /*     0  2048 */
        /* --- cacheline 32 boundary (2048 bytes) --- */
        /* typedef keyReference */ struct {
                int                pos;
                int                flags;
        } *keys;                                        /*  2048     8 */
        int                        numkeys;             /*  2056     4 */
        int                        size;                /*  2060     4 */

        /* size: 2064, cachelines: 33, members: 4 */
        /* last cacheline: 16 bytes */
} getKeysResult;
```

to 1 cacheline, with a static buffer of 6 keys per command:

```
root@hpe10:~/redis# pahole -p ./src/server.o -C getKeysResult
typedef struct {
        int                        numkeys;             /*     0     4 */
        int                        size;                /*     4     4 */
        keyReference               keysbuf[6];          /*     8    48 */
        /* typedef keyReference */ struct {
                int                pos;
                int                flags;
        } *keys;                                        /*    56     8 */

        /* size: 64, cachelines: 1, members: 4 */
} getKeysResult;
```

We get around 1.5% higher ops/sec, and confirmation of around 15% fewer LLC loads and 37% fewer stores in getNodeByQuery.

| Function / Call Stack | CPU Time: Difference | CPU Time: 9462436 | CPU Time: this PR | Loads: Difference | Loads: 9462436 | Loads: this PR | Stores: Difference | Stores: 9462436 | Stores: this PR |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| getNodeByQuery | 0.753767 | 1.57118 | 0.817416 | 144297829 (15% fewer loads) | 920575969 | 776278140 | 367607824 (37% fewer stores) | 991642384 | 624034560 |

## Results on the client side

### Baseline

```
taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 180 secs]  0 threads: 110333450 ops, 604992 (avg: 612942) ops/sec, 84.75MB/sec (avg: 85.86MB/sec),  0.82 (avg:  0.81) msec latency

2         Threads
25        Connections per thread
180       Seconds

ALL STATS
======================================================================================================================================================
Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
------------------------------------------------------------------------------------------------------------------------------------------------------
Sets       612942.14          ---          ---         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12
Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00
Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          ---
Totals     612942.14         0.00         0.00         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12
```

### Comparison

```
taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 180 secs]  0 threads: 111731310 ops, 610195 (avg: 620707) ops/sec, 85.48MB/sec (avg: 86.95MB/sec),  0.82 (avg:  0.80) msec latency

2         Threads
25        Connections per thread
180       Seconds

ALL STATS
======================================================================================================================================================
Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec
------------------------------------------------------------------------------------------------------------------------------------------------------
Sets       620707.72          ---          ---         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78
Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00
Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          ---
Totals     620707.72         0.00         0.00         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78
```

Co-authored-by: filipecosta90 <filipecosta.90@gmail.com>
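The new layout can be reproduced as a small self-checking sketch; the field order and the 6-entry buffer follow the pahole output above, while `MAX_KEYS_BUFFER` and the assertion are illustrative, and the 64-byte total assumes an LP64 target:

```c
#include <assert.h>

typedef struct {
    int pos;    /* position of the key within the argument vector */
    int flags;  /* key access flags */
} keyReference;

#define MAX_KEYS_BUFFER 6 /* inline fast path covering most commands */

typedef struct {
    int numkeys;                           /* keys found              ( 4 bytes) */
    int size;                              /* capacity of keys[]      ( 4 bytes) */
    keyReference keysbuf[MAX_KEYS_BUFFER]; /* inline buffer           (48 bytes) */
    keyReference *keys;                    /* heap spill for >6 keys  ( 8 bytes) */
} getKeysResult;                           /* 64 bytes = one cacheline */

/* Compile-time check that the struct stays within one 64-byte cacheline. */
static_assert(sizeof(getKeysResult) == 64,
              "getKeysResult must fit in a single cacheline");
```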