Redis Crashes when updating wordpress plugins manually #10313
@zoddshop From the logs it doesn't look like redis crashed. Did you try to connect to redis while it was unresponsive?
redis is unavailable for about 1 minute and then goes back to working as normal. During that one minute those logs were produced. This problem has been plaguing me for months and I am not sure how to catch the issue. It is strange how all of a sudden redis is using 50 gigs of memory.
It seems that redis gets flushed and then crashes. But it only happens when manually upgrading or installing plugins on wordpress.
I missed |
# Server
redis_version:4.0.9
# Clients
connected_clients:54
# Memory
used_memory:24598247832
# Persistence
loading:0
# Stats
total_connections_received:778
# Replication
role:master
# CPU
used_cpu_sys:1361.46
# Commandstats
cmdstat_get:calls=81914041,usec=194474986,usec_per_call=2.37
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=13608310,expires=13608310,avg_ttl=50479738
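used_memory in INFO output is reported in bytes, so the figure above works out to roughly 23 GiB at the moment this snapshot was taken. A quick conversion (the number is copied from the output above):

```shell
# used_memory from INFO is in bytes; convert to GiB (1 GiB = 1024^3 bytes)
awk 'BEGIN { printf "%.1f GiB\n", 24598247832 / (1024 ^ 3) }'
# prints 22.9 GiB
```

So at this point the instance is holding about 23 GiB, not the 50 GiB observed during the incident.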
@zoddshop It looks like redis is working fine.
My cpu is always 99% idle. THP is set to madvise. Should I set a maxmemory limit on redis? I have no idea how it can use so much RAM. And I still don't know why redis becomes unresponsive.
Yea, redis locks up on wordpress plugin updates. Seems there is a bigger issue. I will stop using redis and move to memcached.
@zoddshop During the RDB save, your plugin upgrade may be rewriting almost all of the data at the same time, which forces the forked child process to copy-on-write nearly as much memory as the main process uses. You can turn off RDB saving (config set save "") before the upgrade, and turn it back on after the upgrade.
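The suggestion above can be sketched as a redis-cli fragment (assuming a local redis on the default port; the restored schedule shown is just redis's shipped default, so substitute whatever is in your redis.conf):

```
# Before the plugin upgrade: stop RDB snapshotting so no BGSAVE fork happens
redis-cli CONFIG SET save ""

# ... perform the wordpress plugin upgrade ...

# After the upgrade: restore the snapshot schedule (redis defaults shown)
redis-cli CONFIG SET save "900 1 300 10 60 10000"
```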
The memory did not double, it went up 20x. Is there any way I can catch the error? I am really surprised no one else has had this experience with wordpress. I am using a very common cache plugin called LiteSpeed Cache.
I see the total number of keys is 13608310. Can you confirm that the memory consumption is still 20g after you restart redis?
# Server
redis_version:4.0.9
# Clients
connected_clients:83
# Memory
used_memory:5752975888
# Persistence
loading:1
# Stats
total_connections_received:284
# Replication
role:master
# CPU
used_cpu_sys:1.93
# Commandstats
cmdstat_command:calls=1,usec=257,usec_per_call=257.00
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=1894671,expires=1894671,avg_ttl=0

root@server:~# redis-cli --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest string found so far 'd4aa7terms.get_terms:c0b8a4c6fffb7653fec922faf16d5425:0.2 6614300 1645490545' with 23 bytes

-------- summary -------

Sampled 3848391 keys in the keyspace!
Biggest string found 'd4aa7post_meta.459' has 2118681 bytes

3848391 strings with 9639118066 bytes (100.00% of keys, avg size 2504.71)
Should I turn off redis persistence, snapshotting, and aof? I am not really sure what these do. |
Your last info output shows redis is still loading the dataset (only 5g in use at that point), but it's almost certain your dataset will reach 20g.
How do I change the save config? Yes, it should be on SSD. Is 2 minutes too long?
You can change
litespeed just uses redis as a cache and will rebuild it automatically, so you can set the RDB save interval to be longer. Maybe 2 minutes; I'm not sure whether that is too long, since redis is saving at least 10 million keys and there is a lot of large data (up to 2M).
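For reference, the schedule being discussed lives in the save lines of redis.conf; the values below are redis's shipped defaults (shown for illustration, not a tuned recommendation for this dataset):

```
# redis.conf — trigger a BGSAVE after <seconds> if at least <changes> writes
save 900 1
save 300 10
save 60 10000   # on a busy cache this rule fires roughly every 60 seconds
```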
I turned off persistence: RDB snapshots and AOF. That is not the issue. I don't know what is wrong with Redis flushdb...
@zoddshop Do you mean the plugin upgrade uses flushdb?
I think so.. |
The cache plugin litespeed cache is using flushdb |
@zoddshop Can you share the logs after you turn off persistence? |
# Server
redis_version:4.0.9
# Clients
connected_clients:36
# Memory
used_memory:22239084696
# Persistence
loading:0
# Stats
total_connections_received:2621
# Replication
role:master
# CPU
used_cpu_sys:2797.99
# Commandstats
cmdstat_echo:calls=965323,usec=486607,usec_per_call=0.50
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=9008002,expires=9008002,avg_ttl=54092424
1320:signal-handler (1646052500) Received SIGTERM scheduling shutdown...
1326:M 28 Feb 07:50:23.742 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Commandstats doesn't see |
Yes, I am getting timeouts. Should I turn off THP?
There's no problem with wordpress.
@zoddshop You can try to turn off THP, and change the somaxconn value to 1024. |
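A sketch of both changes on a typical Linux box (needs root; neither setting survives a reboot unless also added to /etc/sysctl.conf or an init script):

```
# Turn THP fully off (redis recommends against THP for fork-heavy workloads)
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Raise the kernel accept-queue limit so redis's tcp-backlog of 511 applies
sysctl -w net.core.somaxconn=1024
```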
Nothing works. This is the error.

Error Details
An error of type E_ERROR was caused in line 540 of the file /usr/local/lsws/Example/html/wordpress/wp-content/plugins/litespeed-cache/src/object-cache.cls.php. Error message: Uncaught RedisException: read error on connection to localhost:6379 in /usr/local/lsws/Example/html/wordpress/wp-content/plugins/litespeed-cache/src/object-cache.cls.php:540
@zoddshop Can you change config |
It happened again
30036:M 10 Mar 13:14:22.178 - DB 0: 14679805 keys (14679805 volatile) in 16777216 slots HT.
@zoddshop From the logs, it looks like a lot of connections are being opened and frequently dropped during the plugin upgrade process.
ulimit is 513158. How do I check for CLOSE_WAIT connections? And I don't know if I am using connection pooling.
Use |
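One way to tally redis connections by TCP state. The here-doc below is hypothetical sample data so the awk pipeline stands alone; in real use, feed it live output instead, e.g. ss -tan | awk 'NR > 1 ...' (or netstat -ant on older systems):

```shell
# Tally TCP connection states: column 1 of `ss -tan` output, header skipped.
# The here-doc is canned sample data; replace it with real `ss -tan` output.
awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }' <<'EOF'
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
ESTAB      0      0      127.0.0.1:6379       127.0.0.1:50010
ESTAB      0      0      127.0.0.1:6379       127.0.0.1:50012
CLOSE-WAIT 1      0      127.0.0.1:6379       127.0.0.1:50014
EOF
```

This prints each state with its count (here ESTAB 2 and CLOSE-WAIT 1, in either order). A pile of lingering CLOSE_WAIT sockets on port 6379 would point at the client side (php) not closing connections it has finished with.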
I updated a plugin successfully this time. I saw about 3-4 close waits |
i think wordpress can add pooling with should I be using that? |
but I think this is a redis problem not a mysql problem |
@zoddshop Yes, I think it can be turned on. |
There's no connection pooling for wordpress, but I think pconnect will use persistent connections. There is very little documentation on persistent connections. How can I check if it is working and actually persistent now?
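One way to check from the redis side (a live-server CLI fragment, so just a sketch): with working persistent connections, connected_clients should stay flat across page loads instead of climbing and falling, and the age field of each client in CLIENT LIST should keep growing rather than resetting on every request.

```
redis-cli INFO clients | grep connected_clients
redis-cli CLIENT LIST    # each line shows age=<seconds the connection has lived>
```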
Even turning on persistent connection does not help. This is definitely a problem with redis. |
1311:M 12 Mar 20:23:15.606 - 34 clients connected (0 slaves), 44842515160 bytes in use |
The error comes from this php line |
Have you tried using |
Do you mean to change default_socket_timeout for php? I will try that.
I'm not sure why there is this discrepancy; maybe it has something to do with the request timeout. The php-fpm process will time out, and if it does it will probably be killed by nginx (or whatever is in front of it), causing the upgrade to fail. You also need to check whether php-fpm has been killed.
It is not a timeout issue. It just hangs forever when updating. No CLOSE_WAIT connections either. Resources are all 99% free. This really seems to be a redis bug.
Hmm, it seems the upgrade succeeded this time but wordpress doesn't know it. I thought it was hanging, but after refreshing the admin page it seems everything was installed. I think I'll give up on this issue.
Crash report
[18-Feb-2022 08:21:54 UTC] [DBG] Now start to flush Db
[18-Feb-2022 08:22:57 UTC] socket error on read socket
[18-Feb-2022 08:22:57 UTC] socket error on read socket
[18-Feb-2022 08:22:58 UTC] socket error on read socket
[18-Feb-2022 08:23:05 UTC] socket error on read socket
[18-Feb-2022 08:23:07 UTC] socket error on read socket
[18-Feb-2022 08:23:07 UTC] socket error on read socket
[18-Feb-2022 08:23:09 UTC] socket error on read socket
[18-Feb-2022 08:23:10 UTC] Connection timed out
[18-Feb-2022 08:23:12 UTC] Connection timed out
[18-Feb-2022 08:23:13 UTC] Connection timed out
[18-Feb-2022 08:23:17 UTC] Connection timed out
[18-Feb-2022 08:23:25 UTC] Connection timed out
48373:C 18 Feb 03:16:00.579 * RDB: 2428 MB of memory used by copy-on-write
1329:M 18 Feb 03:16:02.062 * Background saving terminated with success
1329:M 18 Feb 03:17:03.027 * 10000 changes in 60 seconds. Saving...
1329:M 18 Feb 03:17:04.967 * Background saving started by pid 48637
48637:C 18 Feb 03:19:51.970 * DB saved on disk
48637:C 18 Feb 03:19:53.080 * RDB: 2930 MB of memory used by copy-on-write
1329:M 18 Feb 03:19:54.647 * Background saving terminated with success
1329:M 18 Feb 03:20:55.019 * 10000 changes in 60 seconds. Saving...
1329:M 18 Feb 03:20:57.091 * Background saving started by pid 48942
(The php error message came during this time period but no records in redis log)
48942:C 18 Feb 03:23:39.431 * DB saved on disk
48942:C 18 Feb 03:23:40.380 * RDB: 49370 MB of memory used by copy-on-write
1329:M 18 Feb 03:24:12.761 * Background saving terminated with success
1329:M 18 Feb 03:25:13.027 * 10000 changes in 60 seconds. Saving...
1329:M 18 Feb 03:25:13.080 * Background saving started by pid 2429
2429:C 18 Feb 03:25:14.996 * DB saved on disk
2429:C 18 Feb 03:25:15.034 * RDB: 6 MB of memory used by copy-on-write
1329:M 18 Feb 03:25:15.084 * Background saving terminated with success
1329:M 18 Feb 03:26:16.091 * 10000 changes in 60 seconds. Saving...
1329:M 18 Feb 03:26:16.153 * Background saving started by pid 2448
2448:C 18 Feb 03:26:18.423 * DB saved on disk
2448:C 18 Feb 03:26:18.463 * RDB: 12 MB of memory used by copy-on-write
1329:M 18 Feb 03:26:18.554 * Background saving terminated with success
1329:M 18 Feb 03:27:19.022 * 10000 changes in 60 seconds. Saving...
1329:M 18 Feb 03:27:19.090 * Background saving started by pid 2610
2610:C 18 Feb 03:27:22.866 * DB saved on disk
Additional information