Redis evicts almost all the keys when reaching maxmemory #4496
Comments
@trevor211 There's not enough info here to really know what happened, but I don't think the incremental eviction has anything to do with it; the old code would have stopped evicting keys as soon as it went below the limit. I think this will be solved by the "client eviction" mechanism that we still need to design (see the other ticket I linked above). @jianqingdu It's a little late, but maybe you can add more info on the traffic / load on the server at that time? Any chance there were a lot of clients doing large MGETs, or GETs in a pipeline?
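For context, here is a minimal C sketch of the eviction-loop behaviour being described; it is not the actual Redis source, and every name and number in it is an illustrative stand-in. The point it makes concrete is that eviction stops as soon as usage drops back below the limit, so the loop by itself cannot wipe out the whole keyspace in one go.

```c
/* A minimal sketch, not the actual Redis source; the names below are
 * illustrative stand-ins for Redis internals. The point it shows: the
 * eviction loop stops as soon as usage drops back below the limit. */
#include <stdio.h>

static long long maxmemory   = 100;   /* pretend limit, arbitrary units */
static long long used_memory = 130;   /* pretend current usage */

/* Evict one key: here just shrink "usage" a little; returns 0 when empty. */
static int evict_one_key(void) {
    if (used_memory <= 0) return 0;
    used_memory -= 10;
    return 1;
}

static int free_memory_if_needed(void) {
    while (used_memory > maxmemory) {
        if (!evict_one_key())
            return -1;                 /* nothing left: would report OOM */
    }
    return 0;                          /* back under the limit, stop here */
}

int main(void) {
    free_memory_if_needed();
    printf("usage after eviction: %lld (limit %lld)\n", used_memory, maxmemory);
    return 0;
}
```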
A guess, FYI @oranagra: currently, I think some of the fixes above may lessen the risk of evicting almost all keys when a limited output-buffer size is configured. But there are still bad cases; your issue #7676 may describe a main one, right?
@ShooterIT Do you mean to say that maybe this incident is already resolved by some fix that's already merged? IIRC, the fix in #5126 was for a problem that was introduced in Redis 4.0, so it's not applicable to this specific report (v2.8). What's more likely, and I think happens a lot, is that each client eats just a little bit of memory (not reaching the output buffer limit), but together (when there's a spike of traffic) all the clients consume enough output-buffer memory to induce eviction of all keys (not in a single event loop cycle).
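To make that failure mode concrete, here is a small illustrative C calculation; the client count, per-client buffer size, and maxmemory figure are assumptions, not numbers from this report. It shows how many modest output buffers can add up to a sizable share of maxmemory, which the eviction loop above then reclaims from the dataset.

```c
/* Illustrative arithmetic only; the figures are assumptions, not taken
 * from this report. Many clients each holding a modest output buffer add
 * up to memory that counts toward maxmemory, so the eviction loop keeps
 * deleting dataset keys to make room for it. */
#include <stdio.h>

int main(void) {
    unsigned long long clients        = 10000ULL;
    unsigned long long buf_per_client = 1ULL * 1024 * 1024;          /* 1 MB */
    unsigned long long maxmemory      = 40ULL * 1024 * 1024 * 1024;  /* 40 GB */

    unsigned long long total_buffers = clients * buf_per_client;     /* ~10 GB */
    printf("client output buffers: %llu bytes (%.0f%% of maxmemory)\n",
           total_buffers, 100.0 * (double)total_buffers / (double)maxmemory);
    /* Every byte held in output buffers is a byte's worth of keys the
     * eviction loop will remove, even though no single client ever hits
     * its client-output-buffer-limit. */
    return 0;
}
```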
Yes, even though I don't really know exactly what happened: Redis used 40G, so there is only a small possibility that rehashing was involved, and I also notice that maxmemory-policy is allkeys-lru. I truly think #5126 makes great sense, and several problems mix together here; memory that is incorrectly left uncounted acts like a blasting fuse and makes things terrible.

I also want to share my thoughts with you. For #7202, I think it is more likely that many clients hit the same event loop cycle and one or two client output buffers reach the configured limit when Redis has heavy write traffic. #7202 doesn't release memory, it only stops using more memory; maybe we should release the memory of clients pending async free before evicting keys (sketched after this comment)?

For most clients, I think they don't use much memory (10k extra clients only use 160M in total if each one uses 16k). Furthermore, we always count all client buffers toward maxmemory, even before evicting. After some fixes, including #7653 (which no longer keeps evicting keys in a single call), and especially #5126, I think there is little possibility of evicting too many keys. I notice the issues you mentioned in #7676 are old.
I truly agree it is a bad case, but I think it is rare.
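A rough sketch of that suggestion, assuming hypothetical names and a toy in-memory model rather than the real server code: clients already scheduled for asynchronous close are freed first, and key eviction would only be considered afterwards.

```c
/* A rough sketch of the suggestion above; every name and number here is
 * hypothetical, not the real server code. Idea: clients already scheduled
 * for asynchronous close still hold output-buffer memory, so free them
 * before any key eviction starts. */
#include <stdio.h>

#define PENDING_CLOSE 3

static long long maxmemory   = 100;                 /* arbitrary units */
static long long used_memory = 130;
static long long pending_buf[PENDING_CLOSE] = {15, 10, 20};
static int pending_left = PENDING_CLOSE;

/* Free one client that is already marked for async close, if any. */
static int free_one_pending_client(void) {
    if (pending_left == 0) return 0;
    used_memory -= pending_buf[--pending_left];      /* release its buffers */
    return 1;
}

int main(void) {
    /* Reclaim doomed clients first ... */
    while (used_memory > maxmemory && free_one_pending_client())
        ;
    /* ... and only evict keys if memory is still over the limit. */
    printf("usage after reclaiming closed clients: %lld (limit %lld)\n",
           used_memory, maxmemory);
    return 0;
}
```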
@ShooterIT I've seen this many times, including recently. I don't see a reason to think it is rarer than the case of multiple clients reaching the output buffer limit in the same event loop cycle (the latter requires bigger MGETs and the limit being reached in the same event loop cycle, whereas the former can happen on its own, is less timing-specific, and involves smaller values / commands). As I said, this specific incident can't be due to the slave-buffer mis-count, since IIRC that problem was introduced in 4.0 (which changed the way output buffers are kept). But anyway, we fixed what we fixed; what's left is to fix the remaining issue (I'm working on it slowly in the background), which will drop clients when their combined memory usage grows, and do that before evicting keys. This will probably also implicitly drop clients that reached the output buffer limit earlier (before they cause mass key eviction). Meanwhile, if you want to make a PR that drops these sooner, go ahead.
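For illustration, a design sketch of the client-eviction idea described above; it is not the code that was eventually merged, and all names and figures are made up. The heaviest clients are disconnected until their combined memory fits a budget, before any key eviction would run.

```c
/* A design sketch only; names and numbers are invented, not taken from
 * the real codebase. When the combined memory of all clients grows past
 * some budget, disconnect the heaviest clients first, before touching keys. */
#include <stdio.h>

#define NCLIENTS 4

static long long client_mem[NCLIENTS] = {5, 40, 10, 25};  /* per-client usage */
static int connected[NCLIENTS]        = {1, 1, 1, 1};
static long long client_budget        = 50;                /* arbitrary units */

static long long total_client_memory(void) {
    long long sum = 0;
    for (int i = 0; i < NCLIENTS; i++)
        if (connected[i]) sum += client_mem[i];
    return sum;
}

/* Disconnect the client holding the most memory; return 0 if none left. */
static int drop_heaviest_client(void) {
    int best = -1;
    for (int i = 0; i < NCLIENTS; i++)
        if (connected[i] && (best == -1 || client_mem[i] > client_mem[best]))
            best = i;
    if (best == -1) return 0;
    connected[best] = 0;                 /* dropping it frees its buffers */
    return 1;
}

int main(void) {
    /* Shed clients until their combined usage fits the budget; key
     * eviction would only run after this step. */
    while (total_client_memory() > client_budget && drop_heaviest_client())
        ;
    printf("combined client memory: %lld (budget %lld)\n",
           total_client_memory(), client_budget);
    return 0;
}
```

A real mechanism would of course also have to decide which clients are safe to drop (for example, not replicas), which is part of what makes the design non-trivial.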
@oranagra Oh, thanks for correcting me. Maybe the problems we have are different, for if we set …
@oranagra, we have several customers (3 in the last 6 months) that have run into this issue, and I wanted to check in on the status of the remaining work. You mentioned that you were working on it slowly in the background. Is that still the case? Your proposed solution of dropping clients would be acceptable for our case. Thanks.
@Hornswoggles Please describe your case, and mention which version you're using.
Redis version: 2.8.23. @oranagra The issue is exactly as you've described here.
@Hornswoggles Why are you using 8-year-old software?
@oranagra We run a platform. 2.8 was the initial version provided 10 years ago, and not every customer has been in a position to upgrade. Version 6.2.8 is the latest version we make available. Would upgrading to 6.2 resolve this specific issue? If so, we can request that our customers upgrade to resolve their issue.
@Hornswoggles I'm not certain it'll solve the problem; maybe it'll improve things. A lot has changed since then and it's hard to keep track of it all. One other thing that comes to mind is that in version 3.2 the reply list of the output buffer was changed from …
Redis version: 2.8.22
maxmemory-policy: allkeys-lru
Problem Description:
After Redis reached maxmemory at 40G (the server has 128G of memory), it was very slow to respond and evicted almost all keys. There is no relevant info in the log file, and we cannot reproduce the problem in the test environment.