
Performance degrade 6.3.1 vs 6.2.2 #470

Open
academe-01 opened this issue Jul 15, 2022 · 10 comments

@academe-01

Hi,
I tested on different systems, but the results are the same every time: 6.3.x performs worse than 6.2.x.

redis-benchmark -q -n 1000000 --threads 64

[attached image: redis-benchmark results]
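
For anyone who wants to reproduce the comparison, a minimal side-by-side run could look like the sketch below. The ports 6322 and 6331 are placeholders for wherever the 6.2.2 and 6.3.1 instances happen to listen; the benchmark flags are the same ones used above.

    # Run the identical workload against both servers and compare the numbers.
    # The -p values are hypothetical; point them at your own 6.2.2 and 6.3.1 instances.
    redis-benchmark -q -n 1000000 --threads 64 -p 6322 > bench-6.2.2.txt
    redis-benchmark -q -n 1000000 --threads 64 -p 6331 > bench-6.3.1.txt
    diff -y bench-6.2.2.txt bench-6.3.1.txt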


smarakdas314 commented Aug 22, 2022

Seconding that 👍. Latency also grew 4-5x for us, and that's a lot. We're waiting for the next release to be better and faster than the older 6.2.x version, or at least to match its performance while adding the new functionality and bug fixes.

[attached image: keydbupgrade]


jgerry2002 commented Aug 26, 2022

Same thing for us. On a single server it works OK; we had a master/master cluster with a small dataset: 6 or 7 streams, a couple with around 500,000 entries, most of them smaller, plus the usual assortment of zsets, hashes, plain keys, etc.

We saw the same issues. Latency was so bad that the UI or underlying processes would receive a disconnect from the server.

Summary:

  • AOF file loads are huge and time-consuming
  • AOF partial reloads fail almost every time
  • Lots of errors regarding RREPLAY
  • Memory usage is doubled or tripled
  • KeyDB itself stalls. We tried to adjust the haproxy timeouts, etc., but kept getting check errors where KeyDB would not respond to the haproxy health check. The behavior was noticeable on the CLI: typing a simple command to check memory would hang for 1-3 seconds or more (see the latency sketch below).
  • We upsized and tried two different VM types, and the problems continued. We tried both ARM and x86 builds of 6.3.1 and saw the behavior on both.

It really seems to be something in the replication that eats up resources over time. Unfortunately, we had to roll back to 6.2.2 for stability reasons and an approaching deadline.
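
One way to put numbers on the stalls mentioned in the list above is the sketch below. It is only an illustration: host and port are placeholders, and it assumes keydb-cli keeps redis-cli's standard latency modes.

    # Continuously sample round-trip latency; stats are printed every 15 s by default
    keydb-cli -h <host> -p 6379 --latency-history
    # Time one of the "simple commands" that was observed to hang
    time keydb-cli -h <host> -p 6379 info memory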


micw commented Sep 1, 2022

Same here. We run a bunch of operations for each event at a frequency of ~100 events/s. The operations are (in this order):
get, set, expire, hget, hset, expiremember
On 6.2 this took ~40-50 ms (95th percentile). With 6.3.1, the required time increased to >200 ms and the throughput dropped to 50 events/s (so the impact is even higher than a factor of 4x).

KeyDB's CPU load increased significantly (~50%), and memory usage grew by ~30-40%.
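
For illustration, one such per-event sequence could look like the following. Only the command order comes from the description above; the key names, values, and TTLs are invented, and EXPIREMEMBER is the KeyDB-specific per-field expiry.

    # Hypothetical keys and values; EXPIREMEMBER key field seconds is KeyDB-specific
    keydb-cli GET event:1234:state
    keydb-cli SET event:1234:state pending
    keydb-cli EXPIRE event:1234:state 3600
    keydb-cli HGET events:index 1234
    keydb-cli HSET events:index 1234 pending
    keydb-cli EXPIREMEMBER events:index 1234 3600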

@avermeer

Still no reaction from KeyDB dev on this major performance issue?

I'm afraid that such an issue could divert current KeyDB users to other Redis alternatives such as dragonflydb or cachegrand...

@silviucpp

I think @JohnSully already clarified this here: #494 (comment)


micw commented Nov 13, 2022

I can see no clarification about this particular issue there.


micw commented Aug 28, 2023

@JohnSully with 6.3.2 and 6.3.3, two stable releases are now available. Is there a plan for if/when this performance issue will be addressed?

Kind regards,
Michael.

@JohnSully (Collaborator)

Hi @micw, I have a 10% performance improvement coming if expires are used. The second highest priority is going to be FLASH performance.

I don't have any immediate plans to address performance without expires, but I'm expecting that to improve a bit as part of the FLASH investigation as we take more profiles.


micw commented Aug 28, 2023

Hello @JohnSully, thank you for the fast reply. Is the +10% compared to 6.2 or to 6.3.1 (which has the massive performance degradation described in this issue)?


nickchomey commented Aug 31, 2023

@micw can you confirm, then, that these issues persist in 6.3.3? Are you still using 6.2.2?

@JohnSully more generally than this ongoing issue, it seems like the development velocity of KeyDB has slowed down tremendously compared to before. Someone linked above to a comment you made about being unexpectedly focused on other projects within Snap, which is obviously fine. But do you anticipate returning focus to KeyDB at some point, and can you perhaps even estimate when? It would be helpful to me and others so that we can select our tooling accordingly. I'd REALLY love to use KeyDB, but it's hard to justify when it seems largely inactive. Thanks!
