Latency is high #3
Throughput of Redcon seems really good, but latency is really high in comparison to Redis, with shallow or deep pipelines, even if you make the Redis commands no-ops.

I created a SET command that only wrote a response, to benchmark the I/O performance of Redcon versus Redis (see the sketch below). Keep in mind Redis is doing key mutations while my benchmark is not.

Redis: [benchmark output not recovered]
Redcon: [benchmark output not recovered]
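A minimal sketch of that kind of no-op SET server, written against redcon's current public API (the port and command handling are illustrative, not the exact benchmark code from this issue):

    package main

    import (
        "log"
        "strings"

        "github.com/tidwall/redcon"
    )

    func main() {
        // Answer every SET with a canned +OK and mutate nothing, so the
        // benchmark exercises only the network/protocol path.
        err := redcon.ListenAndServe(":6380",
            func(conn redcon.Conn, cmd redcon.Command) {
                switch strings.ToLower(string(cmd.Args[0])) {
                case "set":
                    conn.WriteString("OK") // no-op: respond without touching a keyspace
                case "ping":
                    conn.WriteString("PONG")
                default:
                    conn.WriteError("ERR unknown command '" + string(cmd.Args[0]) + "'")
                }
            },
            func(conn redcon.Conn) bool { return true }, // accept every connection
            func(conn redcon.Conn, err error) {},        // nothing to clean up on close
        )
        if err != nil {
            log.Fatal(err)
        }
    }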
Comments

Thanks for reporting this. Interesting. I'm going to dig a little.
I'm seeing different results than you are. In my testing, both Redis and Redcon are pretty close as far as latency is concerned. Here is what I used:

My Redis: [benchmark output not recovered]
Redcon: [benchmark output not recovered]
I'm benchmarking between two 20-core servers on 10GbE. That's a localhost test. Do you have two machines you can benchmark with over a real network?
I have a couple machines I can test with. I'll keep you posted.
I'm testing on two 20-core servers with dual 10GbE bonded to 20GbE. I'll keep testing as well to make sure it's not an anomaly.
In this benchmark the latency is close:

Redis: [benchmark output not recovered]
Redcon: [benchmark output not recovered]
At home I only have one Linux box and my MacBook, and I couldn't find my Thunderbolt/Ethernet adapter. 👎 So fuck it. I ended up spinning up two servers on DigitalOcean, both 2GB instances with private networking. I compiled Redis on both servers so that they both have the latest redis-server and redis-benchmark commands. On Server 1 I launched Redis with [command not recovered]. On Server 2 I ran the redis-benchmark utility.

Redis: [benchmark output not recovered]
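(For context: a two-machine latency run like this is typically something along the lines of redis-benchmark -h <server1-ip> -p 6379 -t set -n 1000000 -P 1 -q, where -P controls pipeline depth. The exact flags used here weren't recovered, so treat that invocation as illustrative.)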
Now when I run the Redcon clone single-threaded, the results are more in line with Redis.
Redcon: [benchmark output not recovered]
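(If "single-threaded" here means capping the Go runtime at one core, which the thread doesn't spell out, the usual knob is the GOMAXPROCS environment variable, or runtime.GOMAXPROCS(1) at startup; treat that reading as an assumption.)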
Interesting results! I wonder where the contention is that is causing the tail latency to spike so much.
Yeah, I think something else in my codebase may be interfering. I'm benchmarking with your clone code and it performs much closer to Redis. Sorry for wasting some of your time; I'll dig into what's going on. I don't see anything consuming much CPU, but there must be something interfering.
Yeah, I'm curious too. Maybe there's some network setting in [not recovered]. Another thought is around the 20-core server: perhaps try with [not recovered].
Our messages crossed paths. I'm glad to hear it may be something that's fixable. It's totally not a waste of time; I'm more than happy to help investigate these types of things. Thanks for using the project, and let me know if there's anything I can help with.
I found the problem. Add this to [file not recovered]:

[code not recovered]

Then at the top of [function not recovered]:

[code not recovered]

I'm not sure why this is causing issues, because the channel isn't being used.
It's happening on my side too. It's definitely something to do with the huge channel. I changed the size from 10,000,000 to 100,000 and it sped things up quite a bit. That channel sucks up over 2GB of RAM: 10,000,000 elements * a 216-byte struct.
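A standalone sketch of that cost, with a [216]byte array standing in for the real 216-byte struct (which wasn't recovered): make(chan T, n) allocates the entire n-slot ring buffer up front, whether or not anything is ever sent on the channel.

    package main

    import (
        "fmt"
        "runtime"
    )

    // item stands in for the 216-byte struct mentioned above.
    type item [216]byte

    func main() {
        var before, after runtime.MemStats
        runtime.ReadMemStats(&before)

        // The buffer (10,000,000 * 216 bytes ~= 2.16 GB) is allocated
        // right here, even though the channel is never used.
        ch := make(chan item, 10_000_000)

        runtime.ReadMemStats(&after)
        fmt.Printf("channel buffer: ~%d MB\n",
            (after.HeapAlloc-before.HeapAlloc)/(1<<20))
        _ = ch
    }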
Yeah. I was going to synchronize writes using a channel, but since Redcon can sometimes reach 2 million+ requests/second, I thought I would give it a channel of 10 million to give it some room so it wouldn't block.
Perhaps a sync.RWMutex would be better. You could have mu.Lock() wrap Redcon commands that write and mu.RLock() wrap commands that read. That's the pattern I use and it seems pretty good.
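A sketch of that pattern, assuming a plain in-memory map as the keyspace (the map, port, and command set are illustrative; the locking shape is the point):

    package main

    import (
        "log"
        "strings"
        "sync"

        "github.com/tidwall/redcon"
    )

    var (
        mu    sync.RWMutex
        items = make(map[string][]byte)
    )

    func main() {
        log.Fatal(redcon.ListenAndServe(":6380", handler,
            func(conn redcon.Conn) bool { return true },
            func(conn redcon.Conn, err error) {}))
    }

    func handler(conn redcon.Conn, cmd redcon.Command) {
        // Argument-count checks omitted for brevity.
        switch strings.ToLower(string(cmd.Args[0])) {
        case "set": // writers take the exclusive lock
            mu.Lock()
            // Copy the value: redcon may reuse the command buffer.
            items[string(cmd.Args[1])] = append([]byte(nil), cmd.Args[2]...)
            mu.Unlock()
            conn.WriteString("OK")
        case "get": // readers share the lock, so reads run concurrently
            mu.RLock()
            val, ok := items[string(cmd.Args[1])]
            mu.RUnlock()
            if ok {
                conn.WriteBulk(val)
            } else {
                conn.WriteNull()
            }
        default:
            conn.WriteError("ERR unknown command '" + string(cmd.Args[0]) + "'")
        }
    }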