What is the expected throughput? #7
I did this test several times with num_jobs = 3000; set_concurrency(50, 100).
One time I got this result:
The other 4 times, the job tasks ended in the mq_msgs table, but the program kept running endlessly.
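For context, here is a minimal sketch of how those parameters map onto the runner API, following the pattern from the sqlxmq README rather than the actual stress-test source; example_job and the pool argument are placeholders:

```rust
use sqlxmq::{job, CurrentJob, JobRegistry, OwnedHandle};

// Placeholder job body; the real stress test enqueues and completes many of these.
#[job]
async fn example_job(
    mut current_job: CurrentJob,
) -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
    current_job.complete().await?;
    Ok(())
}

async fn start_runner(
    pool: sqlx::PgPool,
) -> Result<OwnedHandle, Box<dyn std::error::Error + Send + Sync + 'static>> {
    let registry = JobRegistry::new(&[example_job]);
    // set_concurrency(min, max) corresponds to the (50, 100) pair mentioned above.
    let runner = registry
        .runner(&pool)
        .set_concurrency(50, 100)
        .run()
        .await?;
    Ok(runner)
}
```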
Hi @makorne, sorry I didn't get around to investigating this earlier - I would like to figure out what the problem is though.
I can't seem to reproduce it. When I try with the parameters that caused it to hang for you (num_jobs = 10000, concurrency = [50, 1000]) I get these results:
Did you ever figure out what caused this?
I've tried it too with different settings for concurrency up to (5000, 10000), but I couldn't reproduce it on my laptop.
Though after a couple of runs with (num_jobs = 100000, concurrency = [5000, 10000]) the process hangs with no activity and empty
I've tried to locate the bug somehow and got these numbers:
Here's the code I used: imbolc@3399b41
Got it, at some point
Ah, nice find! We should probably just abort if sending fails.
Sure, but I'm new to async and couldn't find a way to pass the error back from a task without sacrificing performance.
I've addressed this in 0.3.0. |
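For anyone hitting the same hang on older versions, here is a minimal sketch of one way to surface a failed channel send from a spawned task instead of letting the run continue silently; it assumes a tokio mpsc channel with hypothetical names, and is not the actual fix that landed in 0.3.0:

```rust
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

// Spawn a worker whose send failure is reported back through the JoinHandle
// instead of being silently dropped.
fn spawn_worker(tx: mpsc::Sender<u64>, job_id: u64) -> JoinHandle<Result<(), String>> {
    tokio::spawn(async move {
        // ... do the actual work here ...
        tx.send(job_id)
            .await
            .map_err(|_| format!("result channel closed before job {job_id} reported back"))
    })
}

async fn run_all(tx: mpsc::Sender<u64>) -> Result<(), String> {
    let handles: Vec<_> = (0..100u64).map(|id| spawn_worker(tx.clone(), id)).collect();
    for handle in handles {
        // Awaiting the JoinHandle turns a failed send into an early abort
        // rather than an endless wait.
        handle.await.map_err(|e| e.to_string())??;
    }
    Ok(())
}
```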
@Diggsey Sorry to comment on a closed issue. I am not seeing the throughput on the stress test either. My system specs are at the bottom. I am running Postgres 12.8, installed via the tool asdf, if that matters. With or without a release build I am getting about the same results. I even tried editing main to [50, 1000].
I know benchmarks depend on a lot of things and are really good for relative changes. I am wondering if there is anything you can think of that would cause the large difference? Thanks for your hard work! I am excited about the project.
@sbeckeriv I'm not sure TBH, this queue is not really designed for high throughput, but I do see much higher throughput than you're getting with much worse system specs. You are using an SSD, right?
@Diggsey Yes: a PM981a NVMe Samsung 1024GB (15302129), Ext4 filesystem, full disk encryption. I know it doesn't have the symbols. I am working on it. It looks like there is a long pause for some reason. I will keep digging and let you know what I find.
Hello again, I got a flame graph to report things but I don't know what to make of it. Maybe something will spark an idea for you. Thanks again for your work on this. GitHub does something funky with the SVG file. I can zoom on it locally. The gist file at least supports hover.
I tried your code on pg14 and the latest sqlxmq with:
const MIN_CONCURRENCY: usize = 50;
Hi!
Thank you for your great crate!
I am testing sqlxmq_stress and I don't see any high load on the cores.
My results:
num_jobs = 1000; set_concurrency(50, 1000)
num_jobs = 10000; set_concurrency(50, 1000)
It has taken more than 2 hours and is still running on a Ryzen 5900HX / SSD.
I think it may be hung?
How can such situations be prevented, and what is the expected throughput on recent hardware?
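One general way to keep such a run from hanging forever (a sketch of a generic pattern, not something sqlxmq_stress currently does; the function name and the 10-minute deadline are arbitrary examples) is to wrap the wait for completion in a timeout:

```rust
use std::future::Future;
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical guard: give up if the jobs haven't all reported back within
// the deadline, so a stuck run fails loudly instead of spinning for hours.
async fn wait_with_deadline<F: Future<Output = ()>>(all_jobs_done: F) -> Result<(), &'static str> {
    timeout(Duration::from_secs(600), all_jobs_done)
        .await
        .map_err(|_| "stress run did not finish within 10 minutes; assuming it hung")
}
```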