FastAPI Performance #2377
Comments
Is there any way to speed up the requests per second?
How do you run FastAPI with uvicorn or gunicorn?
I'm just doing this: "uvicorn main:app --reload"
I used autocannon for the test: "autocannon -c 100 -d 40 -p 10 localhost:8000", run twice; the first run was a warm-up. If I'm doing this wrong then my apologies, I'm new to using a framework with Python.
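For context, the setup described above might look like this side by side. The `--reload` flag enables a file-watching reloader meant for development, so a fairer benchmark baseline would drop it (the `--workers 4` value below is an assumption; it should be tuned to the machine's CPU core count):

```shell
# Development invocation from the comment above: single process,
# with the auto-reload file watcher enabled (adds overhead):
uvicorn main:app --reload

# Closer to a production baseline: no reloader, multiple worker
# processes (4 is an assumed value; tune to your core count):
uvicorn main:app --workers 4 --port 8000

# Then, in another terminal, the autocannon run described above:
# 100 connections, 40 seconds, HTTP pipelining factor 10.
autocannon -c 100 -d 40 -p 10 localhost:8000
```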
Okay
Oh, it's because I'm using Windows
What made you think the problem is related to your OS? |
@ycd Do you have any test that could show me the requests per second you got?
Okay thanks. |
Thanks for the help here everyone! 👏 🙇

@mcbrowny35 "on par with" means:

Ref: https://www.merriam-webster.com/dictionary/on%20par%20with

"On par with Go" or "on par with Node.js" would mean that it is comparable to those languages, and those languages include a wide variety of frameworks. It doesn't mean "faster than every other framework made with any of those languages"; in that case, the phrase would have been "faster than Go or Node.js". And as you can see in the benchmarks, it can be a bit faster than some of the frameworks in those languages.

Now, if you want to improve performance and deploy to production, you shouldn't use
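On the production-deployment point, the Gunicorn documentation suggests `(2 × CPU cores) + 1` as a starting point for the worker-process count. A small sketch of that heuristic (the function name is mine, and the formula is only a starting point to tune by benchmarking, not a guarantee):

```python
import multiprocessing


def recommended_workers(cores=None):
    """Conventional Gunicorn starting point: (2 * cores) + 1 workers."""
    if cores is None:
        # Fall back to the number of CPUs visible to this machine.
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1


# A 4-core machine gets 9 workers under this rule, which could then be
# passed to e.g. "gunicorn -w 9 -k uvicorn.workers.UvicornWorker main:app".
print(recommended_workers(4))  # → 9
```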
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
The README.md file for FastAPI saying it is "on par" seems misleading; I think it would be more precise to say it's one of the fastest Python frameworks. To edit what I'm saying: FastAPI can't handle that many requests per second.