FastAPI Performance #2377

Closed
wuluis opened this issue Nov 18, 2020 · 13 comments
Labels: question (Question or problem), question-migrate

Comments

@wuluis

wuluis commented Nov 18, 2020

The README.md for FastAPI claiming it is on par seems misleading. I think they could say it's one of the fastest Python frameworks to be more precise. Since I'm now editing this: what I'm saying is that FastAPI can't handle that many requests per second.

@wuluis wuluis added the question Question or problem label Nov 18, 2020
@wuluis wuluis closed this as completed Nov 18, 2020
@wuluis wuluis reopened this Nov 18, 2020
@wuluis
Author

wuluis commented Nov 18, 2020

Is there any way to speed up the request per second?

@includeamin

How do you run FastAPI, with Uvicorn or Gunicorn?
What is your Gunicorn config?
The requests per second you get depend on many parameters.
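
For reference, a minimal sketch of that kind of setup, assuming the app is importable as main:app (that path and the worker count are illustrative, not taken from this thread). Gunicorn reads a gunicorn.conf.py from the working directory and can manage Uvicorn worker processes:

```python
# gunicorn.conf.py -- illustrative sketch; tune values for your own app and hardware
import multiprocessing

# Run the ASGI app through Uvicorn's Gunicorn-compatible worker class
worker_class = "uvicorn.workers.UvicornWorker"

# A common starting point for the process count
workers = multiprocessing.cpu_count() * 2 + 1

# Listen on all interfaces, port 8000
bind = "0.0.0.0:8000"
```

With that file in place, something like "gunicorn main:app" would start the pool of workers.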

@wuluis
Author

wuluis commented Nov 18, 2020

I'm just running this: "uvicorn main:app --reload"

@wuluis
Author

wuluis commented Nov 18, 2020

I used autocannon for the test, "autocannon -c 100 -d 40 -p 10 localhost:8000", run 2 times; the first one was a warm-up. If I'm doing this wrong then my apologies, I'm new to using a framework with Python.
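
For context, the kind of minimal app such a benchmark would typically hit; this is an assumed sketch, not the actual code from this issue:

```python
# main.py -- assumed minimal FastAPI app, shown only to illustrate the benchmark target
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    # Trivial JSON response; any real endpoint will do more work than this
    return {"message": "Hello World"}
```

Served with "uvicorn main:app" it listens on localhost:8000, which is what the autocannon command above points at.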

@includeamin

check it out

@wuluis
Author

wuluis commented Nov 18, 2020

Okay

@wuluis
Author

wuluis commented Nov 18, 2020

Oh, it's because I'm using Windows.

@ycd
Contributor

ycd commented Nov 18, 2020

What made you think the problem is related to your OS?

@wuluis
Author

wuluis commented Nov 18, 2020

Google

@wuluis
Author

wuluis commented Nov 18, 2020

@ycd Do you have any test that could show me the requests per second you got?

@ycd
Contributor

ycd commented Nov 19, 2020

There are a lot of tools out there, like ali and hey, and well-rounded benchmarks like TechEmpower's.

@wuluis
Author

wuluis commented Nov 19, 2020

Okay thanks.

@wuluis wuluis closed this as completed Nov 19, 2020
@tiangolo
Owner

Thanks for the help here everyone! 👏 🙇


@mcbrowny35 "on par with" means:

at the same level or standard as (someone or something else)

Ref: https://www.merriam-webster.com/dictionary/on%20par%20with

"on par with Go" or "on par with Node.js" would mean that it is comparable to those languages. And those languages include a wide variety of frameworks. It doesn't mean "faster than every other framework made with any of those languages". In that case, the phrase would have been "faster than Go or Node.js".

And as you can see on the benchmarks, it can be a bit faster than some of the frameworks in those languages.

Now, if you want to improve performance and deploy to production, you shouldn't use --reload; that's exclusively for development. And you shouldn't deploy to Windows: around 90% of cloud servers run on Linux, and you should do the same.
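
A minimal sketch of that difference, assuming the app is importable as main:app (the import string and worker count here are illustrative):

```python
# run.py -- illustrative launcher contrasting dev and production-style settings
import uvicorn

if __name__ == "__main__":
    # Development only: auto-reload, single process
    # uvicorn.run("main:app", reload=True)

    # Production-style: no reload, several worker processes
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=4)
```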

@tiangolo tiangolo reopened this Feb 28, 2023
Repository owner locked and limited conversation to collaborators Feb 28, 2023
@tiangolo tiangolo converted this issue into discussion #7049 Feb 28, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
