
Execute dependencies in parallel instead of sequentially awaiting them #639

Open
sm-Fifteen opened this issue Oct 22, 2019 · 6 comments
Labels
feature New feature or request reviewed

Comments

@sm-Fifteen
Contributor

Is your feature request related to a problem? Please describe.
FastAPI's dependency injection system, while extremely useful, runs into a problem when a single route needs several long-running dependencies, as in the example below.

from fastapi import Depends, FastAPI
from asyncio import sleep as async_sleep

app = FastAPI()

async def get_big_data():
    await async_sleep(5)
    return "𝐃𝐀𝐓𝐀"

async def compute_pi():
    await async_sleep(5)
    return "About 3, I think?"

async def do_nothing_for_a_bit():
    await async_sleep(5)
    return "Gotta let the CPU spin down for a bit, otherwise it could explode."

@app.get("/do_long_thing")
async def do_long_thing(
    big_dep1: str = Depends(get_big_data),
    big_dep2: str = Depends(compute_pi),
    big_dep3: str = Depends(do_nothing_for_a_bit)
):
    return {
        'big': big_dep1,
        'pi': big_dep2,
        'sleep': big_dep3,
    }

Running that application and then trying to access /do_long_thing takes 15 seconds, even though none of those dependencies rely on each other and they could very well be run in parallel.

Describe the solution you'd like
Don't await the coroutines and threadpool calls here. Collect them into a list instead and gather them.

if sub_dependant.use_cache and sub_dependant.cache_key in dependency_cache:
    solved = dependency_cache[sub_dependant.cache_key]
elif inspect.isgeneratorfunction(call) or inspect.isasyncgenfunction(call):
    stack = request.scope.get("fastapi_astack")
    if stack is None:
        raise RuntimeError(
            async_contextmanager_dependencies_error
        )  # pragma: no cover
    solved = await solve_generator(
        call=call, stack=stack, sub_values=sub_values
    )
elif is_coroutine_callable(call):
    solved = await call(**sub_values)
else:
    solved = await run_in_threadpool(call, **sub_values)
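The proposed change can be sketched outside FastAPI: instead of awaiting each coroutine in turn, collect the unawaited coroutines and run them together with asyncio.gather. This is a minimal illustration with hypothetical names (slow_dep, solve_sequentially, solve_gathered), not FastAPI internals:

```python
import asyncio

async def slow_dep(name: str, delay: float) -> str:
    # Stand-in for a long-running dependency.
    await asyncio.sleep(delay)
    return name

async def solve_sequentially() -> list:
    # Current behaviour: each dependency is awaited in turn,
    # so total time is the *sum* of the delays (~0.3s here).
    return [
        await slow_dep("a", 0.1),
        await slow_dep("b", 0.1),
        await slow_dep("c", 0.1),
    ]

async def solve_gathered() -> list:
    # Proposed behaviour: collect the coroutines, then gather them,
    # so total time is the *maximum* of the delays (~0.1s here).
    coros = [slow_dep("a", 0.1), slow_dep("b", 0.1), slow_dep("c", 0.1)]
    return list(await asyncio.gather(*coros))

if __name__ == "__main__":
    print(asyncio.run(solve_gathered()))  # ['a', 'b', 'c']
```

Both functions return the same values; only the wall-clock time differs.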

Describe alternatives you've considered
The only solution available right now (that I can see, anyway; there might be another one) is to ask the user to run and gather those dependencies themselves, which undermines the usefulness of the dependency injection system.

@sm-Fifteen sm-Fifteen added the feature New feature or request label Oct 22, 2019
@sm-Fifteen sm-Fifteen changed the title [FEATURE] Execute dependencies in rarallel instead of sequentially awaiting them [FEATURE] Execute dependencies in parallel instead of sequentially awaiting them Oct 22, 2019
@euri10
Contributor

euri10 commented Oct 22, 2019

that's indeed a good idea

@tiangolo
Owner

This seems like a good idea, but I think it would be quite non-trivial to implement. It would require sorting and analyzing the dependencies before running them, to know which ones can be parallelized because they don't depend on something else finishing first.

But if you feel like trying to take a stab at it, a PR would be more than welcome. 💪
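The sorting described above can be sketched as a topological grouping: split the dependency graph into levels, where everything in a level depends only on earlier levels, then gather each level concurrently. A minimal sketch with a hypothetical graph and helper names (topo_levels, run_node, solve), not FastAPI's actual resolver:

```python
import asyncio

# Hypothetical dependency graph: node -> set of prerequisites.
graph = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}

def topo_levels(deps: dict) -> list:
    # Group nodes into levels; all nodes in one level are independent
    # of each other and can therefore run in parallel.
    levels, done = [], set()
    pending = dict(deps)
    while pending:
        ready = [n for n, pre in pending.items() if pre <= done]
        if not ready:
            raise ValueError("cycle in dependency graph")
        levels.append(ready)
        done.update(ready)
        for n in ready:
            del pending[n]
    return levels

async def run_node(name: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for the actual dependency call
    return name

async def solve(deps: dict) -> dict:
    # Await levels sequentially, but gather within each level.
    results = {}
    for level in topo_levels(deps):
        values = await asyncio.gather(*(run_node(n) for n in level))
        results.update(zip(level, values))
    return results
```

Here "a" and "b" run concurrently in the first level, then "c", then "d"; total time is proportional to the depth of the graph rather than the number of dependencies.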

@adriangb
Contributor

adriangb commented Oct 5, 2021

Hey @tiangolo , just a heads up it looks like @ariloulaleelay has a PR up to implement this in #3902, and I've also worked out how to do it in di (which is based on FastAPI's DI system), so there's at least a couple ideas on how to move forward with this 😄

@tiangolo tiangolo changed the title [FEATURE] Execute dependencies in parallel instead of sequentially awaiting them Execute dependencies in parallel instead of sequentially awaiting them Feb 24, 2023
@kfreezen

kfreezen commented Apr 7, 2023

I'm wondering what's up with this issue. I noticed that PR #3902 for it was closed by its author in May 2022. I've been investigating an overhead of 30–100 ms (depending on hardware performance) and wondering whether this change could bring that overhead down.

@ariloulaleelay

Nobody wanted that PR to be merged, so I gave up and closed it.

@iml1111

iml1111 commented May 29, 2023

Is this issue still unresolved? It would be very useful if this were improved.
