This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


How does one setup a global timeout to all requests? #1752

Closed
9 tasks done
PLNech opened this issue Jul 21, 2020 · 22 comments
Comments

@PLNech
Contributor

PLNech commented Jul 21, 2020

First check

All checks ticked; the first commitment option was chosen:
  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn't find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google "How to X in FastAPI" and didn't find any information.
  • I already read and followed all the tutorial in the docs and didn't find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.
  • After submitting this, I commit to one of:
    • Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.
    • I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
    • Implement a Pull Request for a confirmed bug.

Description

Hi there, first of all many thanks for the work on FastAPI - this is now my go-to framework for building Python-based REST APIs :)

My question is about adding a global timeout to any request served by the server. My use case includes occasionally long loading times when I have to load a new model for a given user request; instead of blocking for 30-50 s (which would often time out on the user side due to default connection timeouts), I would like to return a temporary error whenever any endpoint takes more than a given delay to complete.

Example

Today the only way I found to implement a timeout on every request is to wrap every endpoint method within a context manager like this one:

from contextlib import contextmanager
import signal

@contextmanager
def timeout_after(seconds: int):
    # Register a handler to raise a TimeoutError on SIGALRM.
    signal.signal(signal.SIGALRM, raise_timeout)
    # Schedule the signal to be sent after `seconds`.
    signal.alarm(seconds)

    try:
        yield
    finally:
        # Disarm the alarm and ignore the signal so it won't be
        # triggered if the timeout is not reached.
        signal.alarm(0)
        signal.signal(signal.SIGALRM, signal.SIG_IGN)

def raise_timeout(_, frame):
    raise TimeoutError

# Used as such (timeout_after must sit below app.get so that the route
# registers the wrapped function):
@app.get("/1/version", tags=["Meta"],
         description="Check if the server is alive, returning the version it runs.",
         response_model=Version,
         response_description="the version of the API currently running.")
@timeout_after(5)
async def version() -> Version:
    return current_version

This is however quite cumbersome to add on every single function decorated as an endpoint.
Besides, it feels hacky: isn't there a better way to define app-level timeouts broadly, with a common handler, maybe akin to how ValidationErrors can be managed in a single global handler?

Environment

  • OS: [e.g. Linux / Windows / macOS]: Linux
  • FastAPI Version [e.g. 0.3.0]: 0.58.0
  • Python version: 3.7.7

Additional context

I looked into Starlette's timeout support to see if this was handled at a lower level, but to no avail.

@PLNech PLNech added the question Question or problem label Jul 21, 2020
@ZionStage

Hi @PLNech

I am developing my own API using FastAPI and ran into the same "problem" as I am trying to add a global timeout to all my requests.

I am still new to FastAPI, but from what I understand the "FastAPI" way to do this would be to use a middleware, since middlewares by nature run on every request. While searching for how to do so I found this
gitter community thread and thought it could maybe help you.

I am going to implement both your solution and the middleware-based one and see which works best. Also note that there seems to be a problem with starlette 0.13.3 and higher, so keep that in mind.

Also, if you have found a workaround by now, I am more than interested.

Hope this helps a bit

@PLNech
Contributor Author

PLNech commented Aug 27, 2020

Hi @ZionStage, thanks for your message! I haven't found a workaround for now. Looking forward to continuing this conversation with you as we move forward on this topic :)

@ZionStage

Hey @PLNech

I have implemented and tested the middleware and it seems to be working fine for me. Here is my code

import asyncio
import random
import time

import pytest

from fastapi import FastAPI, Request, Response, HTTPException
from fastapi.responses import JSONResponse
from httpx import AsyncClient
from starlette.status import HTTP_504_GATEWAY_TIMEOUT

REQUEST_TIMEOUT_ERROR = 1  # Threshold

app = FastAPI() # Fake app

# Creating a test path
@app.get("/test_path")
async def route_for_test(sleep_time: float) -> None:
    await asyncio.sleep(sleep_time)

# Adding a middleware returning a 504 error if the request processing time is above a certain threshold
@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        start_time = time.time()
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_ERROR)

    except asyncio.TimeoutError:
        process_time = time.time() - start_time
        return JSONResponse({'detail': 'Request processing time exceeded limit',
                             'processing_time': process_time},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)

# Testing whether or not the middleware triggers
@pytest.mark.asyncio
async def test_504_error_triggers():
    # Creating an asynchronous client to test our asynchronous function
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/test_path?sleep_time=3")
    content = response.json()
    assert response.status_code == HTTP_504_GATEWAY_TIMEOUT
    assert content['processing_time'] < 1.1

# Testing middleware's consistency for requests having a processing time close to the threshold 
@pytest.mark.asyncio
async def test_504_error_consistency():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        errors = 0
        sleep_time = REQUEST_TIMEOUT_ERROR*0.9
        for i in range(100):
            response = await ac.get("/test_path?sleep_time={}".format(sleep_time))
            if response.status_code == HTTP_504_GATEWAY_TIMEOUT:
                errors += 1
        assert errors == 0

# Testing middleware's precision
# ie : Testing if it triggers when it should not and vice versa
@pytest.mark.asyncio
async def test_504_error_precision():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        should_trigger = []
        should_pass = []
        have_triggered = []
        have_passed = []
        for i in range(200):
            sleep_time = 2 * REQUEST_TIMEOUT_ERROR * random.random()
            if sleep_time < 1.1:
                should_pass.append(i)
            else:
                should_trigger.append(i)
            response = await ac.get("/test_path?sleep_time={}".format(sleep_time))
            if response.status_code == HTTP_504_GATEWAY_TIMEOUT:
                have_triggered.append(i)
            else:
                have_passed.append(i)
        assert should_trigger == have_triggered

I created three tests. The first one is designed to see whether or not the middleware actually does its job.
The second one just checks for consistency problems with a single request.
The third one checks whether I ran into the same issue raised in the thread I mentioned.

As far as I am concerned, the first two tests passed without a problem.
However, the third one failed: some requests triggered the timeout when they should not have:

E           AssertionError: assert [3, 7, 10, 11, 12, 14, ...] == [3, 7, 8, 10, 11, 12, ...]
E             At index 2 diff: 10 != 8
E             Right contains 11 more items, first extra item: 165

This is the issue mentioned in the thread. I'll downgrade to starlette 0.13.2 and see if the test passes.

I might have made some mistakes or overlooked some things, so if you ever have the chance to run some tests on your end, let me know.

Cheers !

Note :
I wrote assert content['processing_time'] < 1.1 rather than < 1 because the time I am monitoring isn't exactly the time it takes for Python to execute the function (it also includes running asyncio.wait_for and catching the exception, I guess). I do not know the convention in this case.

@thomas-maschler
Contributor

@PLNech have you tried changing the timeout settings for gunicorn? By default it times out after 60 sec I believe but you can overwrite the settings.

https://docs.gunicorn.org/en/latest/settings.html#timeout
#551

@PLNech
Contributor Author

PLNech commented Sep 17, 2020

@ZionStage: thanks for sharing your implementation, this looks promising! I'll make some room in our backlog to give it a try in our next sprint and will let you know how it goes :)

@PLNech
Contributor Author

PLNech commented Sep 17, 2020

@thomas-maschler: thanks for the advice. I've tried using Gunicorn's timeout, but unfortunately it triggers a full restart of the app, disrupting other users of the service (e.g. by unloading their models from memory). What I'm trying to achieve is to enforce a timeout on individual requests, without affecting any other work handled by the worker.

@tiangolo
Owner

Thanks for the discussion here everyone!

Yes, indeed I think the solution would be with a middleware.

About the failing tests from @ZionStage, I understand there are no guarantees about sub-second precisions in async/await (I think Python in general). Either way, it would probably be impossible to expect absolute sub-second precision from something on the network. I would test only with integers to be sure.

But anyway, I think that's pretty much the right approach. ✔️

@github-actions
Contributor

Assuming the original need was handled, this will be automatically closed now. But feel free to add more comments or create new issues or PRs.

@MasterScrat

This is good to return an error message to the user in case of timeout, but is there a way to actually kill the request at the same time so it doesn't keep using resources?
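In plain asyncio, asyncio.wait_for does cancel the coroutine it wraps before raising, so "return an error and stop the work" is exactly its contract; a minimal stdlib sketch (whether the task behind Starlette's call_next is actually cancellable this way is the open question in the rest of the thread):

```python
import asyncio

async def worker():
    # Stand-in for the real request work.
    try:
        await asyncio.sleep(10)
        return "done"
    except asyncio.CancelledError:
        # Cleanup can happen here; the coroutine really stops.
        raise

async def main():
    try:
        return await asyncio.wait_for(worker(), timeout=0.1)
    except asyncio.TimeoutError:
        # wait_for cancels the wrapped coroutine before raising, so the
        # work is stopped, not just abandoned.
        return "cancelled"

print(asyncio.run(main()))  # cancelled
```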

@lamoni

lamoni commented May 16, 2022

Bumping this for @MasterScrat's question. Wondering the same thing

@lionel-ovaert

Another bump for @MasterScrat's question

@BarisicLuka

@lionel-ovaert Raising the error once the time limit has been reached should stop any ongoing processing linked to the request, shouldn't it?

@dmelo

dmelo commented Nov 21, 2022

Expanding on the middleware from @ZionStage: if the router uses non-asyncio blocking functions, it might end up missing the asyncio.TimeoutError. In the example below, tweaked from @ZionStage's code:

import asyncio
import time


import pytest

from fastapi import FastAPI, Request, Response, HTTPException
from fastapi.responses import JSONResponse
from httpx import AsyncClient
from starlette.status import HTTP_504_GATEWAY_TIMEOUT
import requests

REQUEST_TIMEOUT_ERROR = 1  # Threshold

app = FastAPI() # Fake app

# Creating a test path
@app.get("/test_path")
async def route_for_test(sleep_time: float) -> None:
    requests.get('https://i575rbl2mc.execute-api.us-east-1.amazonaws.com/sleep?time=3')
    return JSONResponse({}, status_code=200)

# Adding a middleware returning a 504 error if the request processing time is above a certain threshold
@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        start_time = time.time()
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_ERROR)

    except asyncio.TimeoutError:
        process_time = time.time() - start_time
        return JSONResponse({'detail': 'Request processing time exceeded limit',
                             'processing_time': process_time},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)

# Testing whether or not the middleware triggers
@pytest.mark.asyncio
async def test_504_error_triggers():
    # Creating an asynchronous client to test our asynchronous function
    async with AsyncClient(app=app, base_url="http://test") as ac:
        response = await ac.get("/test_path?sleep_time=3")
    content = response.json()
    assert response.status_code == HTTP_504_GATEWAY_TIMEOUT
    assert content['processing_time'] < 1.1

When run, the request lasted for the router's entire execution, way longer than the timeout set on the middleware, and it returned 200; it bypassed the middleware:

❯ time pipenv run pytest c.py
================================================ test session starts ================================================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/dmelo/proj3/python/b
plugins: anyio-3.6.2, asyncio-0.20.2
asyncio: mode=strict
collected 1 item                                                                                                    

c.py F                                                                                                        [100%]

===================================================== FAILURES ======================================================
______________________________________________ test_504_error_triggers ______________________________________________

    @pytest.mark.asyncio
    async def test_504_error_triggers():
        # Creating an asynchronous client to test our asynchronous function
        async with AsyncClient(app=app, base_url="http://test") as ac:
            response = await ac.get("/test_path?sleep_time=3")
        content = eval(response.content.decode())
>       assert response.status_code == HTTP_504_GATEWAY_TIMEOUT
E       assert 200 == 504
E        +  where 200 = <Response [200 OK]>.status_code

c.py:43: AssertionError
============================================== short test summary info ==============================================
FAILED c.py::test_504_error_triggers - assert 200 == 504
================================================= 1 failed in 3.93s =================================================
pipenv run pytest c.py  0.80s user 0.10s system 19% cpu 4.603 total

I'm posting here in the hope that somebody either (a) managed to get a good implementation of a request-timeout feature working or (b) knows how to make this middleware work even in those situations.
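The root cause is that the blocking call holds the event loop, so asyncio.wait_for never gets a chance to fire. One hedged workaround sketch, under the assumption that the blocking call can be moved off the loop: offload it with asyncio.to_thread (Python 3.9+), which keeps the loop free. The caveat is that the worker thread itself still runs to completion in the background.

```python
import asyncio
import time

def blocking_call():
    # Stand-in for requests.get(...): it blocks its thread, not the loop.
    time.sleep(0.5)
    return "done"

async def endpoint():
    # Offloading to a worker thread keeps the event loop free, so an
    # outer asyncio.wait_for can actually fire.
    return await asyncio.to_thread(blocking_call)

async def main():
    try:
        return await asyncio.wait_for(endpoint(), timeout=0.1)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # timed out
```

Inside FastAPI, declaring the endpoint with plain def (so it runs in the threadpool) has a similar effect on the loop, with the same caveat about the thread not being killed.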

@LMalikov

LMalikov commented Jan 12, 2023

Even if the router function contains async code, it doesn't get interrupted/cancelled with this middleware solution.
The following example keeps printing Running... endlessly, even though asyncio.TimeoutError is raised and the underlying task created by asyncio.wait_for(...) gets cancelled.

import asyncio

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.get("/long_running")
async def long_running():
    try:
        while True:
            print("Running...")
            await asyncio.sleep(1)
    except asyncio.CancelledError:  # This never happens :(
        print("Cancelled.")

@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        return await asyncio.wait_for(call_next(request), timeout=3)
    except asyncio.TimeoutError:
        return JSONResponse({'detail': 'Request processing time exceeded limit'}, 504)

@tiangolo shouldn't we hit except asyncio.CancelledError in this case? 🙏
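A rough stdlib analogy for why that except branch never runs (this is an analogy, not Starlette's actual code): BaseHTTPMiddleware drives the downstream app as its own task behind call_next, so the cancellation that asyncio.wait_for raises doesn't reach the endpoint's task, much like cancelling a shielded await:

```python
import asyncio

async def downstream():
    # Plays the role of the endpoint task that call_next drives.
    try:
        while True:
            await asyncio.sleep(0.02)
    except asyncio.CancelledError:
        raise

async def main():
    task = asyncio.create_task(downstream())
    try:
        # shield() decouples wait_for's cancellation from the inner task,
        # roughly mimicking a downstream app running as a separate task.
        await asyncio.wait_for(asyncio.shield(task), timeout=0.1)
    except asyncio.TimeoutError:
        still_running = not task.done()
        task.cancel()  # cancelling the task directly does work
        return still_running

print(asyncio.run(main()))  # True: the inner task survived the timeout
```

Cancelling the inner task directly does stop it; the timeout just never reaches it through the middleware's plumbing.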

@liyunrui

liyunrui commented Jan 13, 2023

@LMalikov I got the same error. It looks like you need at least two middleware decorators in main.py, which is super weird. For example,

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time
    # response.headers["X-Process-Time"] = str(process_time)
    print("adfadsfasdf")
    # response.headers["Test middleware"] = str(random.randint(1, 1000))
    return response


REQUEST_TIMEOUT_ERROR = 1.0 # seconds to wait for
from fastapi.responses import JSONResponse
from starlette.status import HTTP_504_GATEWAY_TIMEOUT

#Adding a middleware returning a 504 error if the request processing time is above a certain threshold
@app.middleware("http")
async def timeout_middleware(request: Request, call_next):
    try:
        start_time = time.time()
        return await asyncio.wait_for(call_next(request), timeout=REQUEST_TIMEOUT_ERROR)

    except asyncio.TimeoutError:
        process_time = time.time() - start_time
        res = timeout_fallback(process_time)
        return res

def timeout_fallback(process_time):
    response = JSONResponse({'detail': 'Request processing time exceeded limit',
                             'processing_time': process_time},
                            status_code=HTTP_504_GATEWAY_TIMEOUT)
    return response

@liyunrui


Does anyone know why? It's super weird. Basically, you need to have two @app.middleware("http"). Otherwise, the timeout exception won't work.

@galigutta

Same problem as what's noted in the stackoverflow link above. The asyncio timeout is not respected.

@Naish21

Naish21 commented Feb 7, 2023

I think this can be fixed on Python 3.11 using asyncio.timeout_at instead of asyncio.wait_for.
In the meantime (I'm using Python 3.9) I'll try something else. I'll tell you if it works.
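For reference, Python 3.11 added the asyncio.timeout() context manager (and asyncio.timeout_at(), which takes an absolute loop time instead of a relative delay); both raise the builtin TimeoutError. A small sketch, guarded so it degrades on older interpreters:

```python
import asyncio
import sys

async def main() -> str:
    # asyncio.timeout() / timeout_at() are new in Python 3.11 and raise
    # the builtin TimeoutError rather than asyncio.TimeoutError's old alias.
    try:
        async with asyncio.timeout(0.1):
            await asyncio.sleep(1)
    except TimeoutError:
        return "timed out"
    return "finished"

if sys.version_info >= (3, 11):
    print(asyncio.run(main()))  # timed out
else:
    print("requires Python 3.11+")
```

Note that like asyncio.wait_for, these cancel the coroutine they wrap, so the same caveats about Starlette's middleware task plumbing still apply.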

@Naish21

Naish21 commented Feb 7, 2023

Workaround: I've created a decorator to use on the endpoints where you want to return a 504 response:
(place it in a file named abort_after.py)

import functools
import signal
import sys

from fastapi.responses import JSONResponse
from starlette import status


class TimeOutException(Exception):
    """It took longer than expected"""


def abort_after(max_execution_time):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            def handle_timeout(signum, frame):
                raise TimeOutException(f"Function execution took longer than {max_execution_time}s and was terminated")
            if sys.platform == 'win32':
                print("Won't be stopped on Windows!")
            else:
                signal.signal(signal.SIGALRM, handle_timeout)
                signal.alarm(max_execution_time)
            result = func(*args, **kwargs)
            if sys.platform != 'win32':
                signal.alarm(0)
            return result
        return wrapper
    return decorator


def timeout_response() -> JSONResponse:
    return JSONResponse(
        {
            'detail': 'Request processing time exceeded limit',
        },
        status_code=status.HTTP_504_GATEWAY_TIMEOUT,
    )

Then you can use it in your endpoint:

import time
from fastapi import APIRouter
from abort_after import abort_after, TimeOutException, timeout_response

router = APIRouter()


@router.post(f"{URL_prefix}/test",
             tags=['Test'],
             )
async def test():
    try:
        long_func(60)
    except TimeOutException:
        return timeout_response()
    return {'Test': 'ok'}


@abort_after(5)
def long_func(seconds: int) -> None:
    time.sleep(seconds)

@rinzool

rinzool commented Feb 17, 2023

Thanks @Naish21 I really like your solution!
Note that it only works with whole seconds (so no timeout below 1 s).
To use this solution with a floating-point number of seconds, one can replace

signal.alarm(max_execution_time)

with

signal.setitimer(signal.ITIMER_REAL, max_execution_time)

setitimer works with floating-point numbers, so it is possible to define a timeout of 300 ms, for example (@abort_after(0.3)).
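The substitution can be sketched in isolation (POSIX only, since SIGALRM and setitimer don't exist on Windows; the helper name is made up for illustration):

```python
import signal
import sys
import time

def fire(signum, frame):
    raise TimeoutError

def run_with_deadline(seconds: float) -> str:
    # setitimer accepts floats, so sub-second deadlines work,
    # unlike signal.alarm which truncates to whole seconds.
    signal.signal(signal.SIGALRM, fire)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        time.sleep(1)
        return "finished"
    except TimeoutError:
        return "timed out"
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer

if sys.platform != "win32":
    print(run_with_deadline(0.3))  # timed out
else:
    print("SIGALRM/setitimer are POSIX-only")
```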

@nicolasdespres

I am afraid this solution will not play well in a concurrent environment, since there is only one such timer per process, whereas many coroutines will be running concurrently within the same process.

@tiangolo tiangolo changed the title [QUESTION] How does one setup a global timeout to all requests? How does one setup a global timeout to all requests? Feb 24, 2023
@tiangolo tiangolo reopened this Feb 28, 2023
Repository owner locked and limited conversation to collaborators Feb 28, 2023
@tiangolo tiangolo converted this issue into discussion #7364 Feb 28, 2023
