Limit concurrency to 1 #57

Closed
karolzlot opened this issue Jul 25, 2021 · 4 comments

Comments

@karolzlot

In my use case, I want to limit concurrency to 1 for some endpoints in order to avoid race conditions.

I am looking for a limiter that will reliably prevent two requests to the same endpoint from running at the same time.

I wonder if this library will suit my needs?

@karolzlot
Author

I would use ‘1 per second’ granularity.
But I wonder whether concurrency would still be limited if requests take more than 1 second in that case?

@laurentS
Owner

Hi @karolzlot
I don't think slowapi will do what you want. If a request takes more than 1 second to complete, then when the second request comes in, say 1.1 seconds after the first one started, the limiter backend would be queried and return nothing (because the time window has expired), so the second request would go ahead, which I don't think is what you want.
You could probably hack the code to replace time-based windows with something that expires when your request completes, but at that point I think what you need is something like a mutex or semaphore, which would be less cumbersome.
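
For illustration, something along these lines might work if you run a single async worker process (an untested sketch; `do_work` stands in for your real task loop):

```python
import asyncio

from fastapi import FastAPI

app = FastAPI()

# One lock per endpoint: a second request waits here instead of overlapping.
endpoint_lock = asyncio.Lock()

async def do_work() -> None:
    await asyncio.sleep(2)  # placeholder for the real work

@app.get("/endpoint_abc")
async def endpoint_abc():
    async with endpoint_lock:  # waits (asynchronously) until the lock is free
        await do_work()
    return {"detail": "successfully finished /endpoint_abc"}
```

Note this only holds within one process; with multiple workers you would need a cross-process lock instead.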

I'd be curious to know what your use case is, if you can share?

@karolzlot
Author

karolzlot commented Jul 27, 2021

Thank you @laurentS

I was thinking about using file locks, but I will also look into your suggestions about a mutex and semaphore.
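
(If I go the file-lock route, something like this sketch with the third-party `filelock` package might do; a plain `def` endpoint runs in FastAPI's threadpool, so the blocking acquire doesn't stall the event loop:)

```python
import time

from fastapi import FastAPI
from filelock import FileLock  # pip install filelock

app = FastAPI()

# A file-based lock also works across multiple worker processes.
lock = FileLock("/tmp/endpoint_abc.lock")

@app.get("/endpoint_abc")
def endpoint_abc():
    with lock:  # a second request blocks here until the first releases it
        time.sleep(2)  # placeholder for the real task loop
    return {"detail": "successfully finished /endpoint_abc"}
```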


My use case:

I have many endpoints that work like this:

  1. The database is queried for new tasks to do (for example, rows where column A is filled but column B is empty).
  2. All tasks are done (in a for loop).
  3. A message like "successfully finished /endpoint_abc" is returned.

Those endpoints are usually fired soon after new tasks become available (and also every 10 minutes, just in case). If two or more tasks become available at the same time, it could cause race conditions: each task would be executed two or more times, resulting in, for example, the same email being sent twice or the same invoice being issued twice.

To not have to worry about race conditions, the easiest solution is to limit concurrency to 1. The speedup from concurrency wouldn't matter in this case anyway, and I could always use async inside the endpoint if needed.

Those endpoints don't take any REST parameters (by design). They are simply fired and know what to do; they only return a short message saying that everything went OK.

Because I need to limit concurrency to exactly 1, I don't need to worry about a solution that would also work in a multi-server setup (I don't need multiple servers for endpoints that are limited to a concurrency of 1 anyway 😄).

In the past I used two solutions for limiting concurrency:

  1. A Flask server in debug mode (it is limited by design)
  2. Running the app on Google Cloud Run with concurrency set to 1

Now I want to do the same, but with FastAPI and without Google Cloud Run. Also, those solutions limit concurrency per server, whereas I would prefer to limit concurrency per endpoint.

Ideally, a request would just wait a bit if the same endpoint is already executing at that moment.
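
(From what I read, an `asyncio.Lock` would give exactly that waiting behaviour, since `acquire` queues a request until the lock is free. A sketch with an explicit upper bound on the wait, so a stuck run can't queue requests forever; this assumes a single async worker, and the timeout value is arbitrary:)

```python
import asyncio

from fastapi import FastAPI, HTTPException

app = FastAPI()
lock = asyncio.Lock()

@app.get("/endpoint_abc")
async def endpoint_abc():
    try:
        # Wait up to 30 seconds for any in-flight run to finish.
        await asyncio.wait_for(lock.acquire(), timeout=30)
    except asyncio.TimeoutError:
        raise HTTPException(status_code=503, detail="endpoint busy, try again later")
    try:
        await asyncio.sleep(2)  # placeholder for the real task loop
    finally:
        lock.release()
    return {"detail": "successfully finished /endpoint_abc"}
```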

@karolzlot
Author

Now that I have read more about it, I think some kind of database lock may suit me better.

It's not that I need to limit concurrency per endpoint; it's more that I need to limit concurrency per row in the database.
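
For the record, a minimal sketch of the per-row idea, assuming PostgreSQL and a hypothetical `tasks(id, a, b)` table: `SELECT ... FOR UPDATE SKIP LOCKED` makes concurrent runs claim disjoint rows, so no task is ever processed twice.

```python
import psycopg2  # pip install psycopg2-binary

def run_pending_tasks(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:  # `with conn` commits on success
            # Lock the pending rows; rows already locked by a concurrent
            # run are skipped instead of being picked up a second time.
            cur.execute(
                """
                SELECT id FROM tasks
                WHERE a IS NOT NULL AND b IS NULL
                FOR UPDATE SKIP LOCKED
                """
            )
            for (task_id,) in cur.fetchall():
                # ...do the real work for this row, then mark it done...
                cur.execute("UPDATE tasks SET b = 'done' WHERE id = %s", (task_id,))
        # The commit at the end of the `with conn` block releases the row locks.
    finally:
        conn.close()
```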
