Apologies if this is obvious, but I just can't seem to find any documentation.
From reading the docs and looking at PR #289, it seems that even though you can have async methods inside hug, all requests will be processed synchronously. If I understand that correctly, and I have an API endpoint that makes a request to a slow DB, the delay in serving that request will always hold up subsequent requests.
However, if I use gunicorn (or similar server) in front of hug I can spin up multiple workers. Does that mean that a slow DB request being served by one worker won't impact requests hitting another worker? (Maybe depending on the worker type?) That's my reading of the gunicorn docs, but my tests were not conclusive.
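For context, this is roughly the setup I mean (a sketch; `myapi` is a hypothetical module name containing the hug endpoints, and `__hug_wsgi__` is the WSGI object hug exposes on the module):

```shell
# Spin up 4 worker processes; each worker handles requests
# independently, so the question is whether a slow request in one
# worker blocks requests routed to another.
gunicorn --workers 4 myapi:__hug_wsgi__
```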
In general, are there guidelines for writing performant hug APIs where particular endpoints are slow (external API requests or pyodbc queries)?
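For example, one generic pattern I've considered (not hug-specific; `slow_query` is a hypothetical stand-in for a pyodbc query or external API call) is offloading blocking work to a thread pool so slow calls overlap instead of serializing:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_query():
    # Hypothetical slow, blocking call (stands in for a pyodbc
    # query or an external API request).
    time.sleep(0.2)
    return "rows"

# With a thread pool, two 0.2 s blocking calls run concurrently and
# finish in roughly 0.2 s total instead of 0.4 s back-to-back.
executor = ThreadPoolExecutor(max_workers=4)

start = time.monotonic()
futures = [executor.submit(slow_query) for _ in range(2)]
results = [f.result() for f in futures]
elapsed = time.monotonic() - start

print(results)
print(elapsed)
```

Is that kind of approach appropriate here, or does it conflict with how hug/gunicorn already schedule requests?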