
Deprecate aiohttp.wsgi #1108

Closed
asvetlov opened this issue Aug 23, 2016 · 24 comments

@asvetlov
Member

Currently the module is not 100% compatible with the WSGI standard: it handles big payloads badly and introduces non-standard 'async.reader' and 'async.writer' fields in the WSGI environment dictionary.

It's also not documented.

Should we support it at all?

I suggest adding an explicit deprecation and removing the wsgi mentions from the documentation.
After a 1.5-year deprecation period we may drop the module entirely.

If somebody wants to support the module -- please do it outside of the aiohttp project.

@fafhrd91
Member

gunicorn.gaiohttp depends on aiohttp.wsgi

@asvetlov
Member Author

Oops.
Sorry, I'd forgotten about that.

@fafhrd91
Member

we can replace gaiohttp with a newer implementation that supports both aiohttp.wsgi and aiohttp.web,
and later deprecate aiohttp.wsgi support

@asvetlov asvetlov reopened this Aug 23, 2016
@asvetlov
Member Author

Agree with the plan

@xpinguin

Hi! I'd like to put in my ten kopeks regarding aiohttp.wsgi. It turned out to be useless for me, since my goal was to seamlessly attach an inherently synchronous WSGI app (WsgiDav in my case) to an asyncio application in which aiohttp is used only for HTTP-based protocols.
The major drawback is that aiohttp.wsgi only supports wsgi/async, a custom WSGI extension outside the scope of PEP 3333. This rules out conforming (i.e. non-async) WSGI apps.

My solution was to develop a class that automatically performs two-way coroutine<->function conversion, allowing, for instance, an async StreamReader to be wrapped in a non-async one and passed to the app. Well, no magic here, and the drawbacks are obvious:

  • function -> corofunc maps to loop.run_in_executor(...) running function(...) in the default ThreadPoolExecutor
  • corofunc -> function maps to asyncio.run_coroutine_threadsafe(corofunc(...))

... so at the very least we can get a stall in the WSGI app, although never in the main event loop, and we can adjust the pool size, swap pools, etc. -- all hypothetical, as I've had no occasion to try it out; our real loads are laughable :) A minimal sketch of the two wrappers is below.

For those interested in the actual code, here is the gist of the class (NB: it comes from a non-public application in alpha stage, hence the clumsy code; I've also removed some auxiliary stuff, e.g. our approach to routing, to keep it clean, so don't expect the code below to work as-is. Moreover, WSGIHttpServerProtocol fits my preferred abstractions poorly, so the class is more or less application-like -- conceptually, not literally a web.Application):
https://gist.github.com/xpinguin/e77a4a817aa1b1590de1743fa41f6671#file-wsgiapp-py

The helpers operating at the very heart, to better understand what the heck is going on :) https://gist.github.com/xpinguin/e77a4a817aa1b1590de1743fa41f6671#file-wsgiapp_helpers-py

The corofunc<->func transforms -- hardly any quirks, though they may be interesting:
https://gist.github.com/xpinguin/e77a4a817aa1b1590de1743fa41f6671#file-coro_func_transform-py

Long story short, I do like the aiohttp core, it's very neat (unfortunately, I can't say the same about routers & resources). I'm pretty interested in full-blown WSGI support, just not in its current form. What do you guys think about all that?

@fafhrd91
Member

aiohttp.wsgi exists only because of the old version of gaiohttp; I would kill it otherwise

@xpinguin

I'm not from the webdev or mobile scene, so I could have misinterpreted the design goals behind aiohttp.wsgi -- sorry if so... Still, despite my ignorance of gunicorn and the like, "aiohttp.wsgi" sounds just perfect as the name of a module containing WSGI->aiohttp adapter(s), doesn't it?

@asvetlov
Member Author

@xpinguin the current aiohttp.wsgi is not compatible with sync WSGI apps and is almost useless.
If you want to provide and support a full-fledged WSGI adapter -- please do it outside of aiohttp, in a third-party library.

@xpinguin

Just to clarify:

  • I'm working on a multi-protocol storage server; whether it will be open-sourced or not is a question for the unforeseeable future
  • aiohttp is one of the key components of that server, and likely will remain so
  • An automatic WSGI->aio/aiohttp adapter was developed; it tries its best to preserve IO asynchronicity (and could probably try a bit harder)
  • I could make some sort of FR/PR raising the code quality to aiohttp's standards, but that's a lot of effort, hence the preliminary discussion here
  • ... or I could just leave it buried in my app; I'm not being arrogant, it's just that maintaining a separate library requires even more effort

@asvetlov I got your reply while writing this comment, so I'll leave the clarification points in, but as I understand it, the answer is "don't bother with a PR, aiohttp has different design goals" -- correct?
No problem here, if so.
However, I'd like to note that my reasoning stems from the fact that the wsgi module lives in the aiohttp library itself -- the library that comes up first when googling "asyncio http server", and that googling may be done by anyone, not only webdevs. That's actually how I came to use aiohttp in my project (my interest in asyncio primarily lies within the context of operational semantics; event-based concurrency is just a nice thing to have). Your library gains a wider audience; I guess I'm not the only one...

@asvetlov
Member Author

An asyncio http server doesn't mean a wsgi http server :)
The main problems are maintenance and testability.
Personally I have never used aiohttp.wsgi, and AFAIK none of the aiohttp contributors have either.
There is a high risk of breaking the WSGI support.
Thus I think the best solution is extracting the WSGI code into a separate library with a responsible maintainer (not me).

@xpinguin

xpinguin commented Nov 18, 2016

Personally I have never used aiohttp.wsgi, and AFAIK none of the aiohttp contributors have either.

Ahh, ok then. That renders my suggestion irrelevant.

Thus I think the best solution is extracting the WSGI code into a separate library ...

Yes. So if anyone shows interest in the idea, I'll probably do that.
Thanks anyway :)

@fafhrd91
Member

@xpinguin I hope you understand that running sync and async code in the same thread is a bad idea. The real approach should be to spawn a new thread for the WSGI process and then forward data from aiohttp into that thread -- but such a project already exists: https://github.com/etianen/aiohttp-wsgi

@asvetlov btw aiohttp-wsgi seems pretty active, should we mention it in the documentation?

@asvetlov
Member Author

Why not? Which place in our docs is best, from your perspective?

@fafhrd91
Member

I think it should be a FAQ entry, something like "how to use a WSGI app with aiohttp"

@xpinguin

xpinguin commented Nov 18, 2016

@fafhrd91 I think you really should have read my first comment here before diving into the bliss of arrogance.

I'm not even mentioning the source code I provided (though reading it is indeed a laborious procedure, without any sarcasm) -- then you'd see why aiohttp-wsgi probably has a much simpler solution compared to the WSGIApp class.
HINT: try to POST/PUT a huge blob into some WSGI app and watch the resident-memory/disk-usage statistics for both solutions.

P.S. I'd never seen aiohttp-wsgi before; I just quickly skimmed its code to find what I was looking for:

while True:
    block = await request.content.readany()
    if not block:
        break
    await body.write(block)

IMO, the solution with a read buffer that overflows to a file is suboptimal. I think that project could benefit from the approach employed in the WSGIApp class, so I could actually contribute the results of my sufferings there :)
EDIT: though, to be honest, the to_sync_func() approach has its own drawbacks

@xpinguin

xpinguin commented Nov 18, 2016

Another problem with aiohttp-wsgi is latency, i.e. you need to wait for the WSGI handler to complete before passing data back to the client. OTOH, in the case of WSGIApp, result data starts flowing through almost immediately (a rough sketch of such streaming is below).

That closely relates to a broader concept, touched on slightly by the AwaitTransparent_Reducing_AsyncGenerator class (in the provided gist), which I tend to call sync-async hybridization: an approach allowing a sync app (especially an iterable-based WSGI app) to return an awaitable (or a chain of them) capturing some computations "performed" synchronously. This, however, requires far deeper research.

@asvetlov
Member Author

Well, it means that making a proper bridge between aiohttp and sync WSGI is hard.
We cannot implement (and support) it properly in this project.

@xpinguin please send a link to your implementation when (and if) you publish it as open source.
I'll be happy to mention it in the aiohttp documentation.

@xpinguin

xpinguin commented Nov 18, 2016

@asvetlov yeah, the very existence of aiohttp-wsgi shows that there is interest in such an adapter. So I now think it is worth the effort to release the code provided in the gist above (WSGIApp and helpers) as a more reusable library. At the very least it could save some poor soul from stepping on a rake while attaching something heavy like WsgiDav to aiohttp :)
btw, aiohttp-wsgi still has a significantly broader feature set than WSGIApp provides (e.g. websockets or a standalone server runner).

I'll let you know (hopefully) soon, thanks!

@etianen

etianen commented Nov 21, 2016

Hello! I'm the author of aiohttp-wsgi. 😄 I would be very happy to have it added to the aiohttp documentation as a 3rd party app.

To explain some of my implementation decisions:

  • Incoming HTTP body data is buffered into memory, or spilled into a tempfile once it overflows, before the WSGI handler is called in a thread pool executor. This means the WSGI app never waits on incoming network IO. If you wrapped the StreamReader in a sync wrapper, you would end up performing network IO in the thread pool, which can easily lead to thread pool starvation if multiple clients upload large files over slow connections. (A rough sketch of this buffering flow is at the end of this comment.)

  • Outgoing HTTP data is fully buffered in memory before being sent to the client, because this achieves vastly better throughput for most use cases. Allowing the response body to be streamed would involve lots of calls to call_soon_threadsafe, which has a noticeable performance cost.

The end result is that a WSGI app running in aiohttp-wsgi is a bit like running gunicorn behind nginx. All network IO is buffered, and the web workers only handle short-running HTTP requests.

@fafhrd91
Member

@asvetlov let's mark aiohttp.wsgi as deprecated in 1.3.0

@fafhrd91
Member

added a PR to gunicorn: benoitc/gunicorn#1418

@asvetlov
Member Author

Cool!

@fafhrd91
Member

dropping wsgi support in 2.0

@fafhrd91 fafhrd91 added this to the 2.0 milestone Feb 16, 2017
@lock

lock bot commented Oct 29, 2019

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.

If you feel like there are important points made in this discussion,
please include those excerpts in that new issue.

@lock lock bot added the outdated label Oct 29, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Oct 29, 2019