Support async / await #241

Closed
zbyte64 opened this issue Mar 2, 2016 · 9 comments · Fixed by #289

zbyte64 commented Mar 2, 2016

It would be nice if you could register coroutines or async functions as API endpoints. Currently I am using asyncio, but each endpoint is responsible for managing the event loop itself. So for now I have something like:

import asyncio
import hug

loop = asyncio.get_event_loop()

@hug.post('/')
def spinup_service():
    # Each endpoint has to drive the event loop itself.
    future1 = asyncio.ensure_future(call_asynchronous_provisioner())
    loop.run_until_complete(future1)
    future2 = asyncio.ensure_future(call_another_async(future1.result()))
    loop.run_until_complete(future2)
    return future2.result()

Instead I would prefer the following to be allowed:

@hug.post('/')
async def spinup_service():
    result1 = await call_asynchronous_provisioner()
    return await call_another_async(result1)
timothycrosley added this to the 3.0.0 milestone Mar 3, 2016
@timothycrosley (Collaborator)

100% agree, adding this to the roadmap for 3.0.0

Thanks!

~Timothy

@rodcloutier (Contributor)

Is the plan to fully support asynchronous request handling, or simply coroutines that use async functionality?
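
For context, a rough sketch of the distinction (illustration only, not hug's API): merely accepting coroutine endpoints means the framework drives each handler coroutine to completion on an event loop while the server itself stays synchronous, whereas full async support needs an async-capable server that awaits handlers, so slow requests can overlap.

import asyncio

# Option A: coroutine endpoints on a synchronous (WSGI-style) server.
# The handler may use async/await internally, but the worker blocks
# until it finishes, so requests are still served one at a time.
def run_endpoint_blocking(coro):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

# Option B: a fully asynchronous server simply awaits the handler,
# so awaited I/O yields the loop to other in-flight requests.
async def run_endpoint_async(coro):
    return await coro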

zhwei820 commented Apr 2, 2016

It seems like hug does not support async. The server always waits for run_until_complete to finish before it starts receiving the next request, which seems quite weird to me.

hug's server log:

Serving on port 8001...
foo
127.0.0.1 - - [03/Apr/2016 23:48:14] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:14] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:14] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:14] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:15] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:15] "GET /public HTTP/1.1" 200 62
foo
127.0.0.1 - - [03/Apr/2016 23:48:15] "GET /public HTTP/1.1" 200 62
foo

# code using hug
import asyncio
import hug

loop = asyncio.get_event_loop()

@hug.get('/public')
def public_api_call():
    future1 = asyncio.ensure_future(hello())
    loop.run_until_complete(future1)
    return "Needs no authentication: channel  , os_type  , app_version  "

@asyncio.coroutine
def hello():
    r = yield from asyncio.sleep(0.1)
    print('foo')

However, when using Tornado I got a log like this, where requests are received asynchronously.

[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1198.95ms
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1198.73ms
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1198.42ms
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1198.22ms
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1198.17ms
foo
foo
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1135.65ms
[I 160403 23:55:15 web:1946] 200 GET / (127.0.0.1) 1135.08ms
foo
foo

# code using tornado
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado import gen

from tornado.options import define, options

define("port", default=8888, help="run on the given port", type=int)

class MainHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        yield gen.sleep(1)
        print('foo')
        self.write("Hello, world")


def main():
    tornado.options.parse_command_line()
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()

@rodcloutier (Contributor)

You are right. Hug runs the Falcon framework under a WSGI server, which is not async compatible. I am currently looking at a design that could make it support asynchronous methods.
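
One possible direction, sketched purely as an illustration (the helper name call_possibly_async is made up, not hug's actual API): keep the existing WSGI stack but have the routing layer detect coroutine endpoints and run each one to completion on an event loop. That would let endpoints be written with async/await, although it would not by itself make request handling concurrent.

import asyncio
import inspect

def call_possibly_async(endpoint, *args, **kwargs):
    # Invoke an endpoint; if it returns a coroutine, drive it to completion
    # on a private event loop so the WSGI worker gets a plain result back.
    result = endpoint(*args, **kwargs)
    if inspect.iscoroutine(result):
        loop = asyncio.new_event_loop()
        try:
            result = loop.run_until_complete(result)
        finally:
            loop.close()
    return result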

rodcloutier added a commit to rodcloutier/hug that referenced this issue Apr 6, 2016
rodcloutier added a commit to rodcloutier/hug that referenced this issue Apr 7, 2016
rodcloutier added a commit to rodcloutier/hug that referenced this issue Apr 7, 2016
rodcloutier added a commit to rodcloutier/hug that referenced this issue Apr 7, 2016
rodcloutier added a commit to rodcloutier/hug that referenced this issue Apr 7, 2016
timothycrosley modified the milestones: 2.1.0, 3.0.0 May 17, 2016
@ericfrederich

@rodcloutier @timothycrosley any progress on asynchronous requests?

@zhwei820

Hi @timothycrosley, I moved your code from falcon to aiohttp to achieve non-blocking IO, and it proved to be workable.
For now, most of hug's HTTP features are supported, while the CLI and LOCAL interfaces are not tested yet.

git clone https://github.com/zhwei820/aio_hug
cd aio_hug
python _test_api_aio.py

Open http://localhost:8000/v2/hello in two DIFFERENT browsers; you can see that the two long requests do not impact each other.
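
For comparison, here is a minimal standalone aiohttp handler (independent of the aio_hug port above; the route and sleep time are just placeholders) showing the non-blocking behaviour described: two slow requests overlap instead of queueing.

import asyncio

from aiohttp import web

async def hello(request):
    # Awaiting here yields the event loop, so a second request
    # can be handled while this one is sleeping.
    await asyncio.sleep(2)
    return web.Response(text="Hello, world")

app = web.Application()
app.router.add_get('/v2/hello', hello)

if __name__ == '__main__':
    web.run_app(app, port=8000)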

edevil commented Dec 21, 2016

Is this going to be merged into master?

feluxe commented Sep 11, 2017

There is a related issue on falcon: falconry/falcon#1008

oersted commented May 27, 2021

Any progress on this?
