This repository has been archived by the owner on Mar 15, 2020. It is now read-only.

Streaming requests and responses #71

Closed
florimondmanca opened this issue Dec 25, 2018 · 0 comments · Fixed by #83
Labels: enhancement (New feature or improvement of an existing one), refactor (This will need refactoring the code)

Comments


florimondmanca commented Dec 25, 2018

Is your feature request related to a problem? Please describe.
There is currently no way to process the request body as a stream or to send the response body as a stream. Yet this is useful when either body is too large to load fully into memory.

Note: this is not the same as chunked requests or responses, which are governed by the Transfer-Encoding header.

Describe the solution you'd like
We should be able to read the request body as a stream or send a response as a stream.

  • Requests: it seems natural to iterate over the req object, i.e. async for chunk in req.
from bocadillo import API

api = API()

@api.route("/")
async def index(req, res):
    data = b""
    async for chunk in req:  # request chunks arrive as bytes
        data += chunk
    res.text = data.decode()
  • Responses: the most natural approach would be to use an async generator to yield chunks of the response. The generator could be registered by decorating it, e.g. @res.stream.
from bocadillo import API
from asyncio import sleep

api = API()

@api.route("/stream/{word}")
async def stream_word(req, res, word):
    @res.stream
    async def streamed():
        async for chunk in req:
            await sleep(0.1)
            yield chunk

    # Use other attributes on `res`
    res.headers["x-foo"] = "foo"
    @res.background
    async def do_more():
        pass
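The two sketches above can be exercised end to end with stand-in objects. Everything below (`FakeRequest`, `FakeResponse`, their methods) is illustrative only, not the real Bocadillo API — it just shows how an async-iterable request and a `@res.stream` registration could fit together:

```python
import asyncio

# Illustrative stand-ins for the proposed API; FakeRequest and
# FakeResponse are hypothetical, not real Bocadillo classes.
class FakeRequest:
    """Async-iterable over raw body chunks (`async for chunk in req`)."""

    def __init__(self, chunks):
        self._chunks = chunks

    def __aiter__(self):
        async def gen():
            for chunk in self._chunks:
                yield chunk
        return gen()

class FakeResponse:
    """Lets a view register an async generator via `@res.stream`."""

    def __init__(self):
        self.headers = {}
        self._stream = None

    def stream(self, func):
        # Used as a decorator: register the generator, return it unchanged.
        self._stream = func
        return func

    async def send(self):
        # Drain the registered generator into the final body.
        body = b""
        async for chunk in self._stream():
            body += chunk
        return body

async def view(req, res):
    @res.stream
    async def streamed():
        async for chunk in req:
            yield chunk

    # Other attributes on `res` remain usable.
    res.headers["x-foo"] = "foo"

async def main():
    req = FakeRequest([b"he", b"llo"])
    res = FakeResponse()
    await view(req, res)
    return await res.send()

print(asyncio.run(main()))  # b'hello'
```

The key design point this illustrates: because the generator is only *registered* inside the view, the framework can still apply headers and background tasks set on `res` before draining the stream.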

Describe alternatives you've considered
I thought about providing an @stream decorator for view handlers themselves, for example:

from asyncio import sleep
from bocadillo import API, stream

api = API()

async def streamed(word: str):
    for c in word:
        await sleep(0.1)
        yield c.encode()

# Function-based
@api.route("/stream/{word}")
@stream
async def stream_word(req, res, word):
    # `yield from` is a SyntaxError inside `async def`;
    # delegate with an explicit `async for` instead.
    async for chunk in streamed(word):
        yield chunk

But that approach would not have allowed using the response object at all. This is because the generator is called by Starlette while the response is being sent, whereas the Bocadillo Response is essentially a Starlette response builder that does its work before the response is even created.
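As an aside, the `@stream` sketch above delegates to an async generator, and in Python a plain `yield from` is a SyntaxError inside `async def` — delegation must use an explicit `async for` loop. A minimal, framework-free sketch:

```python
import asyncio

# An async generator yielding one byte-chunk per character.
async def streamed(word: str):
    for c in word:
        yield c.encode()

async def stream_word(word: str):
    # Equivalent of the intended `yield from streamed(word)`:
    # `yield from` cannot appear in an async function, so we
    # re-yield each chunk from an explicit `async for` loop.
    async for chunk in streamed(word):
        yield chunk

async def collect(word: str) -> bytes:
    # Drain the delegating generator into a single bytes value.
    return b"".join([chunk async for chunk in stream_word(word)])

print(asyncio.run(collect("hi")))  # b'hi'
```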

Additional context
Useful Starlette features: request.stream(), StreamingResponse

@florimondmanca added the enhancement and Status: Revision Needed labels on Dec 25, 2018
@florimondmanca added the refactor label and removed the Status: Revision Needed label on Jan 2, 2019
@florimondmanca self-assigned this on Jan 2, 2019