
Non-standard Response Content #126

Closed
ggpwnkthx opened this issue Aug 14, 2023 · 7 comments

Comments

@ggpwnkthx
Contributor

https://github.com/commonism/aiopenapi3/blob/e0b4d67513c64530ad4004f624731376a2bd257c/aiopenapi3/v30/glue.py#L437C27-L437C27

Would it be appropriate to allow plugins to handle parsing data at this point in the code, or do you feel that's something well outside the scope of this project?

Also, when you get a chance, would you please publish the current code base to PyPI?

@commonism
Owner

else:
"""
We have received a valid (i.e. expected) content type,
e.g. application/octet-stream
but we can't validate it since it's not json.
"""
return rheaders, ctx.received

You can use Message.received:

ctx = self.api.plugins.message.received(
    operationId=self.operation.operationId,
    received=result.content,
    headers=result.headers,
    status_code=status_code,
    content_type=content_type,
)

class customContentParser(aiopenapi3.plugin.Message):
    def received(self, ctx: "Message.Context") -> "Message.Context":
        if ctx.content_type != "custom/unknown":
            return ctx
        try:
            ctx.received = myparser(ctx.received)
        except Exception as e:
            raise ResponseDecodingError(ctx.received, None, None) from e
        return ctx

api = OpenAPI.load_sync("…", plugins=[customContentParser()])
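For context, `myparser` in the snippet above is a placeholder that receives the raw bytes. A minimal sketch of what such a parser might look like, assuming (purely for illustration) the non-JSON payload is newline-delimited `key=value` log text:

```python
def myparser(raw: bytes) -> list:
    # Illustrative stand-in for the `myparser` placeholder above:
    # decode newline-delimited "key=value key=value" log lines into dicts.
    records = []
    for line in raw.decode("utf-8").splitlines():
        if not line.strip():
            continue
        records.append(dict(pair.split("=", 1) for pair in line.split()))
    return records

# myparser(b"level=info msg=started\nlevel=error msg=boom")
# → [{"level": "info", "msg": "started"}, {"level": "error", "msg": "boom"}]
```
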

The ResponseDecodingError will lack operation & response, but that may do?
Let me know if this works for you.

I'm preparing a new release already.

@ggpwnkthx
Contributor Author

Yes, that does work. Thank you. And thank you for preparing the release!

This is an edge-case scenario, but I'm trying to ingest log data, and the log files can be pretty large. Ideally, I'd like to iterate over the received data in chunks, save those chunks as individual files, and then set ctx.received to a dict<index, filepath> value. That way I can avoid holding the entire raw data set in RAM.

If I'm understanding the httpx documentation correctly, httpx.Response.content does load all the data into RAM. I'd be curious about your thoughts on how to handle this kind of situation.

Thanks again!

@commonism
Owner

commonism commented Aug 15, 2023

httpx can do streaming responses - https://www.python-httpx.org/quickstart/#streaming-responses

I see three options

  • compile the httpx request yourself
  • use aiopenapi3 to do the heavy lifting, but run the prepared httpx request yourself
  • make aiopenapi3 capable of returning an iterator for a certain content type

I'd go for option 2 - just guessing …

req = api.createRequest("operationId")
req._prepare(data=body, parameters={"a": 1})
s = httpx.Client()
hr = req.build_req(s)
r = s.send(hr, stream=True)
for i in r.iter_bytes():
    …

Async via httpx.AsyncClient & aiter_bytes();
for mutual TLS, pass cert=(…) to the httpx.Client constructor.

@commonism
Owner

As this is a pre-release, install/upgrade with pip install --pre --force-reinstall aiopenapi3

https://pypi.org/project/aiopenapi3/#history
https://github.com/commonism/aiopenapi3/releases/tag/v0.3.0a9

@commonism
Owner

I felt this was a use-case worth covering.
Would appreciate if you could give #144 a look.

@ggpwnkthx
Contributor Author

Oh! This is awesome. If I'm following the commits correctly, the idea would be to use the stream method of the factory, rather than request, when we know a "download" is coming. Although the documentation reads to me a bit as if this is, or can be, handled automatically in the event of an aiopenapi3.errors.ContentLengthExceededError exception.

@commonism
Owner

I think you're referring to the wording at https://aiopenapi3.readthedocs.io/en/streaming/advanced.html#streaming-responses
I agree, it can be read that way; will change it.

It can not be handled automatically: stream() has different return values, and you have to close the session yourself.

For the rest - yes, using ….stream()

    req = client.createRequest("file")
    headers, schema_, session, result = await req.stream(…)

    async for i in result.aiter_bytes():
        …

    await session.aclose()
