Non-standard Response Content #126
(Referenced code: aiopenapi3/aiopenapi3/v30/glue.py, lines 431 to 437 at e0b4d67)
You can use a Message plugin - see aiopenapi3/aiopenapi3/v30/glue.py, lines 336 to 342 at e0b4d67:

class customContentParser(aiopenapi3.plugin.Message):
    def received(self, ctx: "Message.Context") -> "Message.Context":
        # only handle our non-standard content type; pass everything else through
        if ctx.content_type != "custom/unknown":
            return ctx
        try:
            ctx.received = myparser(ctx.received)
        except Exception as e:
            raise ResponseDecodingError(ctx.received, None, None) from e
        return ctx

api = OpenAPI.load_sync("…", plugins=[customContentParser()])

The ResponseDecodingError will lack operation & response, but that may do? I'm preparing a new release already.
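The `myparser` call in the snippet above is left undefined. As a hedged illustration only, a stand-in parser for a hypothetical `custom/unknown` payload of newline-separated key=value pairs might look like this (the function name, payload format, and error handling are my assumptions, not part of aiopenapi3):

```python
# Hypothetical parser for a made-up "custom/unknown" payload consisting of
# newline-separated key=value pairs; stands in for the myparser() above.
def myparser(raw: bytes) -> dict:
    parsed = {}
    for line in raw.decode("utf-8").splitlines():
        if not line.strip():
            continue  # skip blank lines
        key, sep, value = line.partition("=")
        if not sep:
            # Malformed line: let the plugin wrap this in ResponseDecodingError
            raise ValueError(f"unparseable line: {line!r}")
        parsed[key.strip()] = value.strip()
    return parsed

print(myparser(b"level=info\nmsg=started\n"))  # → {'level': 'info', 'msg': 'started'}
```

Any exception raised here is caught by the plugin's `received()` and re-raised as `ResponseDecodingError`, so the parser itself can stay simple.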
Yes, that does work. Thank you. And thank you for preparing the release! This is an edge-case scenario, but I'm trying to ingest log data, and the log files can be pretty large. Ideally, I'd like to iterate over the received data in chunks and save those chunks as individual files, then set the … If I'm understanding the httpx documentation correctly, … Thanks again!
httpx can do streaming responses - https://www.python-httpx.org/quickstart/#streaming-responses

I see two options: …

I'd go for 2 - just guessing …

req = api.createRequest("operationId")
req._prepare(data=body, parameters={"a": 1})
s = httpx.Client()  # httpx uses Client, not Session
hr = req.build_req(s)
r = s.send(hr, stream=True)
for i in r.iter_bytes():  # httpx's chunked-iteration API
    …

Async via httpx.AsyncClient & aiter_bytes.
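The async consumption pattern alluded to above can be sketched without aiopenapi3 or a live server; here a stdlib async generator stands in for an httpx response's `aiter_bytes()` (the generator and its chunk contents are illustrative assumptions):

```python
import asyncio

# Stand-in for httpx.Response.aiter_bytes(): an async generator of byte chunks.
async def fake_aiter_bytes():
    for chunk in (b"log line 1\n", b"log line 2\n", b"log line 3\n"):
        yield chunk

async def collect_chunks() -> list:
    chunks = []
    # Same shape as `async for i in r.aiter_bytes():` with httpx.AsyncClient
    async for chunk in fake_aiter_bytes():
        chunks.append(chunk)
    return chunks

chunks = asyncio.run(collect_chunks())
print(b"".join(chunks).decode())
```

With a real `httpx.AsyncClient`, the request would be sent via `client.send(request, stream=True)` and the loop body would write each chunk out instead of collecting it in memory.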
As this is a pre-release, install/upgrade via https://pypi.org/project/aiopenapi3/#history
I felt this was a use-case worth covering.
Oh! This is awesome. If I'm following the commits correctly, the idea would be to use the …
I think you refer to the wording at https://aiopenapi3.readthedocs.io/en/streaming/advanced.html#streaming-responses - "Can not be handled automatically." For the rest - yes, using ….stream():

req = client.createRequest("file")
headers, schema_, session, result = await req.stream(…)
async for i in result.aiter_bytes():
    …
await session.aclose()
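Connecting this back to the original goal of saving chunks as individual files: a small helper that accepts any iterable of byte chunks (whatever `iter_bytes()` yields in the sync case) could do the bookkeeping. The function name and file-numbering scheme below are my own illustration, not part of aiopenapi3:

```python
import pathlib
import tempfile

def save_chunks(chunks, dest_dir, prefix="chunk"):
    """Write each byte chunk to its own numbered file and return the paths."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    paths = []
    for n, chunk in enumerate(chunks):
        path = dest / f"{prefix}-{n:06d}.bin"
        path.write_bytes(chunk)
        paths.append(path)
    return paths

# Works with any sync iterable of bytes, e.g. r.iter_bytes() from httpx.
with tempfile.TemporaryDirectory() as d:
    paths = save_chunks([b"first", b"second"], d)
    print([p.name for p in paths])  # → ['chunk-000000.bin', 'chunk-000001.bin']
```

For the async `….stream()` path, the same logic would sit inside the `async for` loop, writing one file per chunk before `await session.aclose()`.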
https://github.com/commonism/aiopenapi3/blob/e0b4d67513c64530ad4004f624731376a2bd257c/aiopenapi3/v30/glue.py#L437C27-L437C27
Would it be appropriate to allow plugins to handle parsing data at this point in the code, or do you feel that's something well outside the scope of this project?
Also, when you get a chance, would you please push the current code base to PyPI?