Option to fetch raw byte as is. (without decompressing) #1524
Is this a request for server implementations of
yea pretty much... thought i would bring it up here first though...
I think we'd need some pretty compelling use cases to consider this, as it would be somewhat non-trivial to do as I understand it.
Here is a compelling case for a Chrome extension:
Another use case could be to do a partial download of something that's encoded and also supports range requests. Say that I want to download something really large. I initiate a call:

```js
const response = await fetch(url, {
  method: 'GET',
  raw: true,
  headers: {
    'accept-encoding': 'gzip, deflate'
  }
})
```

From now on I will know
but I will not know what the actual data is unless I pipe it to a

```js
const progress = document.querySelector('progress')
const chunks = [] // ultra simple store

for await (const rawChunk of response.body) {
  // show how much has been downloaded (not how much has been decompressed)
  progress.value += rawChunk.byteLength
  // store the chunks somewhere
  chunks.push(rawChunk)
}
```

With this in place I can provide a good solution for failed downloads. Now that I have all the chunks I can go ahead and decompress them using the

Unfortunately we lose some very useful stuff with this raw option: we can't use brotli decoding (due to lack of support in `DecompressionStream`).

Another option would be to be able to hook in and inspect the data somehow before it's decompressed, so an alternative solution could be to do something like:

```js
const response = await fetch(url, {
  onRawData (chunk) {
    // ...
  }
})
```

```js
// alternative considerations
const response = await fetch(url)
const clone = response.clone()
response.json().then(done, fail)
clone.rawBody.pipeThrough(monitor).pipeTo(storage)
// consuming the rawBody makes `clone.body` locked and unusable.
```

So I can say that I found two additional use cases besides a server proxy: 1) progress monitoring, 2) pausable / resumable download.
One other use case: if I wish for an application to cache the compressed response data with a custom storage layer, it would be convenient if the application could take the data as encoded by the server and push it into the cache directly. At the moment, one can only grab the data in its decompressed form, which would either waste space in the cache or require the application to re-compress it.
Out of curiosity, how did you do it with XMLHttpRequest? I didn't think that was possible. |
There is a need for proxy servers to simply pass data forward from A -> B without decompressing it, as decompressing would invalidate the `content-length` and `content-encoding` headers (like a CORS proxy server, for instance). So we need an option to disable transformation.