Streams not quite working correctly #6
I'm sure this used to work, but it now throws "missing header" exceptions at the HTTP level when communicating with Azure, which threw me quite a lot. It appears the read stream the API provides can no longer be stat'ed, i.e. details such as its length are not available. When writing a stream to the API, the content length must be known from the start. The MS library assumed the length was available simply because a resource was provided, when in fact the length was unknown (null). That corrupted the write request to the API (it carried a null length) and threw up all sorts of obscure errors.
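The mismatch can be seen by stat'ing a resource before writing. A minimal sketch (the local stream here is illustrative, standing in for the streams involved, not the actual Azure wrapper):

```php
<?php
// Sketch: how a writer might discover the content length of a resource.
// A local stream supports stat and reports a size; a remote wrapper stream
// may not, in which case fstat() returns false and the length is unknown.
$local = fopen('php://temp', 'r+');
fwrite($local, 'hello');
rewind($local);

$stat = fstat($local);
$length = ($stat !== false && isset($stat['size'])) ? $stat['size'] : null;
// For this local stream the length is known (5 bytes); for a non-statable
// stream it would be null, and a null length passed through to the HTTP
// request is what corrupts it.
```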
The read streams provided by the MS library are now copied to a local temporary stream. This does mean the full file is read from the API to the local server, but using the temp stream keeps that from causing memory issues: the temp stream holds the first 2 MB in memory, then falls back to local disk for any remaining bytes. An integration test has been added to demonstrate a streaming copy from one file to another.
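The memory-then-disk behaviour comes from PHP's built-in `php://temp` wrapper; a minimal sketch, assuming a 2 MB threshold (the source stream here is a stand-in for the MS library's read stream):

```php
<?php
// php://temp keeps data in memory up to maxmemory bytes (2 MB here),
// then transparently spills the remainder to a temporary file on disk.
$twoMb = 2 * 1024 * 1024;
$temp = fopen("php://temp/maxmemory:$twoMb", 'r+');

// Stand-in for the (non-statable) API read stream.
$apiStream = fopen('php://memory', 'r+');
fwrite($apiStream, 'example file contents');
rewind($apiStream);

// Copy the source into the temp stream, then rewind for reading.
stream_copy_to_stream($apiStream, $temp);
rewind($temp);

// The temp stream *is* statable, so the writer now knows the length.
$size = fstat($temp)['size'];
```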
Please note: there is something very wrong within the Azure file storage API itself, and I don't believe it is the PHP library that uses it. A very wide class of errors, such as a simple extra "." on the end of a path, makes Azure file storage throw what can only be described as a "wobbly": it reports totally irrelevant errors about missing headers, invalid authentication, etc. This is a big gotcha if you are not aware of it, and it can lead you down the wrong path diagnosing bugs for days if you're not careful.
This workaround is going to be permanent unless the MS filesystem library changes. Basically, the API stream writer needs a resource property that the API stream reader does not provide, so they cannot be piped back-to-back. The workaround puts an intermediate local memory+disk streaming cache between the two. A better approach might be a check in the wrapper around the API writer: if the stream being passed in is statable, pass it through; if not, wrap the resource in a statable local resource (such as the temp stream used here).
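That check could look something like this: a hypothetical helper, not part of the MS library, that only buffers when it has to:

```php
<?php
/**
 * Hypothetical sketch: return a statable resource for the API writer.
 * If the incoming stream already reports a size, pass it through
 * untouched; otherwise buffer it into a memory+disk temp stream first.
 */
function ensureStatable($stream)
{
    $stat = @fstat($stream);
    if ($stat !== false && isset($stat['size'])) {
        return $stream; // already statable: pipe it straight through
    }

    // Not statable: copy into a 2 MB memory / disk-backed temp stream.
    $temp = fopen('php://temp/maxmemory:2097152', 'r+');
    stream_copy_to_stream($stream, $temp);
    rewind($temp);
    return $temp;
}
```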
Was trying to copy a file between two separate disks on Laravel:
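The copy was along these lines; a sketch assuming Laravel's Storage facade, with illustrative disk names and path rather than the actual configuration:

```php
<?php
use Illuminate\Support\Facades\Storage;

// Streaming copy between two Laravel disks.
// 'azure' and 'local' are hypothetical disk names; $path is illustrative.
$path = 'reports/example.pdf';
Storage::disk('local')->writeStream(
    $path,
    Storage::disk('azure')->readStream($path)
);
```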
Error from Azure is:
Workaround in my application for now:
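A workaround of this shape, again assuming Laravel's Storage facade with illustrative names, reads the whole file into memory before writing, so the length is always known and no streaming is involved:

```php
<?php
use Illuminate\Support\Facades\Storage;

// Buffered copy: get() returns the full contents as a string, and put()
// writes a string, so no stream ever needs to be stat'ed.
// 'azure' and 'local' are hypothetical disk names; $path is illustrative.
$path = 'reports/example.pdf';
Storage::disk('local')->put(
    $path,
    Storage::disk('azure')->get($path)
);
```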
My files are limited in size, so this is not such a problem. It would still be nice to get to the bottom of this issue though, albeit not with much urgency.
Given that workaround, the stream reading is working, but the stream writing is not.
This would be a simpler line than the stream reading, but I've left it in as a reminder: