Allow clients retrieving api/download/ and api/download/blob to retrieve direct from S3 host #40
Comments
You're right that S3 objects are currently proxied through the Send instance. There is currently nothing in place to support something like this, but I'm sure it can be implemented, as long as the S3 instance has some public endpoint. I wonder what security implications this might have, though. I can think of the following things:
- the download limit can no longer be enforced, because a direct S3 link can be reused;
- the expiry time can no longer be enforced either, because the direct link keeps working for as long as the object exists.

These two points aren't a problem when Send is proxying, because it enforces these limits. You might consider it an acceptable risk, though.
Are you sure that, in the latest version, different paths are used for downloading the actual blobs? I don't recall there being any logic for this. The URL should have the […] format. Just a heads up: the 1- prefix you're seeing in the object name is added by Send itself.
At least for the B2 account I'm testing with, there's a way to allow only authorized connections using an ID and key. See here: […] The method described there doesn't help a whole lot with preventing unlimited downloads, since it authorizes ALL downloads from Cloudflare, but there's no reason the same authorization method wouldn't work directly from Send. That might be a separate feature request after allowing direct connections in the first place, though, especially since the different S3 providers probably handle authorization for public links differently, and it could be a lot of work to support several of them.

After some more testing, the /api/download/blob link seems to only show up in the Firefox private windows I was testing with. All other browsers, and Firefox normal windows, use just the /api/download/ format, which for some reason I can't get to work using the MITM rewrite I'm currently experimenting with. The blob variant works, the non-blob one doesn't.

Thanks for the heads-up about the prefix; I was only testing and probably should've read the code a bit to figure out where that came from.
I assume this to be a comment on the two points I made regarding security here. The problem here lies in URL/token re-usability. All downloads through Send decrease the file download counter, and once the file expires there is no way to download it afterwards, even through the same link. If you provide the client with a direct link to the blob, you might decrease the download count by 1 on the Send server when sharing the direct link, but the client will be able to reuse that direct blob link as many times as they'd like (given its token doesn't expire). There is no way for Send to take control of this. A single-use token for S3 files would solve this, but I'm not aware of that being a thing. Note that this may be a perfectly fine risk depending on your use case. I'd be perfectly fine with it if this was implemented in Send.
Nice find! Looking into the route configuration, it does indeed show two different blob download routes (lines 96 to 101 at commit 825e394).
Downloading worked differently back in the day. When the Send client detects an old browser it might fall back to that method, and this old downloading method likely uses the old route. Running a private window might somehow trigger this. The routes both link to the same download logic here, so I'm not really sure why this is the case. I'm sorry, I don't know the exact details and don't have much time right now to look into this further.
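Since both routes land in the same download logic, that shared handler would also be the natural place for a direct-download branch. A minimal sketch, assuming a hypothetical s3DirectUrlTemplate config option and storage.getStream helper (neither exists in Send):

```javascript
// Sketch only: how the shared download handler could branch between
// proxying (today's behaviour) and a direct 302 redirect to S3.
async function download(req, res, config, storage) {
  const id = req.params.id;
  if (config.s3DirectUrlTemplate) {
    // Send the client straight to the S3 host instead of proxying.
    res.writeHead(302, {
      Location: config.s3DirectUrlTemplate.replace('{id}', encodeURIComponent(id)),
    });
    res.end();
    return;
  }
  // Default behaviour: stream the object through the Send instance.
  const stream = await storage.getStream(id);
  stream.pipe(res);
}
```

Both the old and new routes could call this same function, so the private-window quirk wouldn't matter for the feature.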
I suspect the blob route is triggered in private windows because private mode has local storage limits; testing private windows with 2 GB+ downloads triggers a "noStreamsOptionFirefox" message and the downloads fail. No worries, thanks for responding so quickly and for keeping Send alive!
Looks like I had a typo in my redirect, so I'm taking back the "works on blob / doesn't work otherwise" comment. I can confirm that redirects for both /api/download and /api/download/blob work.
Fantastic! Yeah, supporting this would be a nice addition. I wonder what configuration options Send should provide for this. Would you like to give implementing this a try?
I'm not much of a programmer myself, more of a sysadmin, but I can try and find help 🙂
Feel free to give it a try for sure. If you get stuck, just ask here. Otherwise I may be able to implement it after the weekend.
I think the three most popular S3 backends are AWS, Backblaze, and Minio. All three have their own formats for direct downloads: […] So maybe a single variable, something like […], would be enough? Or do it via a URL template, which is more flexible but harder to configure and for pretty niche cases.
For https://send.vis.ee/ I'm using Digital Ocean Spaces, which uses yet another format. Implementing a template system might actually be pretty easy, as it shouldn't need to do much more than set the file name and possibly some authentication details. Replacing a […]
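A template approach could be very small. A sketch, assuming a hypothetical template string with an {id} placeholder (the option name and the example provider URLs below are illustrative, not existing Send config):

```javascript
// Substitute the file id into a configured URL template. The {id}
// placeholder is an assumed convention, not an existing Send option.
function directDownloadUrl(template, fileId) {
  // Encode the id so it can't escape its path segment.
  return template.replace('{id}', encodeURIComponent(fileId));
}

// One function covers the differing provider URL shapes, e.g.:
//   AWS:       https://BUCKET.s3.REGION.amazonaws.com/{id}
//   Backblaze: https://B2HOST/file/BUCKET/{id}
//   Minio/DO:  https://HOST/BUCKET/{id}
```

Authentication details (signed query parameters and the like) would still be provider-specific and would have to layer on top of this.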
Use presigned URLs. As for the download counter, have a client-side file stream send events back to the server, and only decrease the counter on a completed-download event. (I realize this isn't necessary.)
I agree that such an approach could be a nice middle ground. It'll then be up to the webmaster to decide whether to enable such functionality. Sadly, I don't have any bandwidth to implement such a feature right now, though.
This must be implemented on the server side, and the counter must be decreased when the download is requested, to prevent abuse.
This suggestion was meant to go hand in hand with the previous presigned-URL suggestion: with an "effectively" single-use token, I figured the counter didn't need to be server side, as only the one stream was capable of being used anyway. I know it would still be abusable and isn't perfect, but it's a small workaround for not being able to monitor the stream server-side. For abuse prevention with this method you could set a hard download cap and a soft cap: the soft cap is determined by the client-side completion event, and the hard cap is the server-side download request count, set a few higher than the soft cap. I realize part of the reason for Send's existence is to be as secure as possible, so it's not a great recommendation; I'm just trying to think of ways to make presigned URLs integrate okay.
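The soft/hard cap idea above can be sketched in a few lines. All names here are illustrative (nothing in Send's actual code), with softCount counting client-reported completions and hardCount counting URLs the server has issued:

```javascript
// Tolerate a few abandoned/incomplete downloads beyond the limit.
const HARD_CAP_SLACK = 3;

function canIssueDownload(record) {
  return (
    record.softCount < record.limit && // soft cap: confirmed completions
    record.hardCount < record.limit + HARD_CAP_SLACK // hard cap: issuance
  );
}

function issueDownload(record) {
  if (!canIssueDownload(record)) return null;
  record.hardCount += 1; // counted at request time, server-side
  return `/signed-url-for/${record.id}`; // stand-in for a presigned URL
}

function reportCompleted(record) {
  record.softCount += 1; // fired by the client-side completion event
}
```

As noted, the soft cap is advisory (a client can simply not report), so the hard cap is what actually bounds abuse.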
Has this issue been abandoned or is it something that is still being worked on? |
Would it be possible to redirect downloads directly to the S3 host rather than proxying through the host that's running Send? There's a bunch of CORS configuration that needs to happen, but it would be great as a way to allow a host with slow uploads to utilize bandwidth direct from the S3 host, instead of limiting download speed to the upload speed of the Send host.
I've managed to get it working with a rewrite of /api/download/blob to my Backblaze B2 URL for each file, via
/api/download/blob/(DOWNLOADid) -> /file/BUCKETNAME/1-DOWNLOADid
but it would be nice if it were actually supported. Also, I can't figure out why it sometimes calls /download/blob and sometimes just /download/, and it only works properly with blob.
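For anyone wanting to reproduce this workaround, the rewrite above could look roughly like the following in nginx. This is a sketch of the described mapping only; B2HOST and BUCKETNAME are placeholders for your own B2 download host and bucket, and the 1- prefix matches what Send prepends to the stored object name:

```nginx
# Redirect both download routes straight to the B2 bucket.
location ~ ^/api/download(?:/blob)?/(?<id>[^/]+)$ {
    return 302 https://B2HOST/file/BUCKETNAME/1-$id;
}
```

Note that, per the discussion above, a redirect like this bypasses Send's download counter and expiry enforcement entirely.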