S3: Support for multipart upload using presigned url #1603

Closed
oyeanuj opened this issue Jun 28, 2017 · 51 comments
Labels
feature-request A feature should be added or improved.


@oyeanuj

oyeanuj commented Jun 28, 2017

Hi folks! Apologies if this doesn't qualify as an issue, but I'm posting it as a last resort after spending hours looking for a definitive answer to the following:

Does the SDK support, or plan to support, multipart upload using presigned PUT URLs?

I didn't find anything like that in the library documentation. The S3 blog has a post about Managed Uploads, which intelligently chunk the upload, but that doesn't seem to have any parameter for a signed URL.

The only thing I could find is #468, which is about a couple of years old. Libraries like EvaporateJS and this older library by @Yuriy-Leonov seem to support it. That makes me think that this SDK would have support for it as well (which would be much preferable), but I was unable to confirm that.

And if it doesn't support it today, is this something that is on the radar? If not, is there a recommended way of implementing it (chunking the large file and signing a URL for each part)?

Thank you!

@chrisradek
Contributor

@oyeanuj
Are you trying to perform a multipart upload from a browser? The Managed Uploader you linked to accomplishes that, and works directly with File objects. You can also use it indirectly by calling s3.upload. s3.upload will perform a multipart upload behind the scenes if your file is larger than 5 MB.
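For reference, a minimal sketch of that s3.upload path in the browser (assuming the SDK is already configured with credentials, e.g. via Cognito; the bucket name and input element are placeholders):

const s3 = new AWS.S3();
const file = document.querySelector('#file-input').files[0]; // hypothetical file input

const upload = s3.upload(
  { Bucket: 'my-bucket-name', Key: file.name, Body: file, ContentType: file.type },
  { partSize: 10 * 1024 * 1024, queueSize: 4 } // multipart is handled automatically for larger bodies
);
upload.on('httpUploadProgress', (evt) => console.log(`${evt.loaded} / ${evt.total}`));
upload.send((err, data) => {
  if (err) return console.error(err);
  console.log('Uploaded to', data.Location);
});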

The libraries you linked to don't appear to be using presigned urls to handle multipart uploads. EvaporateJS does ask for a signingUrl, but this is actually a url to a service you host that returns a v4 signature, which isn't the same as a presigned url.

Can you provide some more feedback on what you're trying to accomplish with the SDK?

@oyeanuj
Author

oyeanuj commented Jun 29, 2017

@chrisradek thank you for responding!

My use case was uploading a file from the client without sending it through the server side (Ruby). I was using it in the context of React/Redux, so I didn't want to deal with getting forms through createPresignedPost. From my research, it seemed the simplest and most often recommended way to do that was generating a presigned URL to make a PUT request from the client.

Since the question yesterday, I chatted with @jeskew and @dinvlad, and it seems that if I wanted to do multipart without sending the files through my server, or without createPresignedPost, I'd have to use an STS token (which seems a little more complicated than creating a presigned URL).

So, at this moment, I am doing a simple upload without chunking or multipart support. But I'd love to be able to do that, since the presigned URL method feels the cleanest way to upload, and I will soon need to upload files up to 2 GB. So FWIW, I'd love to put in a vote for that in your backlog.

(and yes, you are right that those libraries require a server-side signature, my bad)

@ssshah5

ssshah5 commented Jul 18, 2017

I have a similar use case where I am using a pre-signed URL to upload a large zip file from the client side without going through a server. I agree with @oyeanuj that the presigned URL method feels the cleanest way to upload, and adding multipart support to it would be ideal.

@jeskew added the feature-request label and removed the Question label on Jul 19, 2017
@jeskew
Contributor

jeskew commented Jul 19, 2017

@ssshah5 Presigned URLs for a multipart upload aren't something that a client library could offer on its own -- you would need to coordinate between the client and server to get the appropriate URLs signed -- but I'll mark this as a feature request. In the meantime, you might want to look at using the createPresignedPost method to construct an HTML form. That will allow the browser to manage access to the filesystem and upload the file as a multipart form.
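A minimal sketch of that createPresignedPost route (Node.js, aws-sdk v2; the bucket, key, and size limit are placeholders, not values from this issue):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.createPresignedPost({
  Bucket: 'my-bucket-name',
  Fields: { key: 'uploads/my-file.zip' }, // fixed key for this example
  Expires: 3600, // seconds
  Conditions: [['content-length-range', 0, 5 * 1024 * 1024 * 1024]], // cap the upload size
}, (err, data) => {
  if (err) throw err;
  // data.url is the form action; data.fields become hidden form inputs
  console.log(data.url, data.fields);
});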

@ssshah5

ssshah5 commented Jul 19, 2017

@jeskew - Thank you for the details and for escalating this as a feature request. For our use case: the client reaches out to the server to get a pre-signed URL in order to PUT objects to object storage. The server then generates the URL (using the S3 Node module) and returns it to the client-side code (bash). The client-side code then tries to use this single URL to upload the entire object. Since the size of the object can be huge, we are using HTTP streaming (chunked transfer encoding). However, this doesn't seem to work with S3 object storage and I get an error back (Content-Length is missing), probably because it expects the transfer to send the ending bits at the end of the first chunk. This scenario works fine with Swift object storage, where a single temporary URL allows transmitting streaming data without generating temporary (pre-signed) URLs for each chunk.
Do you think it would be possible to upload chunked data using a single pre-signed URL?
Thanks

@jeskew
Contributor

jeskew commented Jul 19, 2017

@ssshah5 - S3 offers a mechanism for chunked transfer encoding, but it requires that each chunk be individually signed and that the length of the complete object be known beforehand.

@raphaeljoie

@jeskew — I used to use the block blob upload feature of Azure with a signed URI. It was really convenient for uploading large files from the client side, directly into storage.

How it works:
1. You generate a signed URI on the server side with write access.
2. The client splits the file, assigns a UUID to every chunk, and uploads the chunks using that same signed URI.
3. The client sends the list of UUIDs and Azure re-creates the file from the chunks sent in step 2.

If I understand your last post correctly, this is not possible on Amazon? (Because every single PUT chunk request must be individually signed.)

If that's the case, how do you upload big files from the client side directly to S3 without sending the key to the clients?

@absoludity

For reference, regarding the Azure API mentioned above: a blob storage client can be created with a presigned URL (shared-access signature): https://github.com/Azure/azure-storage-node/blob/master/browser/azure-storage.blob.export.js#L35

i.e., with that, you can do a normal upload that handles the chunking and sends the list of parts automatically: https://github.com/Azure/azure-storage-node/blob/master/lib/services/blob/blobservice.browser.js#L70
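A rough sketch of that Azure flow using the azure-storage browser bundle linked above (the account host, SAS token, and container are placeholders):

const blobUri = 'https://myaccount.blob.core.windows.net';
const sasToken = 'sv=...'; // shared-access signature generated server-side with write access
const blobService = AzureStorage.Blob.createBlobServiceWithSas(blobUri, sasToken);

const file = document.querySelector('#file-input').files[0]; // hypothetical file input
blobService.createBlockBlobFromBrowserFile(
  'my-container',
  file.name,
  file,
  { blockSize: 4 * 1024 * 1024 }, // the library handles splitting into blocks
  (err, result) => {
    if (err) return console.error(err);
    console.log('Uploaded block blob', result.name);
  }
);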

@ithinco

ithinco commented Jun 8, 2018

Really hope aws-sdk-js can provide this feature.

@Plasma

Plasma commented Sep 11, 2018

Just encountered this; surprised to see there's no way in the SDK to leverage pre-signed URLs for multipart uploads.

The reason is so we can offer multipart upload from the web browser, but of course keep AWS keys server-side only. How else is this supposed to work? Thanks!

@moralesalberto

Agreed. This would be a great feature. I was trying to find a way to do it and found this post. Seems like it is not doable today.

@alfadaemon

Agree, this IS a VERY needed feature. Hope we can see it available soon.

@sandyghai

I managed to achieve this in a serverless architecture by creating a canonical request for each part upload using Signature Version 4. You will find the documentation here: https://sandyghai.github.io/AWS-S3-Multipart-Upload-Using-Presigned-Url/

@delprofundo

I managed to achieve this in a serverless architecture by creating a canonical request for each part upload using Signature Version 4. You will find the documentation here: https://sandyghai.github.io/AWS-S3-Multipart-Upload-Using-Presigned-Url/

Do you have a code example? The instructions aren't really that clear for my case.

@dusty

dusty commented Oct 20, 2018

I was also looking for this and I ended up using STS to generate temporary security tokens for my client locked down to the particular bucket and path that I wanted to give them access to.

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html

I found this video on YouTube about it.

https://www.youtube.com/watch?v=4_csSXc_GNU

Perhaps that will help someone looking at this issue.

@tomasdev

tomasdev commented Nov 27, 2018

I am in the same boat; my use case is also partial uploads for big files on the JS client side. I want people to be able to resume uploads if they lose their connection, without losing all previously uploaded chunks. And I don't want to expose any credentials (thus not using the SDK on the client).

I will update this comment once I solve it.
UPDATE: following @sandyghai's guide, I was able to do it.

There may be syntax errors, as my backend does not use Express, but I felt writing it à la Express would help other devs understand it more easily.

Context: I have an API (behind auth, obviously) to which users can send files, and it uploads them to S3. As I didn't want to set up IAM for each user of my app, nor put the SDK in the front-end, I went with a back-end-authorized approach.

const express = require('express');
const AWS = require('aws-sdk');

const app = express();
app.use(express.json());

const s3 = new AWS.S3(); // credentials stay on the server

app.post('/upload', async (req, res) => {
  let UploadId = req.body.UploadId;

  const params = {
    Bucket: 'my-bucket-name',
    Key: req.body.filename
  };

  // Initialize the multipart upload - no need to do it on the client (although you can)
  if (req.body.part === 1) {
    const createRequest = await s3.createMultipartUpload(params).promise();
    UploadId = createRequest.UploadId;
  }

  // Save the UploadId in your front-end, you will need it.
  // Also sending the uploadPart pre-signed URL for this part
  res.send({
    signedURL: s3.getSignedUrl('uploadPart', {
      ...params,
      UploadId,
      Expires: 60 * 60 * 24, // this is optional, but I find 24 hours very useful
      PartNumber: req.body.part
    }),
    UploadId,
    ...params
  });
});

app.post('/upload-complete', async (req, res) => {
  const UploadId = req.body.UploadId;

  const params = {
    Bucket: 'my-bucket-name',
    Key: req.body.filename
  };

  const data = await s3.completeMultipartUpload({
    ...params,
    MultipartUpload: {
      Parts: req.body.parts // [{ PartNumber, ETag }, ...] collected on the client
    },
    UploadId
  }).promise();

  // data = {
  //   Bucket: "my-bucket-name",
  //   ETag: "some-hash",
  //   Key: "filename.ext",
  //   Location: "https://my-bucket-name.s3.amazonaws.com/filename.ext"
  // }
  res.send({ ...data });
});

TL;DR: it is possible, so feel free to close the ticket, IMHO.

@prestonlimlianjie

Hi friends!

I realized that this was a topic that did not have much documentation, so I made a demo repo in case anyone wanted to reference my implementation of multipart+presigned uploads to S3.

https://github.com/prestonlimlianjie/aws-s3-multipart-presigned-upload

@shawnly

shawnly commented Dec 19, 2018

> [quotes @tomasdev's comment and code above in full]

Do you have front end and back end working code for your solution?

@tomasdev

I have already posted the back-end code.

The front-end doesn't do anything special, just a fetch with method PUT, passing the binary buffer as the body.
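A minimal sketch of what that front-end call might look like, assuming the /upload endpoint above returns a signedURL per part (this is not the author's actual code):

async function uploadPart(file, partNumber, partSize, signedURL) {
  const start = (partNumber - 1) * partSize;
  const chunk = file.slice(start, start + partSize); // Blob slice, no need to read the whole file
  const res = await fetch(signedURL, { method: 'PUT', body: chunk });
  if (!res.ok) throw new Error(`Part ${partNumber} failed with ${res.status}`);
  // The bucket's CORS config must expose the ETag header for this to be readable
  return { PartNumber: partNumber, ETag: res.headers.get('ETag') };
}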

@shawnly

shawnly commented Dec 19, 2018

I have already posted the back-end code.

The front-end doesn't do anything special, just a fetch with method PUT, passing the binary buffer as the body.

@TroyWolf

TroyWolf commented Feb 7, 2019

I have already posted the back-end code.

The front-end doesn't do anything special, just a fetch with method PUT, passing the binary buffer as the body.

@tomasdev , thank you very much for your back-end API example code!

Unencumbered by facts, your back-end code suggests that for every individual part in the client, you make a call to your API to get a new signed URL. So it seems you do have some relatively specific client code beyond a standard file upload going on. As @shawnly suggested, it would be helpful to share it.

I think what we all want, though, is a more direct support of S3 Multipart Upload in the browser using a pre-signed URL such that a single signed URL could be used for the entire upload process of a single file regardless of number of parts.

A process that must make an API call to our own API between every chunk would surely slow the otherwise direct S3 upload way down. Kind of defeats some of the genius of "direct upload to S3 from browser".

Kudos also to @sandyghai for his work and for sharing it with us.

April 2021 update (way late): The solution I use in production now does in fact make an API call to my own API for every part to get a presigned URL. This is necessary since I am now using an MD5 hash for each part. So now, as I process each chunk in the browser, I generate the MD5 hash client-side, then send that hash and the part details to my API to get the presigned URL for that part. It works great. My concerns about performance were unfounded.

@tomasdev

tomasdev commented Feb 7, 2019

That's not how multipart uploads work; you'd need authentication on each request.

My front-end is within an Electron app, so it uses fs to read files in chunks, and I can't share it due to legal contracts with my company. But it should be doable with a FileReader API stream like https://github.com/maxogden/filereader-stream

@TroyWolf

TroyWolf commented Feb 7, 2019

That's not how multipart uploads work, you'd need authentication on each request.

I understand that. Consider that today, the S3 JavaScript SDK supports a multipart upload. It makes it very simple for the user. The user does not have to manage the individual parts--it's hidden in the SDK. But it only works with your actual credentials. The desire, therefore, is for the SDK to have a method that can do the same thing, but accept a pre-signed URL for the auth. If they wanted to, unencumbered by facts, AWS could support a signed URL that authenticates a single file ID and all its parts.

In the meantime, I am going to try the approach used by you and sandyghai, but with a twist. My thought is to add a "reqCount" param to my custom API that is responsible for making the s3.getSignedUrl() call. I'll go into a loop and generate multiple signed URLs, adding 1 to the part number each time. This way, my API can, for example, do s3.createMultipartUpload(), and return 10 signed URLs to my client--one each for parts 1 - 10. This would cut down my API calls by a factor of 10.

Better yet, it would be trivial to use file.size to estimate how many parts will be needed. This would allow me to initiate the upload and return all signed URLs for all parts in a single request to my custom API.
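A rough sketch of that idea (same aws-sdk v2 client and param shapes as the snippets earlier in this thread; names are placeholders):

async function startMultipartUpload(bucket, key, fileSize, partSize) {
  const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key }).promise();
  const partCount = Math.ceil(fileSize / partSize);
  const urls = [];
  for (let partNumber = 1; partNumber <= partCount; partNumber++) {
    urls.push(s3.getSignedUrl('uploadPart', {
      Bucket: bucket,
      Key: key,
      UploadId,
      PartNumber: partNumber,
      Expires: 60 * 60 * 24,
    }));
  }
  return { UploadId, urls }; // one presigned URL per part, all from a single API call
}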

Of course once all parts are uploaded, I need to make an additional call to my custom API to do s3.completeMultipartUpload().

What do you think of this approach? ☕

@McSheps

McSheps commented Oct 19, 2019

I've just encountered this issue as well; my only solution for now is to use STS to create a temporary set of credentials by assuming an upload-only role with a policy restricting it to the single object location. I further added a condition that restricts based on the requester's IP address for good measure.
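A sketch of that STS approach (Node.js, aws-sdk v2); the role ARN, key, and IP below are placeholders, not values from this thread:

const AWS = require('aws-sdk');
const sts = new AWS.STS();

async function getScopedUploadCredentials(bucket, key, clientIp) {
  const policy = {
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: ['s3:PutObject', 's3:AbortMultipartUpload', 's3:ListMultipartUploadParts'],
      Resource: [`arn:aws:s3:::${bucket}/${key}`],
      Condition: { IpAddress: { 'aws:SourceIp': clientIp } },
    }],
  };
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::123456789012:role/upload-only', // placeholder role
    RoleSessionName: 'browser-upload',
    DurationSeconds: 3600,
    Policy: JSON.stringify(policy), // session policy narrows the assumed role further
  }).promise();
  return Credentials; // AccessKeyId, SecretAccessKey, SessionToken for the browser SDK
}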

@madmox

madmox commented Dec 13, 2019

That combined with the fact the demand for this solution is apparently very low--note you and @McSheps are the only ones asking about this here in the last 8 months.

@TroyWolf
GitHub discourages commenting just for the sake of +1'ing feature requests. In my opinion, many people used the +1 reaction on the initial post during the last 8 months, so this feature is still in active demand.

Your solution is brilliant, though it's very low-level and needs a lot of work and testing to be production-ready. You are basically reimplementing multipart upload strategies to optimize data transfer rates!

In this case, I prefer the STS approach mentioned in earlier posts: just generate a set of temporary credentials for a sub-path in the bucket. The YouTube video already linked above is quite simple to follow.

But then you need to authorize a whole folder for each upload, which may be a little more than what you want to allow your clients, even if you organize uploads so that existing files are not included in the policy (e.g. create a folder with a unique name for each upload).

@TroyWolf

In my opinion, many people used the +1 reaction on the initial post during the last 8 months, so this feature is still actively demanded.

I failed to notice those +1 reactions. You are correct, @madmox

@ggolda

ggolda commented Mar 10, 2020

Any updates related to this?

@TroyWolf

An update, as I've learned a lot more about the strategy of uploading files directly from the browser to an S3 bucket using multipart and presigned URLs. I have developed server and client code to support resume as well as parallel chunk uploads in a parts queue. My latest client solution also supports MD5 checksums on all parts.

Previously I shared a strategy where my API call to start the upload returned an array of all the presigned URLs--one for each chunk. However, to support an MD5 checksum on each chunk, you'll make an API call to fetch each presigned URL individually--passing in the chunk's MD5 hash generated in the browser. Otherwise you'd have to read the entire file into browser memory to generate all the MD5 hashes up front.

My original concern about potentially making hundreds of API calls to get presigned URLs individually was unfounded.

https://troywolf.com

@danielcastrobalbi

@TroyWolf could you please share some code examples for your strategy? It seems like a very interesting approach to the same problem I have in my project.

@TroyWolf

TroyWolf commented Jul 17, 2020

@danielcastrobalbi and all, Due to client agreements for custom solutions I've developed around this, I can't freely share the source in public at this time. I am free to consult privately or even develop similar solutions, but I'd have to ensure you aren't a direct competitor of my existing file upload clients before we dive in too deep.

Just figuring out the architecture for an "S3 Multipart Presigned URL Upload from the Browser" solution was a pretty daunting task for me. I've tried to outline this previously in this thread, but let me try again--adding in more bits from my experience so far.

I like to think about "3 players" in the mix:

  1. AWS (S3 bucket)
  2. My API (server)
  3. My Client (browser)

While this is an "upload straight from the browser" solution, you still need your own API to handle parts of the process. The actual file uploading, though, does not go through your own API. This is a major advantage of this solution. You won't pay for server CPU, memory and bandwidth to proxy the file into your S3 bucket. In addition, it's typically faster without your server as a middleman in the upload.

A very high-level view of the process:

  1. Client: User selects file(s) for upload in browser. Do this however you want--normal file dialog, folder dialog, drag & drop.
  2. Client: Pass the filename, size, and type to your own API
  3. API: Make the AWS request to start an MPU. Pass the MPU ID to Client
  4. Client: Iterate over the file one chunk at a time. For each chunk, calculate an MD5 hash and call your API with the filename, size, type, md5 hash, part number (chunk sequence), and MPU ID.
  5. API: Generate a chunk-specific, pre-signed URL. This uses the AWS SDK but happens locally--no request out to AWS. Send pre-signed URL back to Client.
  6. Client: Upload the chunk to AWS using the pre-signed URL. The response includes an ETag you need to keep track of for the final step to complete the MPU.
  7. Repeat 4-6 until all parts are uploaded.
  8. Client: Call your own API to complete the MPU. Pass in the array of ETags for the uploaded parts.
  9. API: Make AWS request to complete the MPU.
  10. ???
  11. Profit

A lot of nitty-gritty between those lines!
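As a sketch of steps 4-6 from the browser side (the '/url' endpoint matches the API snippet later in this thread; everything else is a placeholder, not production code):

async function uploadAllParts(file, key, uploadId, partSize) {
  const parts = [];
  const partCount = Math.ceil(file.size / partSize);
  for (let partNumber = 1; partNumber <= partCount; partNumber++) {
    const chunk = file.slice((partNumber - 1) * partSize, partNumber * partSize);
    const md5 = await chunkMd5(chunk); // base64 MD5, e.g. via SparkMD5 as shown further down
    const { url } = await fetch('/url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key, uploadId, partNumber, md5 }),
    }).then((r) => r.json());
    const res = await fetch(url, { method: 'PUT', body: chunk, headers: { 'Content-MD5': md5 } });
    parts.push({ PartNumber: partNumber, ETag: res.headers.get('ETag') });
  }
  return parts; // hand these to the completeMultipartUpload step (8-9)
}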

Resume is made possible by the fact AWS holds onto the uploaded parts until you either complete the MPU or send a request to delete the MPU. There is an AWS API request that takes an MPU ID and reports back on what parts are already uploaded. To resume the file upload, just start at the next chunk.
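That "what parts are already uploaded" check is s3.listParts; a minimal sketch (aws-sdk v2):

async function getUploadedPartNumbers(bucket, key, uploadId) {
  const uploaded = [];
  let marker;
  do {
    const page = await s3.listParts({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumberMarker: marker,
    }).promise();
    page.Parts.forEach((p) => uploaded.push(p.PartNumber));
    marker = page.IsTruncated ? page.NextPartNumberMarker : undefined;
  } while (marker);
  return uploaded; // chunks to skip when resuming
}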

You can upload the parts in any order and many in parallel if you want. This is how speed gains can be realized.

Pro tip: Use a bucket lifecycle rule to automatically delete incomplete MPUs after some period of time. The hidden danger is that the incomplete MPU parts sit hidden in your bucket, costing you storage space, and there is no way to see them in the AWS Console UI! You could, for example, tell the bucket to "delete MPU parts that are more than 5 days old".
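In S3 terms that cleanup is a lifecycle rule with AbortIncompleteMultipartUpload; a sketch (aws-sdk v2, the rule ID and bucket are placeholders):

s3.putBucketLifecycleConfiguration({
  Bucket: 'my-bucket-name',
  LifecycleConfiguration: {
    Rules: [{
      ID: 'abort-stale-multipart-uploads',
      Status: 'Enabled',
      Filter: { Prefix: '' }, // whole bucket
      AbortIncompleteMultipartUpload: { DaysAfterInitiation: 5 },
    }],
  },
}).promise().catch(console.error);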

If anyone wants consulting for this, please feel free to reach out to me at troy@troywolf.com. ☕️

@khurram-wasim

I used this snippet to upload an mp4 file:

https://gist.github.com/sevastos/5804803

Works like a charm, you just have to tweak it as per your requirements

@ShaharYak

After so many tries, I figured it out!
I created code snippets for you guys to implement wherever you'd like (I really like open source, but when it's a tiny amount of code I prefer not to pull in a third party). Hope it helps 👍
https://www.altostra.com/blog/multipart-uploads-with-s3-presigned-url

@nickwild-999

@ShaharYak Thanks for this, really useful. I need a solution with upload progress though, like .upload. Can this be done?

@TroyWolf

@ShaharYak Thanks for this, really useful. I need a solution with upload progress though, like .upload. Can this be done?

Check out the axios docs here. See the bit about the onUploadProgress handler.
https://github.com/axios/axios/blob/master/examples/upload/index.html

@nickwild-999

Perfect thanks @TroyWolf

@adriangb

adriangb commented Mar 6, 2021

To my knowledge, this is still not resolved.

All of the solutions I have seen consist of generating a pre-signed URL for each part. This requires the client to know the full size of the object, which is especially hard if it's going to be compressed on the fly while uploading.

In my opinion, the best solution would be for S3 to accept the Content-Range and Content-MD5 headers. This would allow, for example, clients to upload data compressed on the fly without it ever hitting the disk. A great example use case is recording, compressing, and uploading high-resolution video without it ever touching the disk (aside from buffering/caching).

A close second would be for S3 to allow generating pre-signed URLs without signing/validating the Content-Range and Content-MD5 headers. That way clients could continually increment the part count.

@VaibhavKhandelwal20

Still unresolved. Need some reliable solution around this.

@TroyWolf

It is possible I'm overlooking something, but reviewing my own code around this, the only thing I use the total file size for is to calculate how big I want each chunk to be. I use the AWS chunk size limits and overall file size limits to make my chunks as small as possible as this improves the perception of pause/resume.

Without knowing the total file size up front, you wouldn't know if your file is going to end up being too big for S3, but that limit is crazy big, so it's probably not a real concern. I handle this by knowing these 2 things:

const MAX_PARTS = 10000
const MAX_CHUNK_SIZE = 1024 * 1024 * 1024 * 5 // 5 GB

So if uploading "on the fly" without knowing the total file size, you'd probably want to default to a relatively large chunk size to decrease the chance you'll hit the 10,000 parts limit.

My client-side code to generate the part's MD5 does not care about the file size:

  async chunkMd5(chunk) {
    const contentBuffer = await this.readFileAsync(chunk);
    return btoa(SparkMD5.ArrayBuffer.hash(contentBuffer, true))
  }
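The readFileAsync helper isn't shown anywhere in this thread; a typical implementation would wrap FileReader in a Promise, something like:

  readFileAsync(chunk) {
    return new Promise((resolve, reject) => {
      const reader = new FileReader();
      reader.onload = () => resolve(reader.result); // ArrayBuffer for SparkMD5.ArrayBuffer
      reader.onerror = () => reject(reader.error);
      reader.readAsArrayBuffer(chunk);
    });
  }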

My API code to create the part's pre-signed URL does not care about file size:

app.post('/url', (req, res, next) => {
	const { key, uploadId, partNumber, md5 } = req.body
	const params = {
		Bucket: BUCKET_NAME,
		Expires: EXPIRATION,
		Key: key,
		UploadId: uploadId,
		PartNumber: partNumber,
		ContentMD5: md5,
	}
	res.json({ url: s3.getSignedUrl('uploadPart', params) });
})

I don't know anything about compressing on the fly, so maybe that is where your real challenge lies. ☕️

@shredding

shredding commented May 31, 2021

I'm not sure if this is new information, but this works with just axios and without the aws sdk:

await axios.put(presignedUrl, video, {
  headers: {
    'Content-Type': 'video/mp4',
  },
  onUploadProgress: progressEvent => console.log(progressEvent.loaded / progressEvent.total)
})

@3deepak

3deepak commented Jun 7, 2021

The original question is still valid; we need a solution. There is no equivalent of TransferManager (for multipart upload in S3) that works with an S3 presigned URL.

@shredding

@3deepak Wdym? My solution is a presigned url ...

@ultratin

Bumping this request up as I need to upload a file which can be upwards of 100 MB to a client's S3 bucket, but I cannot ask them to generate multiple S3 presigned URLs, and I also won't know how big the file is until runtime.

@nickwild-999

nickwild-999 commented Jun 10, 2021

Not sure if this helps anybody, but here is a gist of code I got working for multipart S3 upload with presigned URLs and a progress bar.

Not the prettiest code and needs some pruning but hope this helps.

This was largely guided and helped by others in this thread

https://gist.github.com/nickwild-999/c89cfdc3b9edf5f9a8175381ffd79943

@danielRicaud

Seems like this thread has been a bit quiet lately.

Just bumping for any new development news and to say I'm still in need of this feature.

My use case is uploading a stream to S3 directly from the browser with only a single pre-signed PUT URL for the entire stream and no other credentials.

If anyone has a way forward without using STS credentials or generating a pre-signed URL for each data chunk, that'd be very much appreciated.

@TroyWolf

TroyWolf commented Nov 4, 2021

@danielRicaud, I imagine you'll be waiting a very long time. Fundamentally, a multipart upload makes each chunk a separate request with specific headers that identify the chunk, and that info is also encoded into the presigned URL. So as it stands today, you aren't going to get away from needing a unique URL per chunk. ☕

@danielRicaud

They already have abstractions that keep the user from having to worry about handling multipart logic themselves, like lib-storage, but lib-storage requires an S3 client to be passed to it.

I wish we could instantiate a new S3 client with limited permissions based on the type of pre-signed URL that's passed to it, instead of only allowing S3 clients to be instantiated with STS tokens. For example, if I instantiate an S3 client with a presigned PUT URL, it would only be allowed to upload.

I think that's one way forward to implementing the feature.

@toniopelo

toniopelo commented Mar 10, 2022

I had to implement this myself and I decided to create a package in order to avoid the hassle of doing this again and again.
If this ever helps someone, here are the two packages: @modbox/s3-uploads-server and @modbox/s3-uploads-client

These include, for convenience:

  • Helpers in the backend to use the S3 utils elsewhere if you want
  • A progress manager in the frontend to easily create a progress bar out of this module
  • Key-conflict handling and renaming strategies
  • MIME type and size whitelists
  • Cancellable uploads (still WIP)

@IzaanAnwar

Not sure if this helps anybody, but here is a gist of code I got working for multipart S3 upload with presigned URLs and a progress bar.

Not the prettiest code and needs some pruning but hope this helps.

This was largely guided and helped by others in this thread

https://gist.github.com/nickwild-999/c89cfdc3b9edf5f9a8175381ffd79943

I don't know how it works on your side, but I had done almost the same thing in my Next.js app; the thing is, when I uploaded files larger than 100 MB I was getting an HTTPS timeout error.
I still haven't managed to solve it. Right now I'm trying a streaming approach and will also look into generating all the URLs in one API call, as suggested by @TroyWolf.

@kellertk
Contributor

Hello, we currently don't have any plans to implement this in JS v2, which will reach end of support by September 2024. Please instead track this issue here: aws/aws-sdk-js-v3#3945

@kellertk closed this as not planned on Apr 23, 2024