
Uppy S3 + Golden Retriever Service Worker #4985

Closed
mok419 opened this issue Mar 11, 2024 · 11 comments

mok419 commented Mar 11, 2024

Initial checklist

  • I understand this is a bug report and questions should be posted in the Community Forum
  • I searched issues and couldn’t find anything (or linked relevant results below)

Link to runnable example

https://github.com/mok419/uppy-next-S3-goldenRetriever/blob/main/.env.example

Steps to reproduce

Upload a large file (>100 MB) and refresh the page at around 20%.
The Dashboard will show that a ghost file was found; re-add the file and click upload.

Expected behavior

After the refresh, the upload should continue from 20%.

Actual behavior

The UI shows 100% completed and a 0-byte file is uploaded to S3.

@mok419 mok419 added the Bug label Mar 11, 2024

mok419 commented Mar 11, 2024

Apologies, I didn't see #4269 (comment).

Any plans to implement S3 support? It would be helpful if the docs were clearer on which uploaders Golden Retriever is actually intended for.

From what I just read, tus natively supports resumable uploads, so if we were using tus, would Golden Retriever even be needed?

@Murderlon
Member

This has been fixed: #4526

When using tus, you may not need Golden Retriever. The difference is that after a browser refresh the user has to select the file again with tus, whereas if the file fits in storage, it will be automatically loaded by Golden Retriever.
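A minimal sketch of wiring the two plugins together; the tus endpoint URL is a placeholder, and the rest follows the standard Uppy plugin setup:

```javascript
import Uppy from '@uppy/core'
import Tus from '@uppy/tus'
import GoldenRetriever from '@uppy/golden-retriever'

const uppy = new Uppy()
  // tus makes the transfer itself resumable at the protocol level
  .use(Tus, { endpoint: 'https://tusd.example.com/files/' }) // placeholder URL
  // Golden Retriever restores Uppy state (and file blobs that fit in
  // browser storage) after a page refresh, so the user may not need to
  // re-select the file at all
  .use(GoldenRetriever)
```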

@Murderlon
Member

Check out this option; it will likely fix your issue: https://uppy.io/docs/aws-s3-multipart/#shouldusemultipartfile
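For reference, that option takes a predicate over the file. A sketch with a hypothetical 100 MiB cutoff; only the option name comes from the docs linked above, the threshold and wiring are assumptions:

```javascript
// Hypothetical cutoff: multipart only pays off for large files.
const MULTIPART_THRESHOLD = 100 * 1024 * 1024 // 100 MiB

function shouldUseMultipart(file) {
  // file.size is in bytes on Uppy file objects
  return file.size > MULTIPART_THRESHOLD
}

// Wired into the plugin roughly as:
//   uppy.use(AwsS3Multipart, { shouldUseMultipart, /* ...endpoint options */ })
```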


mok419 commented Mar 11, 2024

Thanks for the speedy response, appreciate the help!

My example wasn't great: my project uses S3 multipart, and I tried integrating Golden Retriever last night. I will update the repo to test it with the Dashboard shortly!

I would like to achieve resumable uploads with S3 multipart, and looking at #2121 it seems there has been some progress.

I was looking at the source code of the multipart uploader to see why it may not be working in my project, and this line (201) of code stood out to me:

```js
} else if (this.#isRestoring) {
  this.options.companionComm.restoreUploadFile(this.#file, {
    uploadId: this.options.uploadId,
    key: this.options.key,
  })
  this.#resumeUpload()
}
```

Is companion required to support resumable S3 multipart uploads?

As users may be uploading large files, I do not want the files themselves to be stored by GR. What I would prefer is that metadata about successfully uploaded chunks is stored instead, so if the same file is added again, we can pick up from where we left off.


mok419 commented Mar 11, 2024

Ok I just updated the example code to include multipart and golden retriever, made into a hook like in my project.

The issue persists, and something is going wrong even when I try to pause and resume an upload. I think Next.js is bundling stuff funny.


You can test the example with 20 MB files, as I set the chunk size to 5 MB.

There are two scenarios when we refresh the page 50% through an upload:

  1. The ghost file appears; if we try to continue, there is a crash. I don't think this is from Uppy, though; I think it's Next 14 "webpack"/turbopack.

  2. We try a fresh upload with the same file, and the upload starts from the beginning!

I have put sw.js in my Next.js public folder, so this should bypass webpack bundling. I have also done a dynamic import of Golden Retriever, so it should be available as required by the sw.js client?
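For anyone comparing setups: when the service worker variant is used, Golden Retriever expects sw.js to be registered explicitly. A sketch along the lines of the Uppy docs; the path assumes the file is served from the site root (e.g. the Next.js public/ folder):

```javascript
import GoldenRetriever from '@uppy/golden-retriever'

uppy.use(GoldenRetriever, { serviceWorker: true })

// sw.js must be reachable at the root so its scope covers the whole app
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')
    .then((registration) => {
      console.log('ServiceWorker registered with scope:', registration.scope)
    })
    .catch((error) => {
      console.error('ServiceWorker registration failed:', error)
    })
}
```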

@Murderlon
Member

I'm not sure this is the place for Next.js-specific issues. But if you have a reproducible example on CodeSandbox/StackBlitz, we may be able to help.


mok419 commented Mar 11, 2024

Of course. I just wanted to know whether resumable multipart S3 uploads are supported before I bother tackling the Next.js-specific issues!

By this I mean: will Uppy Golden Retriever store the successfully uploaded chunks for me, rather than the file itself?

@Murderlon
Member

@uppy/golden-retriever and @uppy/aws-s3(-multipart) can work together, yes.

> By this I mean: will Uppy Golden Retriever store the successfully uploaded chunks for me, rather than the file itself?

It stores the files; it's up to the uploader to decide what needs to be uploaded.


mok419 commented Mar 11, 2024

Ok I made some good progress!

I ran a React & Express server with Golden Retriever, without the service worker. For me not to store the files on the client, this is necessary.
This time the ghost file appeared and then I had a listParts problem, so when using Golden Retriever to re-upload a previously uploaded file, listParts is called; this is good!

When I fixed this, I got back to the earlier error (screenshot omitted).

This error comes from here, and because my companion is undefined, everything that follows is undefined. So a Companion is required to support resumable uploads!

```js
function _resumeUpload2() {
  this.options.companionComm
    .resumeUploadFile(
      _classPrivateFieldLooseBase(this, _file)[_file],
      _classPrivateFieldLooseBase(this, _chunks)[_chunks],
      _classPrivateFieldLooseBase(this, _abortController)[_abortController].signal,
    )
    .then(
      _classPrivateFieldLooseBase(this, _onSuccess)[_onSuccess],
      _classPrivateFieldLooseBase(this, _onReject)[_onReject],
    )
}
```

Will try with a Companion next, haha! Thanks. So I don't think it's a Next.js issue but a Companion problem! Will learn more about this!

@mok419 mok419 closed this as completed Mar 11, 2024

mok419 commented Mar 11, 2024

I would just like to add that, for me to resume a multipart S3 upload, I don't believe a fully fledged Companion server should be a necessity!
Surely it should be possible to continue an upload by storing client-side which chunks were successfully sent? We could call listParts to verify that it is a valid upload!
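The bookkeeping being asked for here is small. A sketch (a hypothetical helper, not Uppy API) of deciding which parts still need uploading from a listParts response, given that S3 part numbers are 1-based:

```javascript
// Given the parts S3 reports as already uploaded and the total number of
// chunks the file was split into, return the 1-based part numbers that
// still need to be sent.
function partsToResume(listedParts, chunkCount) {
  const done = new Set(listedParts.map((part) => part.PartNumber))
  const remaining = []
  for (let partNumber = 1; partNumber <= chunkCount; partNumber++) {
    if (!done.has(partNumber)) remaining.push(partNumber)
  }
  return remaining
}
```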

Many thanks to the Uppy team regardless! I appreciate what you have built!!


mok419 commented Mar 11, 2024

OK, I figured out the issue: resumable uploads are fully supported without Companion. Apologies for my false claims! Even more amazingly, it is supported exactly how I wanted: the successfully uploaded chunks are stored locally within a file descriptor object, and listParts is called to compare!

The issue was on my end: my listParts endpoint was returning an object containing the array. I went so far as to develop Uppy locally, haha; this was where I saw that, yes, the companion code from above was being run, but it was not the source of my error.

After two companion code blocks, it goes on to the listParts call I mentioned earlier, which expects an array for the .find method to work, and hence this was the failure point. I will update the example code so it can be a good starting point for people in the future!
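In case it helps others hitting the same wall: the client calls .find() on the listParts response body, so the endpoint has to return the bare array. A sketch of shaping an AWS SDK ListParts result (the helper name is mine, not Uppy's):

```javascript
// Uppy's client does parts.find(...), so the JSON response body must be
// the array itself, not an object like { parts: [...] }.
function toUppyListPartsBody(awsResult) {
  return awsResult.Parts ?? []
}

// In the Express handler, roughly:
//   const result = await s3.send(new ListPartsCommand({ Bucket, Key, UploadId }))
//   res.json(toUppyListPartsBody(result)) // NOT res.json({ parts: result.Parts })
```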

Thanks for your patience and apologies for any confusion! 🙏
