Last part of multi-upload is ignored #415

Closed
ringge opened this issue Mar 29, 2023 · 20 comments · Fixed by #475 or #492

Comments

@ringge

ringge commented Mar 29, 2023

Hello, it seems the issue of "Last part of multi-upload is ignored" (tus/tusd#462) is not yet fixed in @tus/server and @tus/s3-store. When I integrate @tus/server and @tus/s3-store with Node.js (specifically Next.js), the last part of each upload is ignored, so the uploaded file is incomplete.
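
For reference, a minimal sketch of that kind of integration, assuming the documented @tus/server and @tus/s3-store APIs; the bucket, credentials, host, and port below are placeholders:

import {Server} from '@tus/server'
import {S3Store} from '@tus/s3-store'

// Placeholder configuration: bucket, region, and credentials come from the environment.
const datastore = new S3Store({
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
})

const server = new Server({path: '/files', datastore})

// Standalone server for illustration; in a Next.js app the incoming request
// would instead be forwarded to server.handle(req, res) from an API route
// with the default body parser disabled.
server.listen({host: '127.0.0.1', port: 1080})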

@Murderlon
Member

Hi, I haven't seen this happen. We also have a test for this that runs against all stores:

export const shouldDeclareUploadLength = function () {
  describe('declareUploadLength', () => {
    it('should reject non-existant files', function () {
      return this.datastore.declareUploadLength('doesnt_exist', '10').should.be.rejected()
    })
    it('should update upload_length after declaring upload length', async function () {
      const file = new Upload({
        id: 'declare-length-test',
        offset: 0,
        metadata: {filename: 'world_domination_plan.pdf', is_confidential: null},
      })
      await this.datastore.create(file)
      let upload = await this.datastore.getUpload(file.id)
      assert.equal(upload.size, undefined)
      assert.equal(upload.sizeIsDeferred, true)
      await this.datastore.declareUploadLength(file.id, 10)
      upload = await this.datastore.getUpload(file.id)
      assert.equal(upload.size, 10)
      assert.equal(upload.sizeIsDeferred, false)
    })
  })
}

If you could provide a failing test that would help a lot 🙌
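
For anyone attempting that, a hypothetical sketch of what such a failing test could look like, in the same style as the shared store tests above. The test name, the assumed ~5 MiB part size, and the write()/getUpload() usage are assumptions based on that test setup, not an existing test:

import assert from 'node:assert'
import crypto from 'node:crypto'
import {Readable} from 'node:stream'

// `Upload` is the same model class the shared store tests above already use.
export const shouldNotDropLastPart = function () {
  describe('multi-part upload', () => {
    it('keeps the last part of an upload that spans multiple S3 parts', async function () {
      // Slightly larger than the part size the store under test is assumed to
      // use, so the data splits into one full part plus a small trailing part.
      const partSize = 5 * 1024 * 1024
      const size = partSize + 1024
      const file = new Upload({id: 'last-part-test', size, offset: 0})
      await this.datastore.create(file)

      // write() should resolve with the new offset once every byte, including
      // the trailing part, has been flushed to the store.
      const offset = await this.datastore.write(Readable.from(crypto.randomBytes(size)), file.id, 0)
      assert.equal(offset, size)

      const upload = await this.datastore.getUpload(file.id)
      assert.equal(upload.offset, size) // fails if the last part was ignored
    })
  })
}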

@ringge
Author

ringge commented Mar 29, 2023

If you could provide a failing test that would help a lot 🙌

Hi, thanks for the prompt reply. I will try to provide a failing test then.

@Murderlon
Member

@ringge are you still interested in contributing?

@ringge
Author

ringge commented May 16, 2023

Hi, yes I am. Sorry, I've been caught up; I will provide a sample repo to demonstrate the issue soon.

@yalsayid

I've encountered the same issue. In my production environment, files are mostly uploaded, but the final part is being stored separately, as you can see in the attached screenshots.

(Screenshots from 2023-06-22 at 11:17 AM and 11:13 AM, showing the final part stored as a separate object.)

What seems to have fixed it is setting the chunk part size to 6 MB on both the client and server. It's something I noticed in the Supabase docs, where they write chunkSize: 6 * 1024 * 1024, // NOTE: it must be set to 6MB (for now) do not change it, and since I know they use tus-node-server I figured I'd try it.
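
A minimal sketch of the client-side half of that workaround, assuming tus-js-client; the endpoint is a placeholder:

import * as tus from 'tus-js-client'

const upload = new tus.Upload(file, {
  endpoint: 'https://example.com/files/', // placeholder tus endpoint
  chunkSize: 6 * 1024 * 1024, // 6 MB per PATCH request, mirroring the Supabase note
  metadata: {filename: file.name, filetype: file.type},
  onError: (error) => console.error('upload failed:', error),
  onSuccess: () => console.log('upload finished:', upload.url),
})

upload.start()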

I'm currently working to replicate a failing test while familiarizing myself with the code base, but right now I'm not sure what the issue is.

@Murderlon
Member

I'll look into it soon.

setting the chunk part size to 6 MB on both the client and server

Note that you should never set the chunkSize on the client unless you are forced to due to restrictions in your setup. It's always more efficient to upload a file over a single connection instead of multiple. Setting the client chunk size also doesn't impact how chunks are uploaded to S3.

I'm currently working to replicate a failing test,

Thanks for the effort :)

@Murderlon
Member

I know they use tus-node-server

Also curious how you know this 😯

@ringge
Author

ringge commented Jun 22, 2023

They said in the Supabase docs that they use the tus protocol : ) https://supabase.com/docs/guides/storage/uploads#resumable-upload

@Murderlon
Member

I saw that! But that could be one of dozens of tus server implementations; I was curious if I missed something specifically about tus-node-server.

@yalsayid

yalsayid commented Jun 22, 2023

They call out tus-node-server in their announcement for resumable uploads. It appears that they plan to contribute some improvements, which would be awesome!

@yalsayid

Note that you should never set the chunkSize on the client unless you are forced to due to restrictions in your setup. It's always more efficient to upload a file over a single connection instead of multiple. Setting the client chunk size also doesn't impact how chunks are uploaded to S3.

Thanks for the heads-up, so maybe it was just changing the part size in the S3 store config that helped.

@fenos
Collaborator

fenos commented Jul 25, 2023

I came across this issue as well and thought the cause could have been #444, which has since been addressed.
If any of you still encounter this issue after upgrading to the latest version, please let us know in this thread.

@yalsayid

yalsayid commented Jul 25, 2023

Yeah, I am continuing to experience this issue, even with the latest version. I've tried debugging the problem but unfortunately came up empty-handed. However, I've found a temporary solution: adjusting the chunkSize in tus-js-client to 6 MB fixes the issue. Although @Murderlon previously recommended against it, it definitely works for me, but I can't explain why.

I've also attempted to replicate the issue through a failing test but haven't been able to get consistent results. If I modify the test file size in the S3 data store test, it occasionally triggers an error. Here's what I've been using: this.testFileSize = 20_849_384; this.testFileName = 'test.stl'.

@Murderlon
Member

Alright, thanks for reporting back.

@elliotdickison

elliotdickison commented Aug 31, 2023

@Murderlon I patched in #475 but unfortunately it didn't solve the issue for me: I'm still getting corrupted uploads that are missing bytes from the end and have a .part file hanging around. Setting partSize: 6 * 1024 * 1024 seems to fix the issue. I'm using tus & s3-store 1.0.0-beta.7 and uploading to Cloudflare R2. Here's a sample file that triggers the issue 100% of the time for me: https://samplelib.com/lib/preview/mp4/sample-15s.mp4
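
For context, a rough sketch of that kind of store configuration, assuming the current @tus/s3-store options; the R2 account ID, bucket, and credentials are placeholders:

import {S3Store} from '@tus/s3-store'

// Workaround described above: force ~6 MiB parts on the server side.
// R2 is reached through its S3-compatible endpoint with region set to 'auto'.
const datastore = new S3Store({
  partSize: 6 * 1024 * 1024,
  s3ClientConfig: {
    bucket: process.env.R2_BUCKET,
    region: 'auto',
    endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
    credentials: {
      accessKeyId: process.env.R2_ACCESS_KEY_ID,
      secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
    },
  },
})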

@Murderlon
Member

Thanks for immediately testing this yourself! I'll take a look next week and update the PR 👍

@Murderlon
Member

Found the issue and incorporated it in the PR.

@elliotdickison

Excellent, thank you so much!

@mitjap
Collaborator

mitjap commented Oct 16, 2023

This issue is still present with the latest fix. The problem is that all logic regarding the current part number, the offset, and determining whether this is the final part must be synchronous. Failing to do so can cause parts to be reordered or not be properly detected as final (the more common case).
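
To illustrate the kind of race meant here, a simplified sketch (not the actual S3Store code; uploadPart stands in for the real network call):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

// Stand-in for the real call that sends one part to S3.
async function uploadPart(partNumber, isFinalPart) {
  await sleep(Math.random() * 10)
  console.log(`uploaded part ${partNumber}${isFinalPart ? ' (final)' : ''}`)
}

let nextPartNumber = 1
let bytesAssigned = 0

// Racy: state is read before an await and the part number is taken after it,
// so two chunks handled concurrently can swap part numbers or both decide
// they are not the final part.
async function handleChunkRacy(chunk, uploadSize) {
  const offsetBefore = bytesAssigned
  await sleep(Math.random() * 10) // another chunk may run here
  const partNumber = nextPartNumber++ // may no longer match arrival order
  const isFinalPart = offsetBefore + chunk.length === uploadSize // stale offset
  bytesAssigned += chunk.length
  return uploadPart(partNumber, isFinalPart)
}

// Safe: reserve the part number and decide "final or not" synchronously,
// before yielding to the event loop; only the part upload itself is async.
function handleChunkSafe(chunk, uploadSize) {
  const partNumber = nextPartNumber++
  const isFinalPart = bytesAssigned + chunk.length === uploadSize
  bytesAssigned += chunk.length
  return uploadPart(partNumber, isFinalPart)
}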

@elliotdickison

@mitjap this seems to have resolved a bunch of upload problems our users were having - thank you!
