
@tus/s3-store: gracefully handle zero byte uploads#824

Merged
Murderlon merged 3 commits into tus:main from itslenny:fix/handle-s3-upload-with-zero-bytes
Apr 21, 2026

Conversation

itslenny (Contributor) commented Apr 20, 2026

This PR resolves an issue with zero-byte uploads to S3 or Minio.

Error message

  • S3: MalformedXML: The XML you provided was not well-formed or did not validate against our published schema
  • Minio: InvalidRequest: You must specify at least one part

Reproduction

Any upload with zero bytes reproduces this issue. Here is a simple repro using tus-js-client:

import { Readable } from 'node:stream'
import * as tus from 'tus-js-client'

// `endpoint` is assumed to point at a running tus server
const upload = new tus.Upload(Readable.from(Buffer.alloc(0)), {
  endpoint,
  chunkSize: 1024,
  uploadSize: 0,
  retryDelays: [],
  uploadDataDuringCreation: true,
  metadata: {
    bucketName: 'my-bucket',
    objectName: 'some/object/path.ext',
  },
  onError: (error) => {
    console.error('onError', error)
  },
  onSuccess: () => {
    console.log('onSuccess')
  },
})

upload.start()

Additional context

Including the StreamLimiter transform, as is done here in BaseHandler, causes zero-byte chunks to never be emitted. As a result, chunkFinished is never called and no part is ever created. This produces the errors described above, because you cannot complete a multipart upload that has no parts.

The test suite already has a test for zero-byte uploads, but it doesn't use StreamLimiter, so it succeeds. I've added a mostly identical test that includes StreamLimiter, which consistently reproduces the issue.

As a fix, I simply check whether there are zero parts and add an empty part before finalizing the upload.
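That check can be sketched as a small helper. Names here are illustrative, not the store's actual internals, and `uploadEmptyPart` is a hypothetical stand-in for the store's uploadPart call:

```typescript
type Part = { ETag: string; PartNumber: number }

// Sketch of the fix: S3 refuses to complete a multipart upload with no
// parts, but it does allow the last part to be zero bytes. So if nothing
// was uploaded, push one empty part before calling CompleteMultipartUpload.
async function ensureAtLeastOnePart(
  parts: Part[],
  uploadEmptyPart: (partNumber: number, body: Buffer) => Promise<string>, // resolves to the ETag
): Promise<Part[]> {
  if (parts.length > 0) return parts
  const etag = await uploadEmptyPart(1, Buffer.alloc(0))
  return [{ ETag: etag, PartNumber: 1 }]
}
```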

This PR also adds the AWS_ENDPOINT environment variable to the tests to allow testing locally against Minio.

changeset-bot (Bot) commented Apr 20, 2026

🦋 Changeset detected

Latest commit: fc1b7df

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:
  • @tus/s3-store (patch)


coderabbitai (Bot) commented Apr 20, 2026

Walkthrough

Uploads a single empty S3 multipart part when no parts exist during multipart completion, accepts Buffer for part uploads to allow zero-byte parts, updates tests to use a pipeline with StreamLimiter and an S3 endpoint, and adds a changeset for a patch release.

Changes

  • S3 Store Implementation (packages/s3-store/src/index.ts): uploadPart now accepts a Buffer alongside streams; when parts.length === 0, the code uploads an empty part (PartNumber: 1, Body: Buffer.alloc(0)), appends its {ETag, PartNumber: 1} to parts, then calls completeMultipartUpload. Minor whitespace tweak in an error path.
  • Test Suite (packages/s3-store/src/test/index.ts): the S3 client is configured with endpoint: process.env.AWS_ENDPOINT; the test write flow now uses stream.pipeline (node:stream/promises) with StreamLimiter from @tus/utils; the zero-byte upload test asserts the stored object's ContentLength === 0.
  • Release Metadata (.changeset/public-beds-sing.md): adds a changeset for a @tus/s3-store patch covering zero-byte upload handling.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • Murderlon
🚥 Pre-merge checks: 5 passed
  • Title check: Passed. The title directly and clearly describes the main change: handling zero-byte uploads gracefully in the S3 store package.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.
  • Linked Issues check: Passed. Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: Passed. Skipped because no linked issues were found for this pull request.
  • Description check: Passed. The description clearly explains the issue (zero-byte uploads failing), provides error messages, reproduction steps, root-cause analysis, and describes the implemented solution.



coderabbitai (Bot) left a comment

🧹 Nitpick comments (2)
packages/s3-store/src/test/index.ts (1)

283-323: Test accurately reproduces the StreamLimiter zero-byte scenario.

Good coverage — pipelining Readable.from(Buffer.alloc(0)) through StreamLimiter before store.write is the exact flow that previously failed, and the ContentLength === 0 check on the resulting S3 object is a solid end-to-end assertion. Overlaps meaningfully with the existing should successfully upload a zero byte file test only in intent; the StreamLimiter path is what triggers the original bug, so keeping both is justified.

Minor nit (optional): the metadata: { contentType, cacheControl } on the Upload isn't asserted against the stored object (e.g., headResult.ContentType/CacheControl). If the intent was to also verify those are preserved through the zero-byte completion path, consider adding assertions; otherwise the metadata block can be dropped to keep the test minimal.

packages/s3-store/src/index.ts (1)

464-500: Zero-byte multipart handling looks correct.

Uploading an empty part with PartNumber: 1 before completeMultipartUpload is a valid workaround — S3 allows the last (and here only) part to be 0 bytes, and the SDK accepts Buffer.alloc(0) as a Body. The call site in write guards this path by only invoking finishMultipartUpload when newOffset === metadata.file.size, so an accidental empty completion on a non-zero upload isn't possible.

One optional alternative worth considering: rather than uploading an empty part and completing the multipart, you could abortMultipartUpload and putObject with an empty body + the original ContentType/CacheControl/metadata. That avoids the (harmless but slightly odd) artifact of a "multipart" object that was never really multipart, and keeps the completion path conceptually for actual data. Not a blocker — current approach is simpler and works.



📥 Commits

Reviewing files that changed from the base of the PR and between 8906f14 and a4ecb04.

📒 Files selected for processing (2)
  • packages/s3-store/src/index.ts
  • packages/s3-store/src/test/index.ts

@itslenny force-pushed the fix/handle-s3-upload-with-zero-bytes branch from a4ecb04 to cc8c7d1 on April 20, 2026 at 22:31
@itslenny temporarily deployed to external-testing on April 20, 2026 at 22:31 with GitHub Actions (Inactive)
coderabbitai (Bot) left a comment

🧹 Nitpick comments (1)
packages/s3-store/src/index.ts (1)

464-479: Fix targets the symptom rather than the root cause.

Unconditionally uploading an empty part at finalize time works, but it adds an extra S3 round-trip to every completed upload whose final retrieveParts result happens to be empty, not only truly zero-byte uploads. More importantly, the underlying issue is that uploadParts never emits a chunkFinished event for an empty input stream (because StreamSplitter/StreamLimiter don't produce a zero-byte chunk), so bytesUploaded returns 0 and no part is ever uploaded during write. Consider gating this on the known zero-byte case to avoid the extra request on non-empty uploads, e.g.:

Proposed narrower guard
-    if (parts.length === 0) {
+    if (parts.length === 0 && metadata.file.size === 0) {
       const uploadResult = await this.client.uploadPart({

Alternatively, fix it upstream in uploadParts so an empty input stream still produces one (final) zero-byte part via uploadPart, keeping finishMultipartUpload agnostic to this edge case.



📥 Commits

Reviewing files that changed from the base of the PR and between a4ecb04 and cc8c7d1.

📒 Files selected for processing (3)
  • .changeset/public-beds-sing.md
  • packages/s3-store/src/index.ts
  • packages/s3-store/src/test/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/s3-store/src/test/index.ts

Comment thread on packages/s3-store/src/index.ts (outdated):
// Handle zero-byte uploads - S3 requires at least one part to complete a multipart upload
// S3 allows the last part to be 0 bytes, so we upload a single empty part
if (parts.length === 0) {
const uploadResult = await this.client.uploadPart({
Collaborator:

Can't you reuse the uploadPart method?

Contributor (Author):

Yeah, the only issue is that when I try to pass Readable.from(Buffer.alloc(0)) I get the error TypeError [ERR_HTTP_INVALID_HEADER_VALUE]: Invalid value "undefined" for header "x-amz-decoded-content-length"

I updated uploadPart to allow passing a Buffer directly and that works correctly.
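That widened uploadPart can be sketched roughly as follows. Types and names are simplified: S3Like is a hypothetical stand-in for the AWS SDK v3 client wrapper, not the store's real interface, and the real method lives on the class rather than as a free function.

```typescript
import type { Readable } from 'node:stream'

// Hypothetical stand-in for the AWS SDK v3 S3 client wrapper; the real
// store calls this.client.uploadPart with the same argument shape.
type S3Like = {
  uploadPart(args: {
    Bucket: string
    Key: string
    UploadId: string
    PartNumber: number
    Body: Readable | Buffer
  }): Promise<{ ETag?: string }>
}

// Sketch of the widened uploadPart: accepting a Buffer lets a zero-byte
// part be sent with a known length, avoiding the undefined
// x-amz-decoded-content-length header seen with an empty Readable.
async function uploadPart(
  client: S3Like,
  bucket: string,
  key: string,
  uploadId: string,
  body: Readable | Buffer,
  partNumber: number,
): Promise<string> {
  const data = await client.uploadPart({
    Bucket: bucket,
    Key: key,
    UploadId: uploadId,
    PartNumber: partNumber,
    Body: body,
  })
  return data.ETag as string
}
```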

@itslenny itslenny temporarily deployed to external-testing April 21, 2026 13:56 — with GitHub Actions Inactive
coderabbitai (Bot) left a comment

🧹 Nitpick comments (1)
packages/s3-store/src/index.ts (1)

232-246: Consider narrowing the Body type rather than widening readStream.

Widening the parameter to fs.ReadStream | Readable | Buffer works, but the name readStream is now slightly misleading for the Buffer case, and fs.ReadStream is already a subtype of Readable so the union is a bit redundant. Optional cleanup:

Proposed rename/simplification
   protected async uploadPart(
     metadata: MetadataValue,
-    readStream: fs.ReadStream | Readable | Buffer,
+    body: Readable | Buffer,
     partNumber: number
   ): Promise<string> {
     const data = await this.client.uploadPart({
       Bucket: this.bucket,
       Key: metadata.file.id,
       UploadId: metadata['upload-id'],
       PartNumber: partNumber,
-      Body: readStream,
+      Body: body,
     })

Purely a readability nit — feel free to ignore.



📥 Commits

Reviewing files that changed from the base of the PR and between cc8c7d1 and ea365f7.

📒 Files selected for processing (1)
  • packages/s3-store/src/index.ts

@itslenny itslenny requested a review from Murderlon April 21, 2026 14:00
Murderlon (Collaborator) left a comment

Thanks for the PR, one more thing

Comment thread on packages/s3-store/src/test/index.ts (outdated):

it('should upload an empty part when completing zero byte multipart upload', async function () {
Collaborator:

Since the existing test didn't work as expected, can you just add StreamLimiter to that test and remove this new one?

Contributor (Author):

Done. I also ran the test with / without my fix to ensure it still fails as expected.

@itslenny itslenny deployed to external-testing April 21, 2026 14:44 — with GitHub Actions Active
@itslenny itslenny requested a review from Murderlon April 21, 2026 14:44
Murderlon (Collaborator) left a comment

Thanks!

@Murderlon Murderlon merged commit 0f3ec0c into tus:main Apr 21, 2026
5 checks passed