
oracleobjectstorage: Use ChunkWriter Interfaces #7240

Merged
merged 1 commit into rclone:multithread2 on Aug 23, 2023

Conversation

msays2000
Contributor

@msays2000 msays2000 commented Aug 20, 2023

What is the purpose of this change?

Use the new ChunkWriter interfaces for multipart uploads introduced in #7154

Was the change discussed in an issue or in the forum before?

Yes, in #7154 and in #7056

Checklist

  • I have read the contribution guidelines.
  • I have added tests for all changes in this PR if appropriate.
  • I have added documentation for the changes if appropriate.
  • All commit messages are in house style.
  • I'm done, this Pull Request is ready for review :-)

@msays2000 msays2000 changed the title from Manoj/multithread to oracleobjectstorage: Use ChunkWriter Interfaces on Aug 20, 2023
@msays2000
Contributor Author

@ncw I have used the new chunk writer interfaces as you requested in #7154.
Please review.

Member

@ncw ncw left a comment


Just a couple of quick comments inline.

Can you rebase on the multithread2 branch please?

I had to change the ChunkWriter interface - it needed a ctx parameter on WriteChunk, otherwise we can't cancel pending uploads properly.
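
For reference, a minimal sketch of a chunk writer interface where every call takes a context so pending parts can be cancelled; the method set and exact signatures here are assumptions based on this discussion, not a copy of the interface added in #7154:

package sketch

import (
	"context"
	"io"
)

// ChunkWriter-style interface: every call takes a context so that
// in-flight part uploads can be cancelled.
type ChunkWriter interface {
	// WriteChunk uploads chunk number chunkNumber from reader and
	// returns the number of bytes written.
	WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (bytesWritten int64, err error)
	// Close finalises the upload after all chunks have been written.
	Close(ctx context.Context) error
	// Abort cancels the upload and discards any chunks already written.
	Abort(ctx context.Context) error
}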

I fixed the bug in multithread too but in a different way so you won't need the fs/operations/multithread.go fix.

Once you've rebased I'll give it a more thorough review.

Thank you.

Two inline review threads on backend/oracleobjectstorage/options.go (outdated, resolved)
@msays2000 msays2000 changed the base branch from multithread to multithread2 August 22, 2023 04:19
@msays2000
Contributor Author

@ncw Rebased onto the multithread2 branch. Please review.

Member

@ncw ncw left a comment


This looks very nice :-)

I put 2 minor points inline.

Do the integration tests all pass still?

Thank you :-)

Inline review threads on backend/oracleobjectstorage/multipart.go (outdated, resolved) and backend/oracleobjectstorage/options.go (resolved)
@msays2000
Contributor Author

@ncw Many tests are failing with "hash type not supported". The error comes from StaticObjectInfo.Hash:

// Hash returns the requested hash of the contents
func (i *StaticObjectInfo) Hash(ctx context.Context, h hash.Type) (string, error) {
	if len(i.hashes) == 0 {
		return "", hash.ErrUnsupported // this is the failure that is being returned. 
	}
	if hash, ok := i.hashes[h]; ok {
		return hash, nil
	}
	return "", hash.ErrUnsupported
}
=== RUN   TestRcat/withChecksum=true,ignoreChecksum=true
    run.go:180: Remote "oos:bucket rclone-test-jiyujuj8hujeqov0qiwixav0", Local "Local file system at /var/folders/c2/fxthjc2x34q0ztcr8h557h900000gn/T/rclone3100234066", Modify Window "1ms"
    operations_test.go:1740: 
        	Error Trace:	/Users/mkghosh/Code/rs/rclone/fs/operations/operations_test.go:1740
        	            				/Users/mkghosh/Code/rs/rclone/fs/operations/operations_test.go:1751
        	Error:      	Received unexpected error:
        	            	multipart upload failed to initialise: failed to prepare upload: hash type not supported
        	Test:       	TestRcat/withChecksum=true,ignoreChecksum=true

@msays2000
Contributor Author

Found the problem and fixed it: I had to return ui, nil rather than ui, err in the prepare multipart upload step.
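
To illustrate the kind of fix described, a hedged sketch using hypothetical names (uploadInfo and prepareUpload stand in for the backend's real multipart code; only the error-handling pattern is the point):

package sketch

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/hash"
)

// uploadInfo is a hypothetical container for metadata gathered before the upload.
type uploadInfo struct {
	md5 string
}

// prepareUpload computes optional metadata before a multipart upload starts.
// hash.ErrUnsupported from src.Hash is expected for streamed sources and must
// not be propagated once it has been handled.
func prepareUpload(ctx context.Context, src fs.ObjectInfo) (ui uploadInfo, err error) {
	md5sum, err := src.Hash(ctx, hash.MD5)
	if err != nil {
		// Not fatal: a streamed source simply has no hash to verify against.
		md5sum = ""
	}
	ui.md5 = md5sum
	return ui, nil // the bug was "return ui, err" here, which re-surfaced the already-handled error
}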

@ncw
Member

ncw commented Aug 22, 2023

Let me know when you want me to take another look :-)

@msays2000
Contributor Author

@ncw Please review again. All tests have passed. I have skipped a test case that uploads a stream of 0 length (I guess we are unable to determine its size prior to reading the stream), which results in a multipart upload, and the commit call doesn't accept an empty list of parts. A fix would be to fall back to PutObject for such an empty stream later. This seems OK for now, as I have seen it fail the same way in TestS3, and some other S3 providers list it as an exception too.
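
A possible shape for that fallback, sketched with hypothetical helpers (finishUpload, commitMultipart and putEmptyObject are illustrative, not the backend's actual functions):

package sketch

import "context"

// finishUpload wraps up a chunked upload: if chunking produced no parts
// (a zero-length stream whose size wasn't known up front), fall back to a
// single empty put instead of committing an empty multipart upload.
func finishUpload(ctx context.Context, partsWritten int,
	commitMultipart func(ctx context.Context) error,
	putEmptyObject func(ctx context.Context) error) error {
	if partsWritten == 0 {
		// Committing would fail: the service rejects an empty list of parts.
		return putEmptyObject(ctx)
	}
	return commitMultipart(ctx)
}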

[Screenshot: test run output, 2023-08-22 9:43 AM]

Member

@ncw ncw left a comment


Thank you, this looks great now.

Don't worry about the 0 byte putstream - this will never happen in practice, as rclone buffers streams smaller than --streaming-upload-cutoff in memory and uploads them normally.
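
A rough sketch of that buffering behaviour, assuming "less than the cutoff" means the whole stream fits into a buffer of that size (the function name and signature are illustrative, not rclone's actual code):

package sketch

import "io"

// bufferIfSmall reads up to cutoff+1 bytes from in. If the stream ends within
// the cutoff, the whole content is returned buffered (its size is now known)
// and can be uploaded as an ordinary object; otherwise the caller should
// stream-upload the returned prefix followed by the rest of in.
func bufferIfSmall(in io.Reader, cutoff int64) (prefix []byte, fits bool, err error) {
	buf := make([]byte, cutoff+1)
	n, err := io.ReadFull(in, buf)
	switch err {
	case nil:
		return buf[:n], false, nil // more data than the cutoff remains in 'in'
	case io.EOF, io.ErrUnexpectedEOF:
		return buf[:n], true, nil // the whole stream fit under the cutoff
	default:
		return nil, false, err
	}
}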

I'll merge this onto the multithread2 branch now and I'll merge that branch to master very soon.

Thank you :-)

@ncw ncw merged commit 2da9820 into rclone:multithread2 Aug 23, 2023
10 checks passed
@ncw
Member

ncw commented Aug 24, 2023

I've merged this to master now - thank you :-)
