Combine parts #1544

Closed
akram620 opened this issue Mar 29, 2024 · 6 comments

Comments

@akram620

Hello everyone! I am using MinIO in my project, and now I want to upload big files with resumable uploading. For example, if my file is 100 MB, I want to upload it as 20 parts of 5 MB each and, after the last part is uploaded, combine them into one file. Please help me with this: how can I upload a big file in smaller parts and then combine them?
Is this the right approach for saving large files? Am I using the right method for it?

I know this repository is for minio-java, but I am using Go.
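The chunking arithmetic described above can be sketched as follows. This is a minimal illustration, not from the thread; `planParts` is a hypothetical helper, and sizes are computed in MiB (1024*1024) since S3/MinIO multipart minimums are MiB-based:

```go
package main

import "fmt"

// planParts returns the byte ranges {offset, length} for splitting a file
// of totalSize bytes into parts of at most partSize bytes each.
func planParts(totalSize, partSize int64) [][2]int64 {
	var parts [][2]int64
	for off := int64(0); off < totalSize; off += partSize {
		length := partSize
		if off+length > totalSize {
			length = totalSize - off // the final part may be smaller
		}
		parts = append(parts, [2]int64{off, length})
	}
	return parts
}

func main() {
	// A 100 MiB file split into 5 MiB parts yields exactly 20 parts.
	parts := planParts(100*1024*1024, 5*1024*1024)
	fmt.Println(len(parts)) // prints 20
}
```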

@balamurugana
Member

putObject() from MinioAsyncClient performs automatic multipart upload, but it cannot resume. Since you want to resume a multipart upload, you would need to use the lower-level S3 APIs by extending the S3Base class. The following APIs are available
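The low-level resume flow amounts to: create a multipart upload, upload parts, and on restart ask the server which parts it already has, skip those, and upload only the rest before completing. In minio-go (which the poster uses), the `minio.Core` type exposes the corresponding low-level calls (`NewMultipartUpload`, `PutObjectPart`, `ListObjectParts`, `CompleteMultipartUpload`). The interface and in-memory store below are hypothetical stand-ins so the sketch runs without a server:

```go
package main

import "fmt"

// multipartStore abstracts the low-level multipart calls; with minio-go
// these would be methods on minio.Core. Names here are illustrative.
type multipartStore interface {
	UploadedParts(uploadID string) map[int]bool // parts already on the server
	PutPart(uploadID string, partNum int, data []byte) error
	Complete(uploadID string, partNums []int) error
}

// resumeUpload uploads only the parts the server does not already have,
// then completes the upload. chunks[i] is part number i+1.
func resumeUpload(s multipartStore, uploadID string, chunks [][]byte) error {
	done := s.UploadedParts(uploadID)
	var all []int
	for i, c := range chunks {
		partNum := i + 1
		all = append(all, partNum)
		if done[partNum] {
			continue // uploaded before the disconnect; skip
		}
		if err := s.PutPart(uploadID, partNum, c); err != nil {
			return err
		}
	}
	return s.Complete(uploadID, all)
}

// memStore is an in-memory fake used to exercise the resume logic.
type memStore struct {
	parts    map[int][]byte
	puts     int
	complete bool
}

func (m *memStore) UploadedParts(string) map[int]bool {
	out := map[int]bool{}
	for n := range m.parts {
		out[n] = true
	}
	return out
}

func (m *memStore) PutPart(_ string, n int, data []byte) error {
	m.puts++
	m.parts[n] = data
	return nil
}

func (m *memStore) Complete(_ string, _ []int) error {
	m.complete = true
	return nil
}

func main() {
	// Simulate a restart where parts 1 and 2 already reached the server.
	s := &memStore{parts: map[int][]byte{1: {0}, 2: {0}}}
	_ = resumeUpload(s, "upload-1", [][]byte{{0}, {0}, {0}, {0}})
	fmt.Println(s.puts, s.complete) // prints 2 true: only parts 3 and 4 are re-sent
}
```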

@akram620
Author

akram620 commented Mar 29, 2024

@balamurugana I've looked at #892. What if I use that method instead? How is the performance with that approach?

@balamurugana
Member

@akram620 composeObject() is meant to combine existing objects into a bigger object. There is no performance issue. The limitation is that every part object except the last must be at least 5 MiB in size.
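That size constraint can be checked up front before calling composeObject. A sketch (the `validComposeParts` helper is hypothetical; note that 5 MiB = 5*1024*1024 bytes, which is larger than decimal "5 MB" = 5,000,000 bytes):

```go
package main

import "fmt"

// minPartSize is the compose/multipart minimum: 5 MiB, not decimal 5 MB.
const minPartSize = 5 * 1024 * 1024

// validComposeParts reports whether every part except the last meets the
// 5 MiB minimum, mirroring the composeObject constraint described above.
func validComposeParts(sizes []int64) bool {
	if len(sizes) == 0 {
		return false
	}
	for _, s := range sizes[:len(sizes)-1] {
		if s < minPartSize {
			return false
		}
	}
	return true
}

func main() {
	// Decimal 5 MB parts fall short of the 5 MiB minimum.
	fmt.Println(validComposeParts([]int64{5_000_000, 5_000_000, 1_000_000})) // prints false
	// 5 MiB parts (except the last) are acceptable.
	fmt.Println(validComposeParts([]int64{5 * 1024 * 1024, 1_000_000})) // prints true
}
```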

@akram620
Author

@balamurugana
Thanks. For my case, which approach should I use for resumable uploading? For example, if my file is 1 GB and the client's internet disconnects during the upload, the whole file would have to be uploaded again. That is why I want to upload the file in 5 MB chunks. For exactly this scenario, which method do you recommend?

@balamurugana
Member

Both methods work! You would need to choose according to your setup.

@akram620
Author

akram620 commented Apr 2, 2024

@balamurugana Hi
Sorry, can you help me?
I read my file as bytes and upload it in pieces. For example, for a 20 MB file, I first upload 5 MB of bytes, then the next 5 MB, and so on. However, when I try to combine the parts, I get this error: 'The specified copy source is not supported as a byte-range copy source.'

Here is my code:

// UploadChatFileV2 stores one chunk as its own object; when the final
// part arrives, it composes all part objects into the finished file.
func (r *MediaRepo) UploadChatFileV2(ctx context.Context, profileID, chatID int64, fileID, fileExt, bucket string, chunk []byte, totalPart, partNumber int) (string, error) {
	// Each part is written as "<chatID>/<profileID>/<fileID>_part_<n>".
	objectName := fmt.Sprintf("%d/%d/%s_part_%d", chatID, profileID, fileID, partNumber)
	chunkReader := bytes.NewReader(chunk)

	res, err := r.db.PutObject(ctx, bucket, objectName, chunkReader, int64(len(chunk)), minio.PutObjectOptions{})
	if err != nil {
		r.log.Error(err)
		return "", err
	}

	// The last part triggers the merge of all part objects.
	if partNumber == totalPart {
		r.log.Debug("Merging parts")

		dst := minio.CopyDestOptions{
			Bucket: bucket,
			Object: fmt.Sprintf("%d/%d/%s", chatID, profileID, fileID+"."+fileExt),
		}

		srcs := make([]minio.CopySrcOptions, totalPart)
		for i := 1; i <= totalPart; i++ {
			srcs[i-1] = minio.CopySrcOptions{
				Bucket: bucket,
				Object: fmt.Sprintf("%d/%d/%s_part_%d", chatID, profileID, fileID, i),
			}
		}

		// ComposeObject requires every source object except the last
		// to be at least 5 MiB in size.
		info, err := r.db.ComposeObject(ctx, dst, srcs...)
		if err != nil {
			return "", err
		}

		r.log.Debug("Success merged", info.Key)
		return info.Key, nil
	}

	return res.Key, nil
}
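One possible cause of that error (an assumption on my part, not confirmed in this thread): if the "5MB" chunks are computed in decimal bytes, each part object falls short of the 5 MiB minimum mentioned earlier, and ComposeObject then rejects the sources. The arithmetic:

```go
package main

import "fmt"

func main() {
	const (
		decimalMB = 5 * 1000 * 1000 // "5MB" as commonly computed
		mib5      = 5 * 1024 * 1024 // the compose minimum part size
	)
	fmt.Println(decimalMB < mib5) // prints true: decimal 5 MB parts are too small
	fmt.Println(mib5 - decimalMB) // prints 242880: the shortfall in bytes
}
```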
