feat: add ability to use s3 storage #45

Merged · merged 2 commits into yajra:master on Jan 18, 2023
Conversation

@uchajk (Contributor) commented on Jan 17, 2023

OpenSpout cannot open an xlsx writer directly on S3, so we have to write the file locally first. If s3_disk is set (see the config file change), the generated file is uploaded to the S3 bucket once the xlsx has been written. When downloading, we again check whether s3_disk is set and fetch the file from the matching disk.

Sometimes the queue process and the web process do not share storage and have no access to each other's local filesystem, so I believe this feature can come in handy for others as well.
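For context, a minimal sketch of the upload step described above, assuming a Laravel app where the exporter first writes the xlsx to a local disk; the config key, disk names, and path are illustrative, not necessarily the ones used in this PR:

use Illuminate\Support\Facades\Storage;

// Hypothetical example: once OpenSpout has finished writing the xlsx locally,
// copy it to the configured S3 disk (names here are assumptions for illustration).
$s3Disk = config('datatables-export.s3_disk'); // e.g. 's3'; empty means keep the file local

if (! empty($s3Disk)) {
    Storage::disk($s3Disk)->put(
        'exports/'.$filename,
        Storage::disk('local')->get('exports/'.$filename)
    );
}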

@sonarcloud (bot) commented on Jan 17, 2023

Kudos, SonarCloud Quality Gate passed!

Bug: A (0 Bugs)
Vulnerability: A (0 Vulnerabilities)
Security Hotspot: A (0 Security Hotspots)
Code Smell: A (0 Code Smells)

No Coverage information
No Duplication information

@challgren (Sponsor) commented:
Also related #16

@yajra (Owner) commented on Jan 18, 2023

Unfortunately, I don't have an S3 resource to test this with, but based on my review and a previous project where I integrated an export with S3, the code looks correct.

I also used a similar approach: I didn't stream the file, and I had to zip the export since it included images.

Storage::disk('s3')->put('exports/'.$zipFileName, Storage::disk('tmp')->get($zipFileName));

@challgren do you approve of this PR?

@yajra self-requested a review on January 18, 2023 01:38
@challgren (Sponsor) commented:
We tested it locally and it works. We could improve it by adding tests, but that's minor.

@yajra changed the title from "use s3 storage" to "feat: add ability to use s3 storage" on Jan 18, 2023
@yajra merged commit 263d990 into yajra:master on Jan 18, 2023
@yajra (Owner) commented on Jan 18, 2023

Released on v1.1.0 🚀 Thanks!

@challgren (Sponsor) commented:
@yajra and @uchajk, one thing I forgot to mention: we should fix the cleanup command too, since it still references the old method: https://github.com/yajra/laravel-datatables-export/blob/master/src/Commands/DataTablesPurgeExportCommand.php#L34

@yajra (Owner) commented on Jan 18, 2023

Yeah, I had that issue before, and what I did was use Livewire's temporary upload folder on S3, since it can be configured to purge automatically. Maybe we can use the same concept here, but I'm not sure how to do it yet.

@challgren (Sponsor) commented:
Well, since Rackspace and DigitalOcean offer S3-compatible file storage, we may want to stick with the command. Users could always configure retention/purging on the bucket themselves if they want, but I would say the purge command should also purge S3. One thing I did notice is that the temporary local file is not deleted, so we'll need to add that in.
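To make that concrete, a rough sketch of a purge that covers both disks, assuming the command iterates exported files by age; the config key, directory, and retention window are illustrative only, not the package's actual code:

use Illuminate\Support\Facades\Storage;

// Hypothetical example: delete stale exports from the local temp disk and,
// when one is configured, from the S3 disk as well. Names are assumptions.
$cutoff = now()->subDay()->getTimestamp();
$disks = array_filter(['local', config('datatables-export.s3_disk')]);

foreach ($disks as $disk) {
    foreach (Storage::disk($disk)->files('exports') as $file) {
        if (Storage::disk($disk)->lastModified($file) < $cutoff) {
            Storage::disk($disk)->delete($file);
        }
    }
}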

@yajra (Owner) commented on Jan 18, 2023

Agreed. We could have another job to handle S3, I guess, since not everyone uses S3? Feel free to submit a PR, please. I'm currently away from Laravel and working on a mobile app project at the moment.
