feat: add ability to use s3 storage #45
Conversation
Kudos, SonarCloud Quality Gate passed!
Also related: #16
Unfortunately, I don't have an S3 resource to test this against. But based on a code review from a previous project where I integrated an export to S3, the code looks correct. I used a similar approach there: I didn't stream the file, and I had to zip the export since it included images:

```php
Storage::disk('s3')->put('exports/'.$zipFileName, Storage::disk('tmp')->get($zipFileName));
```

@challgren do you approve of this PR?
We tested it locally and it works. We can improve it by adding tests, but that's minor.
Released on v1.1.0 🚀 Thanks! |
@yajra and @uchajk, one thing I forgot to mention: we should fix the cleanup command too, since it still references the old method: https://github.com/yajra/laravel-datatables-export/blob/master/src/Commands/DataTablesPurgeExportCommand.php#L34
Yeah, I had that issue before; what I did was use Livewire's temp folder on S3, since it can be configured to purge automatically. Maybe we can use that same concept, but I'm not sure how to do it yet.
Well, since Rackspace and DigitalOcean have S3-like file storage, we may want to stick with the command. The user could always configure retention/purging on the bucket if they desire, but I would say the purge command should purge S3 as well. One thing I did notice is that the temporary local file is not deleted, so we'll need to add that in too.
Agreed. We could have another job handle S3, I guess, since not everyone uses S3? Feel free to submit a PR, please. I'm currently away from Laravel and on a mobile app project at the moment.
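A minimal sketch of the purge change discussed above, assuming exports live under an `exports` prefix on both the local disk and the configured S3 disk (the function name, disk names, and prefix are illustrative, not the package's actual API):

```php
<?php

use Illuminate\Support\Carbon;
use Illuminate\Support\Facades\Storage;

// Delete stale export files from both the local temp disk and the
// S3 disk, so neither side accumulates leftovers.
function purgeExports(int $days = 1): void
{
    $cutoff = Carbon::now()->subDays($days)->getTimestamp();

    foreach (['local', 's3'] as $disk) {
        foreach (Storage::disk($disk)->files('exports') as $file) {
            if (Storage::disk($disk)->lastModified($file) < $cutoff) {
                Storage::disk($disk)->delete($file);
            }
        }
    }
}
```

Users who prefer bucket-level lifecycle rules could skip the S3 pass entirely, as noted above.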
OpenSpout cannot open an XLSX writer directly on S3, so we have to write the file locally first. If `s3_disk` is set (see the config file change), then once the XLSX file is fully written, we upload the generated file to the S3 bucket. When downloading, we again check whether `s3_disk` is set and download the file accordingly.

Sometimes the queue process and the web process do not share storage and have no access to each other's local storage, so I believe this feature can come in handy for others as well.
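The flow described above can be sketched as follows. This is an illustrative outline, not the PR's actual code; the config key `datatables-export.s3_disk` and the file name are assumptions:

```php
<?php

use Illuminate\Support\Facades\Storage;
use OpenSpout\Writer\XLSX\Writer;

$filename  = 'export.xlsx';
$localPath = Storage::disk('local')->path($filename);

// OpenSpout needs a local file path; it cannot write straight to S3.
$writer = new Writer();
$writer->openToFile($localPath);
// ... write header and rows here ...
$writer->close();

// After writing completes, copy the finished file to the S3 disk
// and remove the local temp copy.
if ($disk = config('datatables-export.s3_disk')) {
    Storage::disk($disk)->put($filename, Storage::disk('local')->get($filename));
    Storage::disk('local')->delete($filename);
}
```

On download, the same check would select `Storage::disk($disk)->download($filename)` when `s3_disk` is set, falling back to the local disk otherwise. This is what makes the feature work when the queue worker and the web server do not share a filesystem.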