I propose to abstract the storage of the (archived) files.
With MaxClass we have to store a large number of uploaded files (currently more than 100 GB).
This uses up considerable disk space, mostly for cold (rarely accessed) files.
The proposal is to place all these files on an S3-like service.
When we need a file (for download or resizing) we pull it from S3 and store it locally in a file cache.
Resized files can be stored in the same way, though they currently need only a small fraction of the disk space compared to the original files.
This would introduce a file service, with some configuration per web site.
The file service will manage the file cache and can be asked to return a local reference to a file.
All file access must go through this service, so access to files will have higher latency than before.
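To make the pull-through cache idea concrete, here is a minimal sketch in Python. All names here (`FileService`, `lookup`, the `fetch_remote` callback, the cache directory) are assumptions for illustration, not the actual Zotonic API:

```python
import os
import tempfile

class FileService:
    """Pull-through file cache: serve from the local cache,
    fetch from the remote (S3-like) store on a miss."""

    def __init__(self, cache_dir, fetch_remote):
        # fetch_remote(path) -> bytes, e.g. an S3 GET (assumed callback)
        self.cache_dir = cache_dir
        self.fetch_remote = fetch_remote

    def lookup(self, path):
        """Return a local filesystem reference for `path`."""
        local = os.path.join(self.cache_dir, path)
        if not os.path.exists(local):  # cache miss: pull from remote
            data = self.fetch_remote(path)
            os.makedirs(os.path.dirname(local) or ".", exist_ok=True)
            with open(local, "wb") as f:
                f.write(data)
        return local  # subsequent hits are served locally, no remote latency

# Usage: the first lookup pulls from the remote store, later ones are local.
remote = {"archive/2013/photo.jpg": b"...jpeg bytes..."}
cache = tempfile.mkdtemp()
svc = FileService(cache, remote.__getitem__)
ref = svc.lookup("archive/2013/photo.jpg")
```

The extra latency mentioned above only hits cache misses; once a file is cached, callers get a plain local path back.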
Please add your comments.
Ping @mmzeeman @kaos @arjan
I like the idea of a file service API. Much like the ACL API, where one of a number of modules provides the file service. This way, you can have a site with all files local, and another with the files on S3, while another user might prefer to store their files on Dropbox. We could then make the local file service the default, avoiding any extra configuration and staying completely backwards compatible with how it works now (on the fs layer).
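The pluggable-backend idea from the comment above could look roughly like this: a common get/put interface with a local-filesystem default and an S3-like alternative, selected per site. This is a sketch with invented names, not the actual module API (the S3 backend is stubbed with a dict in place of real remote calls):

```python
import os
import tempfile

class LocalFileStore:
    """Default backend: files stay on the local filesystem,
    matching current behaviour (no extra configuration)."""

    def __init__(self, root):
        self.root = root

    def get(self, path):
        with open(os.path.join(self.root, path), "rb") as f:
            return f.read()

    def put(self, path, data):
        full = os.path.join(self.root, path)
        os.makedirs(os.path.dirname(full) or ".", exist_ok=True)
        with open(full, "wb") as f:
            f.write(data)

class S3FileStore:
    """S3-like backend; get/put would call the remote service.
    (Stubbed here with a dict standing in for an S3 client.)"""

    def __init__(self, bucket):
        self.bucket = bucket

    def get(self, path):
        return self.bucket[path]

    def put(self, path, data):
        self.bucket[path] = data

# Per-site configuration selects the backend; callers see one interface.
sites = {
    "site_a": LocalFileStore(tempfile.mkdtemp()),
    "site_b": S3FileStore({}),
}
sites["site_b"].put("img/logo.png", b"png-bytes")
```

Because both backends expose the same interface, the rest of the code never needs to know where a site's files actually live.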
Cross ref back to post on BEAM community mailing list, mentioning this issue: https://groups.google.com/d/msg/beam-community/wf2IfmdbyRg/nqXcyR27ctsJ
Added wiki page to keep a working document for a new zotonic file service API: https://github.com/zotonic/zotonic/wiki/Zotonic-File-Service-API
Moved to 0.10.
This has been implemented in mod_filestore.
First commit: 6db73e6