An Amazon S3 stream wrapper for PHP with full support for subdirectory trees, multiple protocol names and caching
aS3StreamWrapper: a better Amazon S3 stream wrapper for PHP

ABOUT

This is an Amazon S3 stream wrapper. It lets your PHP code use fopen(), file_put_contents(), opendir(), readdir(), closedir(), stat(), copy() and friends via Amazon S3. Unlike most wrappers, it fully supports subdirectories (not just directory listings of entire buckets).

LIMITATIONS

Be aware that files are buffered entirely in memory, which will lead to out-of-memory errors on files larger than your PHP memory limit. For our use cases this currently makes a lot of sense, but perhaps we'll add support for buffering on the local drive instead, as an option specified when registering a protocol or creating a stream context. It's not hard compared to the stuff we've covered already.

UNUSUAL FEATURES

* You can make subdirectories and sub-subdirectories (most S3 wrappers don't take advantage of S3's native support for listing objects with a prefix and a limit)
* You can register the wrapper more than once under different names
* You can subclass and extend the wrapper class properly
* Caching support for fast read access to the first 8K of each file
* Extensive unit tests

USAGE

Registering the stream wrapper is easy:

$wrapper = new aS3StreamWrapper();
$wrapper->register(array(
  'protocol' => 's3',
  'acl' => AmazonS3::ACL_PUBLIC,
  'key' => 'your key id',
  'secretKey' => 'your secret key',
  'region' => AmazonS3::REGION_US_W1));

Now you can open files over S3 with, for instance:

fopen("s3://bucketname/subdir/filename.txt", "w");

Note that the first directory level is the bucket name. You must use mkdir() to create a new bucket. Calling mkdir() and rmdir() for subdirectories below the bucket level is not really necessary, but we support it so that your code works naturally.

See lib/wrapper/aS3StreamWrapperTest.php for extensive examples of proper usage.

Note that you can register *more than one* protocol name, with *different* options. This means you can copy files from region to region, or with different ACL settings (public vs. private). See the EXAMPLE section below for a sketch.

Many (but not all) PHP functions accept stream wrapper URLs. See http://php.net/manual/en/book.stream.php for the official PHP documentation on streams.

Our stream wrapper class is in lib/wrapper along with its tests. lib/vendor contains a recent snapshot of the official AWS PHP SDK from Amazon.

CACHING

S3 is great at delivering content to end users and at scaling the storage of content in general, but you don't want to do many reads of entire S3 objects in rapid succession just to peek at their headers to determine image dimensions, file type, etc. The optional cache feature addresses this problem. You supply the cache object.

In addition to the options demonstrated above and in the tests, you can specify a 'cache' option, which must point to a cache object that supports get($key), set($key, $data, $timeout) (with $timeout in seconds), has($key) and remove($key) methods (hint: any subclass of sfCache is cool). This cache is used to store the first 8K of every file and also the results of stat(). Consider using memcache or even a MySQL cache that is fast and local to your servers. Of course, all of your servers must use the cache consistently to get consistent results.

SYMFONY

Although we built this for Symfony and Apostrophe, the wrapper has no dependencies on Symfony or Apostrophe. But if you add this folder to your plugins/ folder in a Symfony 1.x project and add it to ProjectConfiguration in the usual way, you'll be able to autoload the stream wrapper. Just FYI.
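
EXAMPLE

Here is a fuller sketch tying the pieces above together: two protocol names registered with different ACLs, a cache object, bucket creation with mkdir(), and ordinary file functions used over S3. Only the register() options and the cache method signatures come from the documentation above; the protocol names (s3public, s3private), the bucket and file names, and the ExampleArrayCache class are made up for illustration (in real code you would use memcache, a MySQL-backed cache, or an sfCache subclass instead of an in-memory array).

class ExampleArrayCache
{
  // Toy in-memory cache implementing the get/set/has/remove interface
  // described in CACHING. It is not shared between servers or requests,
  // so it is only useful as an illustration.
  protected $data = array();

  public function get($key)
  {
    return isset($this->data[$key]) ? $this->data[$key] : false;
  }

  public function set($key, $value, $timeout = 3600)
  {
    // $timeout (in seconds) is ignored by this toy cache
    $this->data[$key] = $value;
    return true;
  }

  public function has($key)
  {
    return isset($this->data[$key]);
  }

  public function remove($key)
  {
    unset($this->data[$key]);
    return true;
  }
}

// Register the wrapper twice under different protocol names, with
// different ACLs (and a cache for the public one)
$wrapper = new aS3StreamWrapper();
$wrapper->register(array(
  'protocol' => 's3public',
  'acl' => AmazonS3::ACL_PUBLIC,
  'key' => 'your key id',
  'secretKey' => 'your secret key',
  'region' => AmazonS3::REGION_US_W1,
  'cache' => new ExampleArrayCache()));

$wrapper = new aS3StreamWrapper();
$wrapper->register(array(
  'protocol' => 's3private',
  'acl' => AmazonS3::ACL_PRIVATE,
  'key' => 'your key id',
  'secretKey' => 'your secret key',
  'region' => AmazonS3::REGION_US_W1));

// The first path component is the bucket; mkdir() creates buckets
mkdir('s3public://mybucket');
mkdir('s3public://mybucket/images');
mkdir('s3private://myprivatebucket');

// Ordinary file functions now work over S3
file_put_contents('s3public://mybucket/images/hello.txt', "Hello, S3\n");

$in = fopen('s3public://mybucket/images/hello.txt', 'r');
echo stream_get_contents($in);
fclose($in);

// Directory listings work on subdirectories, not just whole buckets
$dir = opendir('s3public://mybucket/images');
while (($entry = readdir($dir)) !== false)
{
  echo $entry, "\n";
}
closedir($dir);

// Copy between the two protocols (different ACLs here; the same idea
// works for different regions or credentials)
copy('s3public://mybucket/images/hello.txt',
  's3private://myprivatebucket/hello.txt');
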
WARNING

Use at your own risk. There is no warranty, express or implied.

LICENSE

Copyright 2011 P'unk Avenue LLC. Released under the BSD license.

Built for Apostrophe: apostrophenow.com

CONTACT

Questions? Contact us via http://github.com/punkave

Also follow us at http://punkave.com/ and @punkave