
S3 #58

Closed
ghost opened this issue Apr 11, 2016 · 4 comments

Comments

ghost commented Apr 11, 2016

I'm having trouble configuring S3 caching. Can you confirm I'm using the correct syntax? I've verified my S3 permissions and connected from this instance using the AWS SDK. My access key and secret key are set up via aws configure.

Start Command
imageproxy/bin/imageproxy -cache s3://s3.amazonaws.com/zone-dropbox/output/cm/image-cache/ -addr 10.138.107.125:8080

Error
2016/04/11 22:00:46 s3util.Create failed: unwanted http status 403: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. (RequestId: AEAC7AA329BBE037, HostId: tEBB1ELYxnC8PD5lT1JHhO02so4ZWyB3Vw+tIfQl2GPbs0398kR+S+7VFhoJy7Wzc5azjN/I0N0=)

willnorris (Owner) commented Apr 11, 2016
You either need to use an instance profile (when you're actually running on EC2), or the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY environment variables.

aws configure populates a ~/.aws/credentials file, which is specific to the aws CLI. imageproxy doesn't read from that file.
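The environment-variable approach can be sketched as a small shell snippet. The key values here are placeholders, and the cache URL and address are copied from the original start command:

```shell
# Placeholder credentials -- substitute your real key pair.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_KEY="wJalrEXAMPLESECRET"

# Then start imageproxy in the same shell so it inherits the variables
# (invocation shown commented out so the snippet stands alone):
# imageproxy/bin/imageproxy \
#   -cache s3://s3.amazonaws.com/zone-dropbox/output/cm/image-cache/ \
#   -addr 10.138.107.125:8080
```

Note that the variables must be exported in the environment that actually launches the imageproxy process (the same shell, a systemd unit, or a Docker env), not just in your interactive login shell.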

ghost (Author) commented Apr 12, 2016

Thanks. I tried exporting the environment variables as well, and that didn't work either; I'm still getting a 403. I've verified the permissions outside of imageproxy and everything seems fine.


noogen commented Apr 24, 2016

Can you try it without any folder? Check my working run configuration here: https://github.com/trybrick/imageproxy/blob/master/scripts/run.sh

Use:

-cache s3://s3.amazonaws.com/zone-dropbox

Reason: looking at the S3 cache module, the code appears to use the entire bucket; it doesn't seem to work with a bucket subfolder. You should therefore reserve a dedicated bucket for the cache.

If you think about it, this design makes sense for performance: S3 partitions objects (chooses a storage location) by the first n (usually 3) characters of the key, and the cache module uses a GUID-like hash to represent the URL it is caching, which distributes objects evenly across partitions.
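The distribution argument can be illustrated with a quick sketch. This shows the general idea, not imageproxy's exact scheme: hashing the request URL yields fixed-length keys whose leading characters are effectively random, so cached objects spread across S3's key-prefix partitions.

```shell
# Hash a request URL into a fixed-length cache key (a sketch, not
# imageproxy's actual algorithm). md5sum emits 32 hex characters whose
# leading characters vary uniformly across different URLs.
url="http://example.com/photos/cat.jpg"
key=$(printf '%s' "$url" | md5sum | cut -d' ' -f1)
echo "$key"
```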

P.S. Additionally, make sure the server/VPS has NTP installed and the clock kept in sync. This is an AWS requirement: signed requests are rejected if the client clock is skewed.

We love the ability to cache with S3. We run imageproxy as a Docker container and have no need to worry about storage, though it does cost us around $50 per month to cache 100+ sites with around 2K visits per day per site. We also sit behind a CDN to further reduce cost.

Additionally, we have our S3 bucket configured to auto-delete objects after 14 days, since storage is dirt cheap anyway; most of the cost comes from storing and retrieving objects. Cloudinary would have cost more, because most of our images are high-res and we need to transform them into different sizes for different sites. That's too much transformation and data transfer, which we solve by scaling up Docker instances using Rancher.
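The 14-day auto-delete mentioned above can be set up with an S3 lifecycle rule. A sketch using the AWS CLI, where the bucket name is the one from this thread and the rule ID is made up:

```shell
# Write a lifecycle rule that expires every object after 14 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-image-cache",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 14 }
    }
  ]
}
EOF

# Apply it to the cache bucket (requires credentials; shown commented out):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket zone-dropbox --lifecycle-configuration file://lifecycle.json
```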

ghost (Author) commented Apr 26, 2016

Thanks. This is solved.

This issue was closed.