
Module does not work when running locally via Docker #26

Closed

dirtybirdnj opened this issue Jun 25, 2019 · 3 comments

dirtybirdnj commented Jun 25, 2019

I've installed this module on a 4.3 site and I'm stuck. I cannot run dev/build; this is the error I get when I try:

[Emergency] Uncaught Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata server. (cURL error 28: Connection timed out after 1001 milliseconds (see http://curl.haxx.se/libcurl/c/libcurl-errors.html))

After some googling, it seems the module is behaving as if it were installed on an EC2 instance: the error indicates it's trying to fetch metadata about the EC2 instance it's running on, but I'm running it locally via Docker.

Things I have tried:

  1. Adding these fields to my .env file:

```
AWS_REGION = "us-east-1"
AWS_BUCKET_NAME = "bucket-name"
AWS_ACCESS_KEY_ID="XXXXX"
AWS_SECRET_ACCESS_KEY="YYYYY"
```

  2. Creating a ~/.aws/credentials file with the following contents:

```
[default]
AWS_ACCESS_KEY_ID = XXXX
AWS_SECRET_ACCESS_KEY = YYYY
```

  3. Adding the ENV values directly to docker-compose.yml:

```yml
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: web_db
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
    ports:
      - "9909:3306"
    volumes:
      - web-db:/var/lib/mysql
  web:
    build:
      dockerfile: docker/Dockerfile
      context: ./
    environment:
      SHELL: /bin/bash
      AWS_REGION: us-east-1
      AWS_BUCKET_NAME: bucket-name
      AWS_ACCESS_KEY: XXXXX
      AWS_SECRET_ACCESS_KEY: YYYYY
    volumes:
      - ./app:/work/app
      - ./html:/work/html
      - ./public:/work/public
      - ./themes:/work/themes
    depends_on:
      - db
    ports:
      - "8101:80"
    links:
      - db
volumes:
  web-db:
```
Nothing I do seems to change how the client is being created... please advise if there's something I'm doing wrong or something missing from the docs (highly unlikely).
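One thing worth verifying is whether the variables actually reach the process environment inside the container; the AWS SDK's environment-variable credential provider only sees exported variables, and it falls back to the instance metadata server (hence the timeout) when none are found. A sketch of the check (the `docker-compose exec` line assumes the `web` service from the compose file above):

```shell
# The real check would be run against the container:
#   docker-compose exec web env | grep '^AWS'
#
# A quick local illustration of the mechanism: a variable is only
# visible to the SDK's env provider if it is in the child process's
# environment, the way the compose `environment:` block sets it.
AWS_ACCESS_KEY_ID="XXXXX" sh -c '
  if env | grep -q "^AWS_ACCESS_KEY_ID="; then
    echo "SDK env provider would find the key"
  else
    echo "key not visible to child processes"
  fi'
```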

Collaborator

obj63mc commented Jun 25, 2019

You will need to update your YAML config to make sure the AWS credentials are being used.

In your /app/_config/app.yml, or whatever yml config file you are using for your project, add something like the following:

```yml
---
Only:
  envvarset: AWS_BUCKET_NAME
After:
  - '#assetsflysystem'
---
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: '`AWS_REGION`'
        version: latest
        credentials:
          key: '`AWS_ACCESS_KEY_ID`'
          secret: '`AWS_SECRET_ACCESS_KEY`'
```

By default the module expects to be running on some type of Amazon system, but outside of that, if you have the above in your config you will be all set. You can also set up a separate yml file that is ignored by git, like your .env file, containing this info separate from your main app's config (this is what we do locally).
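A minimal sketch of that git-ignored file approach (the file name `aws-local.yml`, the config block name, and the placeholder values are illustrative, not part of the module):

```yml
# app/_config/aws-local.yml -- add this path to .gitignore
---
Name: aws-local
After:
  - '#assetsflysystem'
---
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: 'us-east-1'
        version: latest
        credentials:
          key: 'XXXXX'
          secret: 'YYYYY'
```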

Author

dirtybirdnj commented Jun 25, 2019

This worked! Thank you @obj63mc for the lightning fast response ⚡️

Is there any downside to leaving this .yml file in when deploying? Is it basically overkill to have this in prod, because the module is set up to operate this way by default, and the .yml file just makes the configuration more descriptive and explicit?

Collaborator

obj63mc commented Jun 25, 2019

Since I host on Heroku and not on Amazon (such as EC2), I am not sure whether the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are set by default there; if those two variables are not set, you would probably get a connection error when trying to access the bucket.

If those environment variables are set at your hosting provider, you can definitely leave it checked in. The main thing is that when hosting on Amazon EC2, for example, as long as you have your servers configured with access to your S3 buckets, you shouldn't need to specify that information.
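In other words, on an EC2 instance with an IAM role attached, the same injector block could plausibly drop the credentials key and let the SDK fall back to the instance profile (a sketch mirroring the config above, not a tested production setup):

```yml
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: '`AWS_REGION`'
        version: latest
        # no credentials block: the SDK's default provider chain
        # resolves the instance profile via the EC2 metadata service
```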

@obj63mc obj63mc closed this as completed Jun 25, 2019