figure out deploy to AWS #106

antislice opened this Issue Aug 26, 2016 · 4 comments



antislice commented Aug 26, 2016

Eric says they have AWS but not heroku, so we need to figure out how to use AWS.

@antislice antislice changed the title from TRANSITION: deploy to AWS to figure out deploy to AWS Aug 30, 2016



@antislice antislice added the ready label Aug 30, 2016

@antislice antislice added this to the Sprint 8/29-9/10ish milestone Aug 30, 2016

@antislice antislice added in progress and removed ready labels Aug 31, 2016




antislice commented Aug 31, 2016

  • Created a user, nola2016, to avoid using the main account credentials, as recommended.
  • I don't understand AWS permissions, and their documentation is not helpful. I gave the nola2016 user full admin access because that was easier than tracking down exactly which permission would fix ERROR: Unable to assign role. Please verify that you have permission to pass this role: aws-elasticbeanstalk-service-role. Otherwise, the instructions linked above work more or less as written.
  • Created an integrated postgres DB (other guides discuss making a standalone DB, but that seemed like overkill for us).
  • FINALLY got sinatra to connect to postgres; I was hitting some strange issues using a helper method in the Sinatra::Base extension class. (Those issues were unresolved at the time, but also reproduced locally.) - fixed now
  • The assets path requires a workaround. The default nginx config redirects /assets to /public/assets. We use Sprockets to dynamically compile and serve assets from the /assets directory, so that redirect breaks everything. If we precompiled assets, we could theoretically place them in /public/assets instead.
    • Solution: remove the redirect from the nginx config. The file to change is /opt/elasticbeanstalk/support/conf/webapp_healthd.conf, which /etc/nginx/conf.d/webapp_healthd.conf symlinks to. Following this StackOverflow answer, I added a container-command that copies our own nginx config, identical to theirs minus the redirect, over the one AWS uses. We also need to restart nginx for the change to take effect, but the config does stick across deploys. (So the restart could be changed to run leader_only, i.e. on a single instance; I think that's what leader_only is for.)
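For reference, the container-command approach could be sketched as an .ebextensions config like the one below. This is a sketch, not our actual file: the repo path .ebextensions/webapp_healthd.conf and the command names are assumptions.

```yaml
# .ebextensions/nginx.config (sketch)
# Assumes our redirect-free nginx config is checked into the repo at
# .ebextensions/webapp_healthd.conf (hypothetical path).
container_commands:
  01_copy_nginx_conf:
    # container_commands run from the app source dir during deploy,
    # so a relative path to the repo copy works here.
    command: "cp .ebextensions/webapp_healthd.conf /opt/elasticbeanstalk/support/conf/webapp_healthd.conf"
  02_restart_nginx:
    # Restart so the new config takes effect on this deploy.
    command: "service nginx restart"
```

Note that without leader_only, both commands run on every instance; leader_only: true would restrict a command to a single instance, which probably isn't what we want for a per-instance nginx restart.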
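On the postgres side, Elastic Beanstalk exposes an integrated RDS database through the RDS_* environment variables. A minimal sketch of building a connection URL from them (the database_url helper name is my own, not from our code):

```ruby
# Sketch: assemble a postgres connection URL from the RDS_* environment
# variables that Elastic Beanstalk sets for an integrated database.
require "uri"

def database_url(env = ENV)
  # Escape credentials in case they contain URL-reserved characters.
  user = URI.encode_www_form_component(env.fetch("RDS_USERNAME"))
  pass = URI.encode_www_form_component(env.fetch("RDS_PASSWORD"))
  host = env.fetch("RDS_HOSTNAME")
  port = env.fetch("RDS_PORT", "5432")
  db   = env.fetch("RDS_DB_NAME")
  "postgres://#{user}:#{pass}@#{host}:#{port}/#{db}"
end
```

A helper like this could feed whatever DB library the Sinatra app uses, instead of hardcoding connection details.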



antislice commented Sep 1, 2016

From Rails docs: "Compiled assets are written to the location specified in config.assets.prefix. By default, this is the /assets directory." Not the /public/assets directory that AWS is assuming.

@antislice antislice self-assigned this Sep 2, 2016




antislice commented Sep 2, 2016

See PR #108 for config and code changes to make this work on AWS.

@antislice antislice closed this in 52bc32c Sep 2, 2016

@antislice antislice removed the in progress label Sep 2, 2016
