🍐 An AWS deployment automation pattern.
About 🍐

Update 29 September 2018: Since the release of AWS Step Functions, this pattern is out of date. Orchestrating deployment with a slim lambda function is still a solid pattern, but Step Functions could make the implementation simpler and could provide more features (like graphical display of deployment status). You can even see some of the ways this might work in AWS's DevOps blog. Those implementation changes would drive some re-architecture, too. Because those are major changes, for now Pear is an interesting exploration of a pattern but isn't ready for use in new deployments.

Pear is an AWS deployment automation pattern presented through an example implementation. It's designed to make it easy to add slick features like voice interfaces while giving you enough control to stay secure and enough stability to operate in production. It's meant for infrastructures too complex to fit in tools like Heroku or Elastic Beanstalk, for situations where you need to be able to turn all the knobs.

Pear Architecture

See the design doc for more info.


This project is about which deployment tools to use and how to hook them together. To keep the example easy to read and easy to launch, some other best practices have been omitted. For instance, in real production you'd keep each of Pear's lambda functions in a separate repository. That's a pain in an example scenario, so here they're all kept in one.

Some specific things to watch out for:

  • Pear assumes you have a single Alexa skill that you'll connect to Pear envs one at a time.

  • The shell scripts were tested on macOS Sierra (10.12.5). Some limited testing suggests you may find a little funkiness on other platforms.

  • Pear uses local shell scripts as the build process for its lambda functions.
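Pear's actual build scripts aren't reproduced here, but a per-function packaging step might look roughly like the sketch below. The `package_function` helper name and the directory layout are assumptions for illustration, not Pear's implementation:

```shell
# Hypothetical sketch of a per-function build step: stage the function's
# source into a build directory, then zip it for upload to Lambda.
# The helper name and layout are assumptions, not Pear's actual script.
package_function() {
  fn="$1"
  rm -rf "build/$fn" "build/$fn.zip"
  mkdir -p "build/$fn"
  cp -R "$fn/." "build/$fn"
  # Zip the staged contents so handler files sit at the archive root,
  # as Lambda expects; python3 -m zipfile is used for portability.
  (cd "build/$fn" && python3 -m zipfile -c "../$fn.zip" .)
}
```

A separate publish step would then upload `build/<function>.zip` to the right environment, which is why packaging and publishing are distinct scripts in the steps below.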

User Guide



(These are all the steps you'd take on a first run. On future runs you can skip some.)

  1. Create an Alexa skill in the Amazon Developer Console (the interaction model is defined in `lambda_functions/deployer_alexa_skill/interaction_model`).

  2. Copy the Application Id from the Skill Information tab of your Alexa skill in the Amazon Developer Console into a file called `.alexa_skill_id` in the project root.

  3. `source setup.sh`

  4. `cd lambda_functions`

  5. `./package.sh -f deployer`

  6. `./package.sh -f deployer_alexa_skill`

  7. `cd ../terraform/global`

  8. `terraform init -backend-config ../.backend_config`

  9. `terraform apply`

  10. `cd ../env`

  11. `terraform init -backend-config ../.backend_config`

  12. Create environments with `terraform env`.

  13. `terraform apply`

  14. Use the AWS web console to open the `deployer_alexa_skill_<env>` lambda function and add Alexa Skills Kit as a Trigger (not yet supported by Terraform).

  15. If you want to get alerts when the honeypot is triggered, subscribe to the `pear_alerts_<env>` SNS topic.

  16. `cd ../site`

  17. `./publish.sh -e <env>`

  18. `cd ../lambda_functions`

  19. `./publish.sh -f deployer -e <env>`

  20. `./publish.sh -f deployer_alexa_skill -e <env>`
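
Collected in one place, the command-line portion of a first run looks something like the sequence below. `<env>` is a placeholder for your environment name, and the `terraform env new` subcommand is an assumption about how step 12's "create environments" maps to a concrete command; the console steps (Alexa skill setup, the Alexa Skills Kit trigger, SNS subscription) still happen in the browser:

```shell
# First-run command sequence from the steps above, collected for reference.
# Replace <env> with your environment name; run from the project root.
source setup.sh

# Package both lambda functions.
cd lambda_functions
./package.sh -f deployer
./package.sh -f deployer_alexa_skill

# Stand up the account-wide (global) infrastructure.
cd ../terraform/global
terraform init -backend-config ../.backend_config
terraform apply

# Stand up a per-environment stack; create the environment first.
cd ../env
terraform init -backend-config ../.backend_config
terraform env new <env>
terraform apply

# Publish the site and the packaged lambda functions into the environment.
cd ../site
./publish.sh -e <env>
cd ../lambda_functions
./publish.sh -f deployer -e <env>
./publish.sh -f deployer_alexa_skill -e <env>
```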