FLYHOPPER

A flight connections application. It uses Ryanair's public APIs to find connecting flights for the given airports and dates.

Structure

The project has a Gradle multi-module layout. Deployment code is located in the deploy folder.

Docker Compose is used to build the images for each module and publish them to a Docker registry.
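
Gradle can print the modules included in the build, which is a quick way to inspect the layout (the root project and module names in the sample output below are illustrative):

./gradlew projects

# Sample output (names are illustrative):
# Root project 'flyhopper'
# +--- Project ':backend'
# \--- Project ':frontend'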

Format

Minimal formatting rules are defined in the .editorconfig file.

Build

Gradle is the tool used to automate build tasks locally.

For image creation, Docker builds the binaries (using Gradle) in a first stage; the output of that stage is then used to create the application image (see backend/Dockerfile for details).

Docker Compose is used to build all modules (from their Dockerfiles) and run them inside containers.

To make sure everything works before pushing changes, copy the deploy/pre-push file to the .git/hooks/ directory, as shown below. Use git push --no-verify to skip these checks.
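
For example, from the project root (Git hooks must be executable to run):

# Install the pre-push hook so the checks run before every push
cp deploy/pre-push .git/hooks/pre-push
chmod +x .git/hooks/pre-push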

Useful build commands (a typical workflow combining them is sketched after the list):

  • Build: ./gradlew installDist. Generates:

    • Application directory: backend/build/install/backend
    • Packaged application: backend/build/distributions
    • Web application: backend/build/libs/ROOT.war
    • Single JAR with dependencies: backend/build/libs/<module>-all-<version>.jar
    • Application specifications: backend/build/reports/cucumber
  • Rebuild: ./gradlew clean installDist

  • Documentation: ./gradlew doc. Creates:

    • API documentation: backend/build/dokka/backend
    • Coverage report: backend/build/reports/jacoco/test/html
  • Generate everything (binaries and documentation): ./gradlew all

  • Run: ./gradlew run

  • Watch: ./gradlew --no-daemon --continuous runService

  • Build local container images: docker-compose build

  • Start application inside containers: docker-compose up -d
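
As a quick reference, a typical local round trip combines the commands above; docker-compose down (the standard Compose command to stop and remove the containers) is added at the end:

# Build the application binaries
./gradlew clean installDist

# Build the container images and start the application in the background
docker-compose build
docker-compose up -d

# Stop and remove the containers when finished
docker-compose down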

Testing

The HTTPie and jq tools are used in some scripts to test the application manually.

The IntelliJ HTTP Client is also used to perform requests interactively (see backend/src/test/http/*.http).
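
A manual check from the command line could look like the following; the port, endpoint and query parameters are hypothetical, so refer to the scripts and the *.http files for the real requests:

# Query the service and pretty-print the first result with jq
# (port, path and parameters below are only an illustration)
http GET ':8080/interconnections?departure=DUB&arrival=WRO&departureDateTime=2019-06-01T07:00&arrivalDateTime=2019-06-01T21:00' | jq '.[0]'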

Release

Source code and container images should be tagged when a Pull Request is merged into a live branch. This is still to be implemented in the CI/CD pipeline.

Publish

Published artifacts are Docker images, and they are pushed to a Docker registry. For Minikube and Docker Compose, the local image store is used instead of a registry.

Inside AWS, each account is provided with a single (default) Amazon ECR registry. The registry address follows the pattern <account_id>.dkr.ecr.<region>.amazonaws.com, e.g. 487943794540.dkr.ecr.eu-west-1.amazonaws.com.

Each registry can have many ECR repositories. A repository holds all images of a service (different versions).

If an account is shared by many projects, ECR repositories should be namespaced (a hierarchy) to avoid name clashes, e.g.:

  • <account_id>.dkr.ecr.<region>.amazonaws.com/app1/srv1
  • <account_id>.dkr.ecr.<region>.amazonaws.com/app1/srv2
  • <account_id>.dkr.ecr.<region>.amazonaws.com/app2/srv1
  • <account_id>.dkr.ecr.<region>.amazonaws.com/app2/srv2

To push images, ECR repositories matching the image names must exist (they could also be created dynamically by the CI/CD process), so one repository per service has to be created and the local images tagged accordingly. To do so, follow these steps (a complete example is sketched after the list):

  1. Log in to your AWS account (e.g. by configuring credentials with aws configure)
  2. Log in to the AWS Docker registry: eval $(aws ecr get-login --region <region> --no-include-email)
  3. Build the container images with the repository prefix: registry="<repository>" docker-compose build. The <repository> value MUST end with '/', e.g.: registry="487943794540.dkr.ecr.eu-west-1.amazonaws.com/application/" docker-compose build
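
Putting it together, a full push to ECR could look like this. The account id, region and repository names are only examples, and it is assumed that the Compose file uses the registry variable as the image name prefix; aws ecr create-repository and docker-compose push are the standard commands to create a repository and push the built images:

# Create one ECR repository per service (only needed once; names are examples)
aws ecr create-repository --repository-name application/backend --region eu-west-1
aws ecr create-repository --repository-name application/frontend --region eu-west-1

# Authenticate Docker against the AWS registry
eval $(aws ecr get-login --region eu-west-1 --no-include-email)

# Build the images with the registry prefix and push them
registry="487943794540.dkr.ecr.eu-west-1.amazonaws.com/application/" docker-compose build
registry="487943794540.dkr.ecr.eu-west-1.amazonaws.com/application/" docker-compose push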

Deployment

To deploy the application services, their images must be published in the corresponding repositories.

Minikube

Prior to deploying to Minikube, VirtualBox and HTTPie must also be installed. The deployment script has to be run from the project root: deploy/minikube.sh. It initializes a Minikube instance and deploys the application services.

You can find more information inside the script file.
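
For reference, the script roughly automates steps along these lines (an illustrative approximation; the manifest path and service name are hypothetical):

# Start a local Kubernetes cluster on VirtualBox
minikube start --vm-driver virtualbox

# Build the images against Minikube's Docker daemon (no external registry needed)
eval $(minikube docker-env)
docker-compose build

# Apply the Kubernetes manifests (path is hypothetical)
kubectl apply -f deploy/

# Check the deployed service with HTTPie (service name is hypothetical)
http $(minikube service backend --url)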

Minishift

Heroku

To deploy the WAR to Heroku, first set up the Heroku CLI tool and the project:

heroku login
heroku plugins:install heroku-cli-deploy
heroku create flyhopper

And then build and upload the binary:

./gradlew clean assemble
heroku war:deploy backend/build/libs/ROOT.war --app flyhopper
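
Once the deployment finishes, standard Heroku CLI commands can be used to verify it:

# Open the deployed application in a browser
heroku open --app flyhopper

# Tail the application logs
heroku logs --tail --app flyhopper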

TODO

  • Generate a documentation site (use Orchid or JBake)
  • Add front end mounted on NGINX or Apache
  • Code stress tests using Gatling.io against local, container, or deployed service
  • Deploy to Docker Swarm and check conversion of docker-compose.yml to K8S
  • Configuration management (ConfigMaps)
  • CI/CD Pipeline

Other goodies

  • The multi-module build makes it possible to deploy all of the application's services together
  • Keeping the infrastructure code inside the project is more convenient
  • Having documentation and stress tests as modules (a Git monorepo) also makes the project faster to develop and maintain