
# Staff Device DNS / DHCP Admin

This is the web portal for managing Staff Device DNS / DHCP servers.

## Getting Started

### Authenticate with AWS

Assuming you have been granted the necessary access permissions to the Shared Services AWS account, follow the NVVS DevOps best practices step-by-step guide to configure AWS Vault and the AWS CLI with AWS SSO.

### Prepare the variables

1. Clone the repository
2. Copy `.env.example` to `.env`
3. Modify the `.env` file and provide values for the variables as below:

   | Variable | How? |
   | -------- | ---- |
   | `AWS_PROFILE=` | Your AWS CLI profile name for the Shared Services AWS account. Check this guide if you need help. |
   | `SHARED_SERVICES_ACCOUNT_ID=` | Account ID of the MoJO Shared Services AWS account. |
   | `REGISTRY_URL=` | `<MoJO Development AWS Account ID>.dkr.ecr.eu-west-2.amazonaws.com` |
   | `ENV=` | Your Terraform namespace from the DNS DHCP Infrastructure repo. |

4. Copy `.env.development` to `.env.<your terraform namespace>`
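For illustration, a filled-in `.env` might look like the following sketch (every value is a placeholder: made-up account IDs and a made-up profile name, not real ones):

```shell
# Write placeholder values into .env; substitute your own real values.
cat > .env <<'EOF'
AWS_PROFILE=shared-services-profile
SHARED_SERVICES_ACCOUNT_ID=111111111111
REGISTRY_URL=222222222222.dkr.ecr.eu-west-2.amazonaws.com
ENV=my-namespace
EOF
```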

### Prerequisite to starting the App

This repo depends on a locally running DHCP network, so that the admin app can query the DHCP API without timing out.

  1. Clone the repository here
  2. Follow the instructions in the cloned repository to run the dhcp server
  3. Navigate back to this repo

## Starting the App

1. If this is the first time you have set up the project:

   1. Build the base containers

      ```shell
      make build-dev
      ```

   2. Set up the database

      ```shell
      make db-setup
      ```

2. Start the application

   ```shell
   make serve
   ```

## Running Tests

1. Set up the test database

   ```shell
   make db-setup
   ```

2. Run the entire test suite

   ```shell
   make test
   ```

To run individual tests:

1. Shell onto a test container

   ```shell
   ENV=test make shell
   ```

2. Run the test file or folder

   ```shell
   bundle exec rspec path/to/spec/file
   ```

## Scripts

There are two utility scripts in the `./scripts` directory to:

  1. Migrate the database schema
  2. Deploy new tasks into the service

## Deployment

The deploy command is wrapped in a Makefile. It calls `./scripts/deploy`, which schedules a zero-downtime phased deployment in ECS.

It doubles the currently running tasks and briefly serves traffic from both the new and the existing tasks in the service. The older tasks are eventually decommissioned, and production traffic is gradually shifted over to only the new running tasks.

On CI this command is executed from the `buildspec.yml` file after the migrations have run and the new image has been published to ECR.

### Targeting the ECS Cluster and Service to Deploy

The ECS infrastructure is managed by Terraform. The names of the cluster and service are outputs from the Terraform apply and are published to SSM Parameter Store. When this container is deployed, it pulls those values from Parameter Store and sets them as environment variables.

The deploy script references these environment variables to target the ECS Admin cluster and service, avoiding any dependence on hardcoded strings.
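As a sketch of that pattern (the variable names below are illustrative assumptions, not necessarily the ones the real deploy script reads), a script can substitute defaults when the values have not been injected:

```shell
# Illustrative variable names; in ECS these would be injected from SSM Parameter Store.
CLUSTER_NAME="${CLUSTER_NAME:-example-cluster}"
SERVICE_NAME="${SERVICE_NAME:-example-service}"
echo "Deploying to cluster ${CLUSTER_NAME}, service ${SERVICE_NAME}"
```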

The build pipeline assumes a role to access the target AWS account.

## Publishing Image from Local Machine

1. Export the following configuration as an environment variable:

   ```shell
   export DHCP_DNS_TERRAFORM_OUTPUTS='{
     "admin": {
       "ecs": {
         "cluster_name": "[TARGET_CLUSTER_NAME]",
         "service_name": "[TARGET_SERVICE_NAME]"
       }
     }
   }'
   ```

This mimics what happens on CI, where this environment variable is already set.
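For example, a hypothetical way to pull a single value out of that JSON in a shell (placeholder cluster and service names; `python3` is used here only for JSON parsing):

```shell
# Placeholder JSON mirroring the structure above; real values come from Terraform outputs.
export DHCP_DNS_TERRAFORM_OUTPUTS='{"admin":{"ecs":{"cluster_name":"example-cluster","service_name":"example-service"}}}'
CLUSTER_NAME=$(printf '%s' "$DHCP_DNS_TERRAFORM_OUTPUTS" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["admin"]["ecs"]["cluster_name"])')
echo "$CLUSTER_NAME"   # prints "example-cluster"
```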

When run locally, you need to target the AWS account directly with AWS Vault.

2. Schedule the deployment

   ```shell
   aws-vault exec [target_aws_account_profile] -- make deploy
   ```

## Maintenance

### AWS RDS SSL Certificate

The AWS RDS SSL certificate is due to expire on August 22, 2024. See the documentation for information on updating the certificate closer to the date.

To update the certificate, update the `Dockerfile` to use the new intermediate (region-specific) certificate (found here), and update `config/database.yml` to point to the new certificate file path.

### DHCP Data Checks

For information on how to perform the data import before network cutover, please see the documentation.

## CI/CD

### Known Issues

- Dependabot does not currently offer a container image monitoring solution for the Docker image `ruby:3.2.2-alpine3.16`, so this Alpine image needs to be updated manually.