A Python API that receives temperature readings from remote sensors and stores them in AWS DynamoDB. The API also provides temperature statistics, such as the maximum, minimum, and average temperatures, based on the readings received.
- API Reference
- Built With
- Running the API locally
- Deploying the API
- Tearing down the Terraform stack
- CI/CD Pipeline
- Architecture
- Database
A Swagger documentation page is currently under development for the API, but in the meantime the information below should provide enough guidance on how to interact with it.
The API is available at: https://prod.lucastelemetry3m.com
The API running on the staging environment is also available at: https://staging.lucastelemetry3m.com
The staging environment is exposed only to demonstrate the multi-environment development strategy implemented. More about that in the CI/CD Pipeline section.
The API exposes two endpoints:
- `GET /api/stats`

  Used to retrieve the current temperature statistics based on the readings received. No authentication required.

  Response sample:

  ```
  200 OK
  { "Maximum": 28, "Minimum": -6, "Average": 10 }
  ```
- `PUT /api/temperature`

  Used to store a new reading. No authentication required.

  Expected payload sample:

  ```json
  { "sensorId": "202", "temperature": 18, "timestamp": "YYYY-MM-DDTHH:MM:SS" }
  ```

  Response sample:

  ```
  200 OK
  { "message": "Temperature reading recorded successfully" }
  ```
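As a quick sanity check, both endpoints can be exercised from Python. The sketch below is illustrative, assuming the prod base URL shown above and the `requests` library; the payload shape follows the samples documented here.

```python
from datetime import datetime, timezone

import requests

BASE_URL = "https://prod.lucastelemetry3m.com"  # from the section above

# Store a new reading (PUT /api/temperature)
reading = {
    "sensorId": "202",
    "temperature": 18,
    # ISO-8601 timestamp matching the documented "YYYY-MM-DDTHH:MM:SS" format
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
}
resp = requests.put(f"{BASE_URL}/api/temperature", json=reading)
print(resp.status_code, resp.json())

# Fetch the aggregated statistics (GET /api/stats)
print(requests.get(f"{BASE_URL}/api/stats").json())
# e.g. {"Maximum": 28, "Minimum": -6, "Average": 10}
```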
Python Application
- Python 3.8
- Docker v20.10.10
- Flask Web Framework v2.0.2
- Flask-RESTful 0.3.9
- Boto3 v1.20.14 (AWS SDK)
- uWSGI v2.0.18 (Web server)
- AutoPEP8 v1.6.0 (Python code formatter)
Infrastructure as Code (IaC)
- Terraform v2.4
- Python
- pip
- Flask
- uWSGI
- Docker
- Terraform
- AWS CLI
- AWS-Vault (not mandatory, but highly recommended for safely configuring AWS credentials locally)
The following are required to create this stack in your AWS Account.
- AWS IAM user with at least the permissions listed in this Sample IAM Policy
- Custom Domain registered in AWS Route53
After registering your domain, update the "dns_zone_name" variable in deploy/variables.tf with your domain name.
The Terraform state and lock are stored remotely, following best practices for working as part of a team. Terraform requires the following AWS resources to be set up for the remote state/lock.
- S3 Bucket (Used to store the TF State)
- DynamoDB Table (Used to store the TF Lock)
To replicate this, create the resources above and update terraform/main.tf to match your S3 bucket name and DynamoDB lock table. More information about setting up Terraform.
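If you prefer to script this step, the hedged boto3 sketch below creates both resources. The bucket and table names are placeholders and must match whatever terraform/main.tf references; Terraform's S3 backend requires the lock table to have a string hash key named `LockID`.

```python
import boto3

REGION = "us-east-1"                      # placeholder -- use your own region
STATE_BUCKET = "my-telemetry-tf-state"    # placeholder bucket name
LOCK_TABLE = "my-telemetry-tf-lock"       # placeholder table name

# S3 bucket for the Terraform state, with versioning enabled
s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(Bucket=STATE_BUCKET)  # us-east-1 needs no LocationConstraint
s3.put_bucket_versioning(
    Bucket=STATE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# DynamoDB table for the Terraform lock (hash key must be the string "LockID")
ddb = boto3.client("dynamodb", region_name=REGION)
ddb.create_table(
    TableName=LOCK_TABLE,
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```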
- Given that all the prerequisites above have been correctly installed, execute the following:
```bash
git clone https://github.com/lucasfdsilva/telemetry-app
cd telemetry-app/
pip install -r requirements.txt

export FLASK_APP=wsgi.py
export FLASK_ENV=development
export PREFIX=telemetry-dev
```
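For reference, PREFIX is presumably what the application uses to namespace its per-environment AWS resources (the table name in the seeding step below follows this pattern). The helper here is purely illustrative, not the repository's actual code:

```python
import os

# Assumption: the app derives environment-scoped DynamoDB table names
# from the PREFIX environment variable set above.
PREFIX = os.environ.get("PREFIX", "telemetry-dev")

def table_name(suffix: str) -> str:
    """Build a per-environment DynamoDB table name."""
    return f"{PREFIX}-{suffix}"

# table_name("temperature-readings-aggregation")
# -> "telemetry-dev-temperature-readings-aggregation"
```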
IMPORTANT
Terraform will use your AWS account to build all resources required. This will in turn generate costs in your AWS account.
Estimated costs for running the required infrastructure (per environment):
- Monthly: $125.19
- Daily: $4.04
- Hourly: $0.17
Please refer to the Architecture Diagram to understand which resources are part of this Terraform Stack.
- Once the AWS credentials have been configured locally, we will use Docker Compose to run Terraform.
```bash
docker-compose -f terraform/docker-compose.yml run --rm terraform init
docker-compose -f terraform/docker-compose.yml run --rm terraform workspace select dev \
  || docker-compose -f terraform/docker-compose.yml run --rm terraform workspace new dev
docker-compose -f terraform/docker-compose.yml run --rm terraform plan
docker-compose -f terraform/docker-compose.yml run --rm terraform apply
```
- Run the following command to seed the aggregations DynamoDB table. The condition expression makes the command idempotent: the seed item is only written if it does not already exist. Remember to replace "dev" with your workspace name if you chose a different one.
```bash
aws dynamodb put-item \
  --table-name telemetry-dev-temperature-readings-aggregation \
  --item file://terraform/templates/dynamodb/seed.json \
  --condition-expression "attribute_not_exists(total_readings_count)" \
  || true
```
- Now that Terraform has been initialized and the AWS Resources have been provisioned, run the application:
```bash
cd app/
flask run
```
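With the development server up (Flask serves on http://127.0.0.1:5000 by default), a quick smoke test from a second terminal might look like this; the payload values are arbitrary:

```python
import requests

# Assumption: `flask run` is serving on the default local port 5000.
LOCAL = "http://127.0.0.1:5000"

resp = requests.put(
    f"{LOCAL}/api/temperature",
    json={"sensorId": "1", "temperature": 21, "timestamp": "2022-01-01T12:00:00"},
)
print(resp.status_code, resp.json())

print(requests.get(f"{LOCAL}/api/stats").json())
```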
We will use Docker to build a new image and push it to the ECR repository in AWS so that ECS can pull and run it.
- Before we deploy, make sure your Terraform is valid.
```bash
docker-compose -f terraform/docker-compose.yml run --rm terraform init
docker-compose -f terraform/docker-compose.yml run --rm terraform fmt
docker-compose -f terraform/docker-compose.yml run --rm terraform validate
```
- Now run the following at the project root. Make sure you replace the variables where applicable to match your ECR repository.
```bash
# Build and push must reference the same tag so ECS pulls the image just built
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
```
- We can now apply the Terraform stack so that ECS uses the newest version of the API.
```bash
docker-compose -f terraform/docker-compose.yml run --rm terraform plan
docker-compose -f terraform/docker-compose.yml run --rm terraform apply
```
- After the apply job is complete, Terraform will output the URL you can use to access the application.
Since Terraform manages our entire stack, destroying and re-creating it can be done very quickly. The stack created for this application takes approximately 6 minutes to create from scratch.
- Ensure you're in the correct Terraform workspace.
- Destroy your Terraform stack
```bash
docker-compose -f terraform/docker-compose.yml run --rm terraform workspace select dev
docker-compose -f terraform/docker-compose.yml run --rm terraform destroy
```
In this repository you will find GitHub Actions workflows that automate continuous integration and continuous deployment of this application.
These workflows make it possible for the "staging" and "prod" environments to be constantly and seamlessly tested, created, and updated.
The staging environment is built following changes to the "main" branch, while "prod" is updated when new commits and pull requests are made to the "prod" branch.
For more information on the configuration of these workflows, please refer to the following:
The Terraform stack was developed following the 5 pillars of the AWS Well-Architected framework.
Please refer to the Architecture Diagram to understand the resources used and the relationship between these resources.
This application uses only DynamoDB to persist and access the data required. Because DynamoDB is a NoSQL database, the table schemas are simple.
Please refer to the Database Diagram to understand how the DynamoDB tables are set up.
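For context, the seeding step earlier hints at how GET /api/stats can stay fast regardless of how many readings exist: a single aggregation item holds running totals that each new reading updates. The sketch below shows how the stats could be derived from such an item; apart from total_readings_count (seen in the seeding command), the key and attribute names are assumptions, not the repository's actual schema.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
# Table name taken from the seeding command earlier in this README.
table = dynamodb.Table("telemetry-dev-temperature-readings-aggregation")

def get_stats() -> dict:
    """Derive the /api/stats payload from the single aggregation item."""
    item = table.get_item(Key={"id": "aggregate"})["Item"]  # assumed key
    count = int(item["total_readings_count"])
    return {
        "Maximum": int(item["max_temperature"]),   # assumed attribute
        "Minimum": int(item["min_temperature"]),   # assumed attribute
        "Average": round(int(item["temperature_sum"]) / count) if count else 0,
    }
```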