tntk-io/tntk-infra


Project overview

This educational project provides hands-on practice with cloud technology and DevOps, and with deploying web applications according to best practices. The stack is designed to cover a broad range of technologies while giving each component a functional, meaningful role. The project takes a declarative approach to building infrastructure and demonstrates automated deployment of the entire stack. All components and their relationships are described in detail, showing how the application and its supporting services operate in a real cloud on a concrete example.

Main goal

Gain practical Infrastructure as Code (IaC) skills and learn how to deploy such applications to a cloud cluster.

Requirements and tools

  1. Terraform CLI (https://developer.hashicorp.com/terraform/install)
  2. AWS account (where everything will be deployed)
  3. AWS account where the Route 53 hosted zone is located
  4. Domain (e.g. "example.com")
  5. Terraform Cloud account (https://app.terraform.io/session)
  6. Repository access

Infrastructure deployment

To deploy the Terraform code, first create an account at https://app.terraform.io/session, then create an organization and a workspace.

  1. Log in to Terraform Cloud and create a workspace: go to "Projects & workspaces" → "New workspace" → "Create workspace", choose the "Version control workflow", and connect it to the GitHub repository where the Terraform code is located.
  2. Select a VCS provider: choose GitHub, then select GitHub.com from the menu.
  3. Set up the provider to connect GitHub.com to Terraform Cloud (for more details on connecting GitHub.com to Terraform Cloud, see the documentation: https://developer.hashicorp.com/terraform/cloud-docs/vcs).
  4. Choose the repository.
  5. Open "Advanced options".
  6. Define the folder with the Terraform code ("dev" or "prod").
  7. Define the branch name.
  8. Create the workspace.
  9. Define the required variables (see the variable sketch after this list). Don't forget to define the AWS environment variables ("AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY"; if you are using SSO, also add "AWS_SESSION_TOKEN").
  10. Start a plan.
  11. Apply. After planning completes, confirm & apply – this executes the changes defined by the Terraform configuration to create, update, or destroy resources. It takes about 30 minutes to create all resources.
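The exact variable values depend on your environment. As a rough sketch, the workspace variables listed in the Inputs section at the end of this README might look like this in HCL (terraform.tfvars-style) form; all values below are placeholders, and in the VCS-driven workflow they are normally entered in the Terraform Cloud workspace UI rather than committed to the repository:

```hcl
# Placeholder values only – substitute your own. Variable names follow the
# Inputs table at the end of this README.
base_domain             = "example.com"              # base domain for DNS records
aws_region              = "us-east-1"                # region where resources are created
tag_env                 = "prod"                     # environment tag
id_rsa                  = "ssh-rsa AAAA..."          # public SSH key for EC2 instances
datadog_api_key         = "<datadog-api-key>"
datadog_application_key = "<datadog-application-key>"
datadog_region          = "us5.datadoghq.com"
registrationToken       = "ghp_..."                  # token for GitHub Actions self-hosted runners
ci_project_repo         = "account-name/repository-name"
cd_project_repo         = "account-name/repository-name"

# AWS credentials are set as *environment* variables on the workspace:
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN when using SSO).
```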

Application deployment

After deploying the infrastructure, move on to the application deployment described in the application repository: https://github.com/tntk-io/tntk-ci.

Removing infrastructure

  1. Log in to Argo CD (e.g. "https://argo.prod.example.com/") and delete the "demo" application.
  2. Destroy. Open "Settings" → "Destruction and Deletion" and, inside the "Manually destroy" block, press the "Queue destroy plan" button. After planning completes, confirm the destroy – this executes the changes defined by the Terraform configuration to destroy resources. It takes about 30 minutes to destroy all resources.

Requirements

Name Version
terraform >= 1.4.2

Providers

Name Version
aws 5.35.0
helm 2.12.1
kubernetes 2.25.2
random 3.6.0
cloudinit 2.3.3
null 3.2.2
time 0.10.0
tls 4.0.5
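
As an illustration only, the Terraform version pin from the Requirements table and the provider versions above could be expressed in a terraform block roughly like this (the repository's actual constraints may be written differently):

```hcl
terraform {
  required_version = ">= 1.4.2"

  required_providers {
    aws        = { source = "hashicorp/aws", version = "5.35.0" }
    helm       = { source = "hashicorp/helm", version = "2.12.1" }
    kubernetes = { source = "hashicorp/kubernetes", version = "2.25.2" }
    random     = { source = "hashicorp/random", version = "3.6.0" }
    cloudinit  = { source = "hashicorp/cloudinit", version = "2.3.3" }
    null       = { source = "hashicorp/null", version = "3.2.2" }
    time       = { source = "hashicorp/time", version = "0.10.0" }
    tls        = { source = "hashicorp/tls", version = "4.0.5" }
  }
}
```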

Modules

Name Source Version
vpc terraform-aws-modules/vpc/aws 5.5.1
dynamodb terraform-aws-modules/dynamodb-table/aws 4.0.0
ecr cloudposse/ecr/aws 0.40.0
eks terraform-aws-modules/eks/aws 20.2.1
iam_assumable_role_for_lambda_execution terraform-aws-modules/iam/aws//modules/iam-assumable-role 5.33.1
iam_policy_for_lambda_execution terraform-aws-modules/iam/aws//modules/iam-policy ~> 5.33.1
rds cloudposse/rds-cluster/aws 1.7.0
s3_bucket_for_output_files terraform-aws-modules/s3-bucket/aws 4.1.0
sqs terraform-aws-modules/sqs/aws 4.1.0
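
For example, the vpc module from this table might be wired up roughly as follows; only the source and version come from the table, while the name, CIDR, and subnet layout are illustrative placeholders:

```hcl
# Sketch of a module call using the pinned source/version from the table above.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.1"

  name = "example-vpc"    # placeholder name
  cidr = "10.0.0.0/16"    # placeholder CIDR

  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
```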

Resources

Name Type
aws_caller_identity.current data source
aws_key_pair.devops resource
aws_eks_cluster.cluster data source
aws_eks_cluster_auth.cluster data source
aws_availability_zones.available data source
random_password.rds_db_name resource
random_password.rds_password resource
random_password.rds_admin_username resource
aws_ssm_parameter.save_rds_db_name_to_ssm resource
aws_ssm_parameter.save_rds_endpoint_to_ssm resource
aws_ssm_parameter.save_rds_password_to_ssm resource
aws_ssm_parameter.save_rds_admin_username_to_ssm resource
aws_ssm_parameter.save_dynamodb_table_name_to_ssm resource
aws_ssm_parameter.save_name_of_s3_for_output_files_to_ssm resource
aws_ssm_parameter.save_lambda_iam_role_arn_to_ssm resource
aws_ssm_parameter.sqs_arn resource
aws_ssm_parameter.sqs_name resource
aws_ssm_parameter.sqs_url_path resource
aws_cloudformation_stack.DatadogIntegration resource
helm_release.ack-lambda resource
helm_release.crd-helm-chart resource
helm_release.cert-manager resource
helm_release.actions-runner-controller resource
helm_release.ingress-nginx resource
helm_release.argocd resource
helm_release.argocd-apps resource
helm_release.datadog resource
kubernetes_service.ingress_gateway data source
aws_route53_record.eks_domain resource
aws_route53_record.eks_domain_cert_validation_dns resource
aws_acm_certificate.eks_domain_cert resource
aws_acm_certificate_validation.eks_domain_cert_validation resource
aws_lb.ingress data source
aws_route53_record.bastion_host_record resource
aws_route53_zone.base_domain data source
kubernetes_namespace.application resource
kubernetes_secret.demo-repo resource
aws_eip.bastion_host resource
aws_security_group.bastion_host resource
aws_instance.bastion_host resource
aws_ami.latest-ubuntu data source
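
Several of these resources follow a "generate, then store in SSM" pattern: a random value (for example the RDS password) is generated and then written to AWS Systems Manager Parameter Store so other components can read it. A minimal sketch of that pattern, with placeholder argument values:

```hcl
# Illustrative only – the parameter name and password length are placeholders.
resource "random_password" "rds_password" {
  length  = 16
  special = false
}

resource "aws_ssm_parameter" "save_rds_password_to_ssm" {
  name  = "/rds/password"
  type  = "SecureString"
  value = random_password.rds_password.result
}
```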

Inputs

Name Description Type Default Required
base_domain Base domain for our DNS records (e.g. "example.com") string n/a yes
aws_region AWS region to create our resources (e.g. "us-east-1") string n/a yes
tag_env Tag environment (e.g. "prod") string n/a yes
id_rsa Public ssh key for ec2 instances (e.g. "ssh-rsa XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...") string n/a yes
datadog_api_key DataDog api key. Get it at official DD web site. Region sensitive string n/a yes
datadog_application_key DataDog application key. Get it at official DD web site. Region sensitive string n/a yes
datadog_region DataDog region (e.g. "us5.datadoghq.com") string n/a yes
registrationToken Token for github actions self-hosted runners (e.g. ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) string n/a yes
ci_project_repo Git repo with source files for CI (e.g. "account-name/repository-name") string n/a yes
cd_project_repo Git repo with source files for CD (e.g. "account-name/repository-name") string n/a yes
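
Assuming standard Terraform variable declarations, a few of the inputs above would be declared roughly like this (a sketch, not the repository's exact variables.tf):

```hcl
variable "base_domain" {
  description = "Base domain for our DNS records (e.g. \"example.com\")"
  type        = string
}

variable "aws_region" {
  description = "AWS region to create our resources (e.g. \"us-east-1\")"
  type        = string
}

variable "datadog_api_key" {
  description = "DataDog API key (region sensitive)"
  type        = string
  sensitive   = true  # assumption: secret inputs are typically marked sensitive
}
```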