This repository provisions a complete Google Cloud Platform (GCP) environment using Terraform.
```text
tf-gcp-automation/
├── main.tf
├── modules/
│   ├── cloudrun/
│   ├── sql/
│   ├── firewall/
│   ├── iam/
│   ├── load-balancer/
│   ├── monitoring/
│   ├── network/
│   ├── project/
│   ├── secrets/
│   └── vpc_connector/
├── outputs.tf
├── README.md
├── configs/
│   ├── example1.yaml
│   └── example2.yaml
├── generated_projects/
│   └── <project-name>/
│       ├── main.tf
│       ├── variables.tf
│       ├── terraform.tfvars
│       └── plan.txt
├── scripts/
│   ├── __init__.py
│   ├── deploy.py
│   ├── terraform_utils.py
│   ├── config_loader.py
│   ├── template_processor.py
│   └── project_generator.py
├── terraform.tfvars
└── variables.tf
```
- The `modules/` directory is used as a template for newly generated projects, instantiated with different variables.
- Each new YAML file in `configs/` represents a project to be created, with its specific requirements.
- The Python script then generates a new Terraform project under `generated_projects/` for each YAML file in `configs/`, passing the variables from the YAML file to the existing modules in the repository.
- The GitHub Actions workflow triggers the script when new YAML files are added to `configs/` (only when new files are added).
- The script uses existing, production-ready modules: the user only has to provide input variables in a YAML file, and the script does the rest.
- You can select only some of the modules; you are not required to use all of them, since the script handles dependency issues between modules.
- The root `main.tf` wires together the modular Terraform components under `modules/`.
- The existing template is used to extract a dependency map for the existing modules via `terraform graph`. This matters because new projects specified in a YAML file can have dependency issues between different modules.
- Google Cloud account with billing enabled
- Permissions to create projects/resources in your organization/folder (or use an existing project id)
- Terraform >= 1.5
- gcloud CLI installed and authenticated
Authenticate locally so Terraform can use your credentials:
```bash
# Login to Google Cloud
gcloud auth login

# Provide Application Default Credentials for Terraform
gcloud auth application-default login

# Set default project if desired
gcloud config set project <YOUR_PROJECT_ID>
```
- Python 3.8+
- Packages: `pyyaml`, `pydot`
- System dependency: Graphviz (required by `pydot`)
Install on Ubuntu/Debian:
```bash
sudo apt-get update && sudo apt-get install -y graphviz
python3 -m pip install --upgrade pip
python3 -m pip install pyyaml pydot
```
Run from the repository root:
```bash
# Initialize providers, modules, and backend
terraform init

# See what will be created/changed
terraform plan

# Apply the infrastructure
terraform apply -auto-approve

# Tear everything down
terraform destroy -auto-approve
```
If you change backend settings or the bucket, re-run `terraform init -reconfigure`.
- `modules/project`: Creates/uses a GCP project and enables required APIs
- `modules/network`: VPC, subnetwork, and outputs for self links/IDs
- `modules/firewall`: Ingress rules (e.g., port 5432 to SQL with tag `sql`)
- `modules/sql`: PostgreSQL instance, database, users; peering/network wiring; uses Secret Manager for the password
- `modules/secrets`: Creates secrets (e.g., DB password) and returns names/versions
- `modules/vpc_connector`: Serverless VPC Access connector for Cloud Run -> VPC
- `modules/iam`: Service account for Cloud Run and role bindings from `roles`
- `modules/cloudrun`: Cloud Run service; container image, port, envs, VPC connector, SA
- `modules/load-balancer`: External HTTPS Load Balancer fronting Cloud Run
- `modules/monitoring`: Alerting policy to `alert_email` for service health
The root `main.tf` composes these modules together, with outputs passed between them.
- Triggers on pushes to `main` that change files in `configs/*.yaml` or `configs/*.yml`
- Can also be run manually from the Actions tab (`workflow_dispatch`)
- Runs the script to create projects and produce Terraform plans
- GCP credentials (a service account JSON key), plus a Git username and email, stored as repository secrets
- You must also generate a Slack webhook for your channel and add it as a repository secret.
- Check out the repo
- Optionally detect newly added files under `configs/` (see the sketch after this list)
- Set up Python 3.10 and install `pyyaml`, `pydot`
- Authenticate to Google Cloud using `${{ secrets.GCP_Credentials }}`
- Install `gcloud` and set the `project_id` (replace `your-gcp-project-id` in the workflow)
- Set up Terraform 1.8.4
- Run `python scripts/deploy.py --overwrite`
- Push the newly generated project to a new branch in the same repository
- Send a notification to your Slack channel if the workflow succeeded
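The new-file detection can be done in several ways; a minimal sketch of one approach (a hypothetical helper using `git diff`; the workflow may implement this step differently, e.g., as a shell one-liner):

```python
import subprocess

def detect_new_configs(base_ref: str = "origin/main") -> list:
    """Return config files added (git status 'A') since base_ref."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=A", base_ref, "HEAD", "--", "configs/"],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p.endswith((".yaml", ".yml"))]
```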
The script is organized into focused modules for better maintainability. The main entry point is `scripts/deploy.py`, which orchestrates the entire process.
- Entry point for the entire generation process
- Loads configurations and coordinates all other modules
- Handles command-line arguments and provides summary output
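A condensed sketch of the orchestration flow (function names follow the sections below; the actual signatures in the repository may differ):

```python
"""Hypothetical condensed view of scripts/deploy.py."""
import argparse

from config_loader import load_yaml_configs, extract_selected_modules
from terraform_utils import load_dependency_map, validate_dependencies
from project_generator import (
    generate_project_structure,
    run_terraform_init,
    run_terraform_plan,
)

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate Terraform projects from YAML configs")
    parser.add_argument("--overwrite", action="store_true",
                        help="allow overwriting existing generated project folders")
    args = parser.parse_args()

    dependency_map = load_dependency_map()  # built once from the template via `terraform graph`
    for config in load_yaml_configs("configs"):
        selected = extract_selected_modules(config)
        validate_dependencies(set(selected), dependency_map)
        project_dir = generate_project_structure(config, overwrite=args.overwrite)
        run_terraform_init(project_dir)
        run_terraform_plan(project_dir)  # saves the plan to plan.txt

if __name__ == "__main__":
    main()
```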
`scripts/terraform_utils.py`:
- `get_terraform_graph()` - Runs the `terraform graph` command
- `parse_terraform_graph()` - Parses the graph output to extract module dependencies
- `load_dependency_map()` - Builds the complete dependency map from the template
- `validate_dependencies()` - Ensures selected modules have all required dependencies
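As a rough illustration, parsing the DOT output with `pydot` could look like the following (a sketch, assuming recent Terraform versions label graph nodes like `[root] module.<name>...`; the real parser may differ):

```python
import pydot

def _module_of(node_name: str):
    # Nodes look like '[root] module.sql.google_sql_database_instance.db (expand)';
    # extract the module name if the node belongs to one.
    name = node_name.strip('"')
    if name.startswith("[root] "):
        name = name[len("[root] "):]
    if name.startswith("module."):
        return name.split(".")[1].split(" ")[0]
    return None

def parse_terraform_graph(dot_output: str) -> dict:
    """Map each module to the set of modules it depends on."""
    (graph,) = pydot.graph_from_dot_data(dot_output)
    deps = {}
    for edge in graph.get_edges():
        src = _module_of(edge.get_source())
        dst = _module_of(edge.get_destination())
        if src and dst and src != dst:
            deps.setdefault(src, set()).add(dst)
    return deps
```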
`scripts/config_loader.py`:
- `load_yaml_configs()` - Loads all YAML files from the `configs/` directory
- `extract_selected_modules()` - Extracts modules marked as `selected: true`
`scripts/template_processor.py`:
- `filter_main_tf()` - Filters `main.tf` to include only selected modules
- `filter_variables_tf()` - Filters `variables.tf` to include only needed variables
- `generate_tfvars()` - Creates `terraform.tfvars` from module configurations
- `format_tfvars_value()` - Formats values for Terraform variable files
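Value formatting has to map Python types onto HCL literals; a minimal sketch of what `format_tfvars_value()` might do (the repository's version may handle more cases):

```python
def format_tfvars_value(value) -> str:
    """Render a Python value as an HCL literal for terraform.tfvars."""
    if isinstance(value, bool):           # must come before int: bool subclasses int
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, list):
        return "[" + ", ".join(format_tfvars_value(v) for v in value) + "]"
    if isinstance(value, dict):
        items = ", ".join(f"{k} = {format_tfvars_value(v)}" for k, v in value.items())
        return "{ " + items + " }"
    return f'"{value}"'                   # default: quoted string
```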
`scripts/project_generator.py`:
- `generate_project_structure()` - Creates project directories
- `copy_and_filter_templates()` - Copies and filters template files
- `run_terraform_init()` - Runs `terraform init` in the project directories
- `run_terraform_plan()` - Runs `terraform plan` and saves the output to `plan.txt`
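A plausible sketch of the plan step (assuming a plain `subprocess` call; flags and error handling in the actual script may differ):

```python
import subprocess
from pathlib import Path

def run_terraform_plan(project_dir: str) -> None:
    """Run `terraform plan` in project_dir and save the output to plan.txt."""
    result = subprocess.run(
        ["terraform", "plan", "-no-color", "-input=false"],
        cwd=project_dir, capture_output=True, text=True,
    )
    Path(project_dir, "plan.txt").write_text(result.stdout + result.stderr)
    result.check_returncode()  # propagate failures so the caller can exit non-zero
```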
For each YAML file, the system:
- Parses the selected modules and their input variables
- Builds a module dependency map using `terraform graph`
- Validates dependencies (sketched below), then generates a filtered `main.tf` and `variables.tf`
- Writes a `terraform.tfvars` containing only the needed variables
- Runs `terraform init` and `terraform plan`, saving the output to `plan.txt`
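Dependency validation can be as simple as a set check against the map extracted from the graph. A minimal sketch, assuming the dict-of-sets shape produced by the parser sketched earlier:

```python
def validate_dependencies(selected: set, dependency_map: dict) -> None:
    """Fail fast if a selected module depends on a module that was not selected."""
    for mod in selected:
        missing = dependency_map.get(mod, set()) - selected
        if missing:
            raise ValueError(
                f"module '{mod}' requires unselected modules: {sorted(missing)}"
            )
```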
Each file in `configs/` should define a project name and which modules are selected, along with their inputs. Example outline:
```yaml
project_name: example1-project
modules:
  project:
    selected: true
    project_id: my-gcp-project
    region: us-central1
  network:
    selected: true
    vpc_name: my-vpc
  sql:
    selected: false
```
Only modules with `selected: true` are included. Their provided keys become variables in the generated `terraform.tfvars`.
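Selection extraction is straightforward; a sketch based on the config shape above (the actual helper may return richer objects):

```python
def extract_selected_modules(config: dict) -> dict:
    """Return {module_name: inputs} for modules marked selected: true."""
    selected = {}
    for name, body in config.get("modules", {}).items():
        if body.get("selected"):
            # Everything except the 'selected' flag becomes module inputs.
            selected[name] = {k: v for k, v in body.items() if k != "selected"}
    return selected
```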
Run from the repository root:
```bash
python3 scripts/deploy.py              # generate projects for all YAMLs in configs/
python3 scripts/deploy.py --overwrite  # allow overwriting existing generated project folders
python3 scripts/deploy.py --help       # show available options
```
Output appears under `generated_projects/<project_name>/`:

```text
generated_projects/
└── example1-project/
    ├── main.tf
    ├── variables.tf
    ├── terraform.tfvars
    └── plan.txt
```
- The script exits non-zero if any project fails validation or planning
- If the plan looks good, you can then `cd generated_projects/<project_name>` and run `terraform apply`
- Some modules didn't appear to depend on core modules like `project` in the graph
- Root cause: variables such as `project_id` were passed directly via `tfvars` to downstream modules instead of being wired from `module.project` outputs, so Terraform could not infer the dependency
- Prefer wiring like:
module "sql" {
source = "./sql"
project_id = module.project.project_id
}
- Duplicated values like `project_id` across multiple modules make the YAML verbose and error-prone
- Solution: reuse the existing template, copying the necessary files and filtering down to only the selected modules, as shown in `filter_main_tf()` (sketched below)
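A rough sketch of that filtering, assuming the template's `main.tf` declares each module as a top-level `module "<name>" { ... }` block (a naive block scanner; the real implementation may use a proper HCL parser):

```python
import re

def filter_main_tf(template_text: str, selected: set) -> str:
    """Drop module blocks whose name is not in `selected`; keep everything else."""
    out, skipping, depth = [], False, 0
    for line in template_text.splitlines(keepends=True):
        m = re.match(r'\s*module\s+"([^"]+)"\s*\{', line)
        if m and depth == 0 and m.group(1) not in selected:
            skipping = True
        if not skipping:
            out.append(line)
        # Track brace depth so we skip the whole block, including nested braces.
        depth += line.count("{") - line.count("}")
        if skipping and depth == 0:
            skipping = False
    return "".join(out)
```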
- The raw `terraform graph` output may contain duplicate edges and self-dependencies
- Deduplicate and drop self-references when parsing:

```python
# Remove duplicate edges and self-references from the parsed dependency map.
for mod, deps in dependencies.items():
    unique_deps = sorted(set(d for d in deps if d != mod))
    dependencies[mod] = unique_deps
```
You can auto-create a GCS bucket for state, then write a `backend.tf` file that points to it:
resource "random_id" "default" {
byte_length = 8
}
resource "google_storage_bucket" "default" {
name = "${random_id.default.hex}-terraform-remote-backend"
location = "US"
force_destroy = false
public_access_prevention = "enforced"
uniform_bucket_level_access = true
versioning {
enabled = true
}
}
resource "local_file" "default" {
file_permission = "0644"
filename = "${path.module}/backend.tf"
content = <<-EOT
terraform {
backend "gcs" {
bucket = "${google_storage_bucket.default.name}"
}
}
EOT
}
Instead of passing DB host/port environment variables, use the socket path `/cloudsql/<project>:<region>:<instance>` by adding annotations to the Cloud Run service:
```hcl
metadata {
  annotations = {
    "autoscaling.knative.dev/maxScale"      = "1000"
    "run.googleapis.com/cloudsql-instances" = google_sql_database_instance.instance.connection_name
    "run.googleapis.com/client-name"        = "terraform"
  }
}
```
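On the application side, the service then connects over the Unix socket instead of TCP. For example, with `psycopg2` (a sketch; the environment variable names and credentials here are placeholders):

```python
import os
import psycopg2

# With the annotation above, Cloud Run mounts the Cloud SQL socket under
# /cloudsql/<connection_name> inside the container.
conn = psycopg2.connect(
    host=f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}",  # e.g. project:region:instance
    dbname=os.environ.get("DB_NAME", "postgres"),
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
)
```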