Prepare your Database for Disaster Recovery with Cloud SQL

Introduction

This architecture uses click-to-deploy so you can spin up infrastructure in minutes using Terraform!

In today's world, where downtime can cause significant revenue loss and impact customer satisfaction, having a highly available database is critical. Critical, global applications require highly available databases that provide low-latency access to data and minimize downtime caused by infrastructure failures or disasters.

Whether you're a developer, a DevOps engineer, or a system administrator, this click-to-deploy architecture is designed to help you automate the deployment and management of your Cloud SQL PostgreSQL database with support for failover. With this solution, you can deploy a highly available relational database that keeps your data accessible and resilient to failures, and that provides disaster recovery capabilities in case of a regional outage.

This blueprint creates a Cloud SQL instance with multi-region read replicas as described in the Cloud SQL for PostgreSQL disaster recovery article.

The solution is resilient to a regional outage. To get familiar with the procedure needed in the unfortunate case of a disaster, follow the steps described in part two of the aforementioned article.

This repo is based on the Cloud Foundation Fabric blueprint available here.

Use cases:

These are some examples of the use cases where it is critical to have a highly available database:

  • Any application that has a strong high availability requirement
  • E-commerce: An e-commerce website that serves customers all over the world could use a multi-region database to ensure that its website is always available, even if there is an outage in one region.
  • Social media: A social media platform that has users all over the world could use a multi-region database to improve performance and scalability.
  • Financial services: A financial services company that is required to have its data replicated across multiple regions for compliance purposes could use a multi-region database to meet those requirements.

Architecture

This is the high-level diagram:

Cloud SQL multi-region.

The solution will use:

  • A Cloud SQL for PostgreSQL instance with cross-region read replicas
  • A Cloud Storage bucket to import/export data from Cloud SQL
  • A test VM to connect to the Cloud SQL instance
  • Dedicated service accounts

If you're migrating from another Cloud Provider, refer to this documentation to see equivalent services and comparisons in Microsoft Azure and Amazon Web Services.

Costs

Pricing Estimates - We have created a sample estimate based on the usage we see from new startups looking to scale. This estimate gives you an idea of how much this deployment would cost per month at that scale; you can extend it to the scale you prefer. Here's the link.

Requirements

This blueprint will deploy all its resources into the project defined by the project_id variable. Please note that we assume this project already exists. However, if you provide the appropriate values to the project_create variable, the project will be created as part of the deployment.

If project_create is left null, the identity performing the deployment needs the owner role on the project defined by the project_id variable. Otherwise, the identity performing the deployment needs the resourcemanager.projectCreator role on the resource hierarchy node specified by project_create.parent and the billing.user role on the billing account specified by project_create.billing_account_id.
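As an illustration only, granting the second set of roles to the deploying identity could look like the following gcloud commands (ORG_ID, BILLING_ACCOUNT_ID and USER_EMAIL are placeholders, not values defined by this repository):

# Allow the user to create projects under the node used as project_create.parent
gcloud organizations add-iam-policy-binding ORG_ID \
    --member="user:USER_EMAIL" \
    --role="roles/resourcemanager.projectCreator"

# Allow the user to link the new project to the billing account
gcloud billing accounts add-iam-policy-binding BILLING_ACCOUNT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/billing.user"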

Spinning Up The Architecture

Before we deploy the architecture, you will need the following information:

  • The service project ID.
  • A unique prefix that you want all the deployed resources to have (for example: cloudsql-multiregion-hpjy). This must be a string with no spaces or tabs.

Click on the button below, sign in if required and when the prompt appears, click on “confirm”. It will walk you through setting up your architecture.

Open in Cloud Shell

This is the startup screen (Cloud Shell) that appears after clicking the button and confirming:

During the process, you will be asked for some user input. All necessary variables are explained at the bottom of this README file. In case of failure, you can simply click the button again.
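If you prefer to run Terraform directly instead of going through the interactive prompt, a minimal sketch looks like the following, assuming you run it from the root of this repository (the values shown are placeholders; the full list of variables is described in the Variables section below):

terraform init
terraform apply \
    -var 'project_id=PROJECT_ID' \
    -var 'prefix=cloudsql-multiregion-hpjy' \
    -var 'postgres_user_password=PASSWORD'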

🎉 Congratulations! 🎉
You have successfully deployed your environment on Google Cloud.

Move to real use case consideration

This implementation is intentionally minimal and easy to read. A real-world use case should consider:

  • Using a Shared VPC
  • Using VPC-SC to mitigate data exfiltration

Shared VPC

The example supports the configuration of a Shared VPC as an input variable. To deploy the solution on a Shared VPC, you have to configure the network_config variable:

network_config = {
  host_project       = "PROJECT_ID"
  network_self_link  = "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/VPC_NAME"
  subnet_self_link   = "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME"
  cloudsql_psa_range = "10.60.0.0/24"
}

To run this example, the Shared VPC project needs to have:

  • A Private Service Access (PSA) range of /24 (example: 10.60.0.0/24) to deploy the Cloud SQL instance (see the sketch after this list).
  • Internet access configured (for example, Cloud NAT) to let the test VM download packages.
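If these prerequisites are not already in place on the host project, a hedged sketch of how they could be created with gcloud follows (cloudsql-psa-range, nat-router and nat-config are illustrative names; VPC_NAME, REGION and PROJECT_ID are placeholders):

# Reserve an IP range for Private Service Access and peer it with the Service Networking API
gcloud compute addresses create cloudsql-psa-range \
    --global --purpose=VPC_PEERING \
    --addresses=10.60.0.0 --prefix-length=24 \
    --network=VPC_NAME --project=PROJECT_ID

gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=cloudsql-psa-range \
    --network=VPC_NAME --project=PROJECT_ID

# Give the test VM outbound Internet access through Cloud NAT
gcloud compute routers create nat-router \
    --network=VPC_NAME --region=REGION --project=PROJECT_ID

gcloud compute routers nats create nat-config \
    --router=nat-router --region=REGION \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges \
    --project=PROJECT_ID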

To run the example and deploy Cloud SQL on a Shared VPC, the identity running Terraform must have the following IAM roles on the Shared VPC host project (see the sketch after this list).

  • Compute Network Admin (roles/compute.networkAdmin)
  • Compute Shared VPC Admin (roles/compute.xpnAdmin)
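For illustration, granting these roles could look like the following (HOST_PROJECT_ID and USER_EMAIL are placeholders):

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="user:USER_EMAIL" --role="roles/compute.networkAdmin"

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="user:USER_EMAIL" --role="roles/compute.xpnAdmin"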

Test your environment

We assume all of these steps are run as a user listed in data_eng_principals. You can authenticate as that user with the following commands:

gcloud init
gcloud auth application-default login

Below you can find commands to connect to the VM instance and Cloud SQL instance.

  $ gcloud compute ssh sql-test --project PROJECT_ID --zone ZONE
  sql-test:~$ cloud_sql_proxy -instances=CLOUDSQL_INSTANCE=tcp:5432
  sql-test:~$ psql 'host=127.0.0.1 port=5432 sslmode=disable dbname=DATABASE user=USER'

You can find the computed commands in the Terraform demo_commands output.
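Once the proxy is running (start it in a separate terminal or in the background before connecting), a quick sanity check could look like this; the entries table is purely illustrative and is not created by the blueprint, while guestbook is the default database name:

  sql-test:~$ psql 'host=127.0.0.1 port=5432 sslmode=disable dbname=guestbook user=USER' \
      -c "CREATE TABLE IF NOT EXISTS entries (id SERIAL PRIMARY KEY, content TEXT);" \
      -c "INSERT INTO entries (content) VALUES ('hello from the primary');" \
      -c "SELECT * FROM entries;"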

How to recover your initial deployment by using a fallback

To implement a fallback to your original region (R1) after it becomes available, you can follow the same process described in part two of the Cloud SQL for PostgreSQL disaster recovery article referenced above; the process is summarized there.
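At its core, that procedure promotes a cross-region read replica and then rebuilds replication in the opposite direction. A minimal, hedged sketch with gcloud (instance and region names are placeholders; the article covers the additional steps, such as updating application connection strings):

# Promote the read replica in the surviving region to a standalone primary
gcloud sql instances promote-replica REPLICA_INSTANCE_NAME --project=PROJECT_ID

# Once R1 is available again, create a new read replica there from the promoted primary;
# promoting that replica later completes the fallback to R1
gcloud sql instances create NEW_R1_REPLICA \
    --master-instance-name=REPLICA_INSTANCE_NAME \
    --region=R1_REGION --project=PROJECT_ID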

Cleaning up your environment

The easiest way to remove all the deployed resources is to run the following command in Cloud Shell:

deploystack uninstall

The above command will delete the associated resources so there will be no billable charges made afterwards.
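If you deployed by running Terraform directly rather than through deploystack, the standard Terraform cleanup applies instead:

terraform destroy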

Variables

  • postgres_user_password (string, required): Postgres user password.
  • prefix (string, required): Unique prefix used for resource names. Not used for the project if project_create is null.
  • project_id (string, required): Project ID; references an existing project if project_create is null.
  • data_eng_principals (list(string), default []): Principals with the Service Account Token Creator role on the service accounts, in IAM format; only users are supported on Cloud SQL, e.g. 'user@domain.com'.
  • network_config (object({…}), default null): Shared VPC network configuration to use. If null, networks will be created in the projects with preconfigured values.
  • postgres_database (string, default "guestbook"): Postgres database name.
  • project_create (object({…}), default null): Provide values if project creation is needed; an existing project is used if null. Parent is in 'folders/nnn' or 'organizations/nnn' format.
  • regions (map(string), default {…}): Map of instance_name => location where instances will be deployed.
  • service_encryption_keys (map(string), default null): Cloud KMS keys used to encrypt resources. Provide a key for each region configured.
  • sql_configuration (object({…}), default {…}): Cloud SQL configuration.

Outputs

  • bucket: Cloud Storage bucket to import/export data from Cloud SQL.
  • connection_names: Connection name of each instance.
  • demo_commands: Demo commands.
  • ips: IP address of each instance.
  • project_id: ID of the project containing all the instances.
  • service_accounts: Service accounts.
