
Autoscaler tool for Cloud Spanner


Set up the Autoscaler in Cloud Functions in a per-project deployment using Terraform
Home · Scaler component · Poller component · Forwarder component · Terraform configuration · Monitoring
Cloud Functions · Google Kubernetes Engine
Per-Project · Centralized · Distributed


Overview

This directory contains Terraform configuration files to quickly set up the infrastructure for your Autoscaler with a per-project deployment.

In this deployment option, all the components of the Autoscaler reside in the same project as your Spanner instances.

This deployment is ideal for independent teams who want to self-manage the infrastructure and configuration of their own Autoscalers. It is also a good entry point for testing the Autoscaler capabilities.

Architecture

(Diagram: per-project architecture)

For an explanation of the components of the Autoscaler and the interaction flow, please read the main Architecture section.

The per-project deployment has the following pros and cons:

Pros

  • Design: This option has the simplest design.
  • Configuration: The control over scheduler parameters belongs to the team that owns the Spanner instance, so the team has the highest degree of freedom to adapt the Autoscaler to its needs.
  • Infrastructure: This design establishes a clear boundary of responsibility and security over the Autoscaler infrastructure, because the team that owns the Spanner instances also owns the Autoscaler infrastructure.

Cons

  • Maintenance: With each team being responsible for its own Autoscaler configuration and infrastructure, it may become difficult to ensure that all Autoscalers across the company follow the same update guidelines.
  • Audit: Because of the high level of control by each team, a centralized audit may become more complex.

Before you begin

In this section you prepare your environment for the deployment.

  1. Open the Cloud Console

  2. Activate Cloud Shell
    At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the gcloud command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.

  3. In Cloud Shell, clone this repository

    git clone https://github.com/cloudspannerecosystem/autoscaler.git
  4. Export variables for the working directories

    export AUTOSCALER_DIR="$(pwd)/autoscaler/terraform/cloud-functions/per-project"

Preparing the Autoscaler Project

In this section you prepare your project for deployment.

  1. Go to the project selector page in the Cloud Console. Select or create a Cloud project.

  2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  3. In Cloud Shell, set environment variables with the ID of your autoscaler project:

    export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>
    gcloud config set project "${PROJECT_ID}"
  4. Choose the region and App Engine location where the Autoscaler infrastructure will be located.

    export REGION=us-central1
    export APP_ENGINE_LOCATION=us-central
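
    Note that App Engine location names can differ from the corresponding Cloud region names (for example, us-central rather than us-central1). If you are unsure which values are valid, you can list the available locations first:

    # List the locations where an App Engine app can be created.
    gcloud app regions list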
  5. Enable the required Cloud APIs

    gcloud services enable iam.googleapis.com \
      cloudresourcemanager.googleapis.com \
      appengine.googleapis.com \
      firestore.googleapis.com \
      spanner.googleapis.com \
      pubsub.googleapis.com \
      logging.googleapis.com \
      monitoring.googleapis.com \
      cloudfunctions.googleapis.com \
      cloudbuild.googleapis.com \
      cloudscheduler.googleapis.com
  6. Create a Google App Engine app to enable the APIs for Cloud Scheduler and Firestore

    gcloud app create --region="${APP_ENGINE_LOCATION}"
  7. The Autoscaler state can be stored in either Firestore or Cloud Spanner.

    If you want to use Firestore, update the database created with the Google App Engine app to use Firestore native mode.

    gcloud firestore databases update --type=firestore-native

    If you want to use Cloud Spanner, no action is needed at this point.
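
    If you chose Firestore, you can verify the database mode afterwards (a quick sanity check; the (default) database should report native mode):

    # List the Firestore databases in the project and their type.
    gcloud firestore databases list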

Deploying the Autoscaler

  1. Set the project ID and region in the corresponding Terraform environment variables

    export TF_VAR_project_id="${PROJECT_ID}"
    export TF_VAR_region="${REGION}"
  2. If you want to create a new Spanner instance for testing the Autoscaler, set the following variable. The Spanner instance that Terraform creates is named autoscale-test.

    export TF_VAR_terraform_spanner_test=true

    On the other hand, if you do not want to create a new Spanner instance because you already have an instance for the Autoscaler to monitor, set the name of your instance in the following variable

    export TF_VAR_spanner_name=<INSERT_YOUR_SPANNER_INSTANCE_NAME>

    For more information on how to have your Spanner instance managed by Terraform, see Importing your Spanner instances
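
    If you pointed the Autoscaler at an existing instance, you can confirm it is visible to gcloud before proceeding:

    # Describe the instance the Autoscaler will monitor.
    gcloud spanner instances describe "${TF_VAR_spanner_name}"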

  3. If you chose to store the state in Firestore, skip this step. If you want to store the state in Cloud Spanner and you don't yet have a Spanner instance for that purpose, set the following variable so that Terraform creates an instance for you named autoscale-test-state:

    export TF_VAR_terraform_spanner_state=true

    It is a best practice not to store the Autoscaler state in the same instance that is being monitored by the Autoscaler.

    Optionally, you can change the name of the instance that Terraform will create:

    export TF_VAR_spanner_state_name=<INSERT_STATE_SPANNER_INSTANCE_NAME>

    If you already have a Spanner instance where the state must be stored, set only the name of your instance:

    export TF_VAR_spanner_state_name=<INSERT_YOUR_STATE_SPANNER_INSTANCE_NAME>

    In your own instance, make sure you create the database spanner-autoscaler-state with the following table:

    CREATE TABLE spannerAutoscaler (
      id STRING(MAX),
      lastScalingTimestamp TIMESTAMP,
      createdOn TIMESTAMP,
      updatedOn TIMESTAMP,
      lastScalingCompleteTimestamp TIMESTAMP,
      scalingOperationId STRING(MAX),
      scalingRequestedSize INT64,
      scalingMethod STRING(MAX),
      scalingPreviousSize INT64,
    ) PRIMARY KEY (id)
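
    If you are creating the state database by hand, one way to apply this schema is with the gcloud CLI (a sketch, assuming the state instance already exists and its name is in TF_VAR_spanner_state_name):

    # Create the state database and apply the schema in one step.
    gcloud spanner databases create spanner-autoscaler-state \
      --instance="${TF_VAR_spanner_state_name}" \
      --ddl='CREATE TABLE spannerAutoscaler (
        id STRING(MAX),
        lastScalingTimestamp TIMESTAMP,
        createdOn TIMESTAMP,
        updatedOn TIMESTAMP,
        lastScalingCompleteTimestamp TIMESTAMP,
        scalingOperationId STRING(MAX),
        scalingRequestedSize INT64,
        scalingMethod STRING(MAX),
        scalingPreviousSize INT64) PRIMARY KEY (id)'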

    Note: If you are upgrading from v1.x, you need to add the five new columns to the Spanner schema using the following DDL statements

    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS lastScalingCompleteTimestamp TIMESTAMP;
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingOperationId STRING(MAX);
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingRequestedSize INT64;
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingMethod STRING(MAX);
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingPreviousSize INT64;

    Note: If you are upgrading from v2.0.x, you need to add the three new columns to the Spanner schema using the following DDL statements

    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingRequestedSize INT64;
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingMethod STRING(MAX);
    ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingPreviousSize INT64;
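
    The ALTER statements above can also be applied with the gcloud CLI (a sketch; substitute your state instance name, and adjust the statement list to the version you are upgrading from):

    # Add the columns introduced after v2.0.x to an existing state database.
    # Multiple statements are separated by semicolons in a single --ddl argument.
    gcloud spanner databases ddl update spanner-autoscaler-state \
      --instance="${TF_VAR_spanner_state_name}" \
      --ddl='ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingRequestedSize INT64;
             ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingMethod STRING(MAX);
             ALTER TABLE spannerAutoscaler ADD COLUMN IF NOT EXISTS scalingPreviousSize INT64'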

    For more information on how to have your existing Spanner instance managed by Terraform, see Importing your Spanner instances

  4. Change directory into the Terraform per-project directory and initialize it.

    cd "${AUTOSCALER_DIR}"
    terraform init
  5. Import the existing App Engine application into the Terraform state

    terraform import module.scheduler.google_app_engine_application.app "${PROJECT_ID}"
  6. Create the Autoscaler infrastructure. Answer yes when prompted, after reviewing the resources that Terraform intends to create.

    terraform apply -parallelism=2

If you are running this command in Cloud Shell and encounter errors of the form "Error: cannot assign requested address", this is a known issue in the Terraform Google provider; please retry with -parallelism=1
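
Once the apply completes, a quick way to confirm the components were deployed is to list the resources Terraform created in your project:

    # List the deployed Cloud Functions and Cloud Scheduler jobs
    # to confirm the apply succeeded.
    gcloud functions list --regions="${REGION}"
    gcloud scheduler jobs list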

Importing your Spanner instances

If you have existing Spanner instances that you want to import to be managed by Terraform, follow the instructions in this section.

  1. List your Spanner instances

    gcloud spanner instances list --format="table(name)"
  2. Set the following variable with the name of the instance that you want to import, taken from the output of the above command

    SPANNER_INSTANCE_NAME=<YOUR_SPANNER_INSTANCE_NAME>
  3. Create a Terraform config file with an empty google_spanner_instance resource

    echo "resource \"google_spanner_instance\" \"${SPANNER_INSTANCE_NAME}\" {}" > "${SPANNER_INSTANCE_NAME}.tf"
  4. Import the Spanner instance into the Terraform state.

    terraform import "google_spanner_instance.${SPANNER_INSTANCE_NAME}" "${SPANNER_INSTANCE_NAME}"
  5. After the import succeeds, update the Terraform config file for your instance with the actual instance attributes

    terraform state show -no-color "google_spanner_instance.${SPANNER_INSTANCE_NAME}" \
      | grep -vE "(id|num_nodes|state|timeouts).*(=|\{)" \
      > "${SPANNER_INSTANCE_NAME}.tf"

If you have additional Spanner instances to import, repeat this process.

Importing Spanner databases is also possible using the google_spanner_database resource and following a similar process.
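
A sketch of that process, mirroring the instance import above (the resource and database names here are placeholders):

    # Create an empty resource stub, then import the database into it.
    # The <instance>/<database> import ID format is accepted by the
    # google_spanner_database resource.
    echo 'resource "google_spanner_database" "my_database" {}' > my_database.tf
    terraform import google_spanner_database.my_database \
      "${SPANNER_INSTANCE_NAME}/my-database"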

Next steps

Your Autoscaler infrastructure is ready. Follow the instructions in the main page to configure your Autoscaler.