K3s on Packet

This is a Terraform project for deploying K3s on Packet.

This project configures your cluster with working BGP sessions and a global anycast IPv4 address, running on ARM devices.

This is intended to allow you to quickly spin up and tear down K3s clusters in edge locations.

Requirements

The only required variables are auth_token (your Packet API key), project_id (your Packet project ID), facility, and count (the number of ARM agent nodes in the cluster, not counting the controller, which is always set to 1--if you wish to run only the controller and its local node, set this value to 0).
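
For reference, a minimal terraform.tfvars might look like the following sketch (values shown are illustrative--confirm the exact variable names against vars.tf and terraform.tfvars.sample):

auth_token           = "YOUR_PACKET_API_KEY"
project_id           = "YOUR_PROJECT_UUID"
node_count           = 3 # agent nodes per cluster; 0 runs only the controller
plan_primary         = "baremetal_2a"
plan_node            = "baremetal_2a"
ssh_private_key_path = "~/.ssh/id_rsa"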

In addition to Terraform, your client machine (where Terraform will be run from) will need curl and jq available in order for all of the automation to run as expected.

You will need an SSH key associated with this project or your account. Set ssh_private_key_path to the path of that key's private identity file--this will only be used locally to assist Terraform in completing cluster bootstrapping (it is needed to retrieve the cluster node-token from the controller node).
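
That bootstrapping step amounts to roughly the following (a sketch of what the Terraform provisioning does--CONTROLLER_IP is a placeholder, and /var/lib/rancher/k3s/server/node-token is where K3s stores the token on the controller; the exact commands live in the module):

ssh -i "$SSH_PRIVATE_KEY_PATH" root@"$CONTROLLER_IP" \
  'cat /var/lib/rancher/k3s/server/node-token'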

BGP will need to be enabled for your project.
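
If BGP is not already enabled, it can be requested once per project via the Packet API--a sketch using curl (the local deployment type and private ASN below are common defaults, not values mandated by this project):

curl -s -X POST \
  -H "X-Auth-Token: $PACKET_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"deployment_type": "local", "asn": 65000}' \
  https://api.packet.net/projects/$PROJECT_ID/bgp-configs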

Clusters

Generating a Cluster Template

To ensure all your regions have standardized deployments, set count (number of nodes per cluster), plan_primary, and plan_node in your Terraform variables (as TF_VAR_varname environment variables or in terraform.tfvars). These values will apply to all clusters managed by this project.
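
For example, as environment variables (a sketch--the plan names are illustrative, baremetal_2a being Packet's ARM plan; confirm the variable names in vars.tf):

export TF_VAR_count=3
export TF_VAR_plan_primary=baremetal_2a
export TF_VAR_plan_node=baremetal_2a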

To add new clusters to a cluster pool, add the new facility to the facilities map:

variable "facilities" {
  type = "map"

  default = {
    newark  = "ewr1"
    narita  = "nrt1"
    sanjose = "sjc1"
  }
}

by adding a line such as:

...
	chicago = "ord1"
   }
}
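
Each entry in the map becomes its own set of resources. As a rough illustration of how Terraform 0.12's for_each fans out over the map (the resource and its arguments here are illustrative, not the cluster_pool module's actual internals):

resource "packet_device" "node" {
  # One device per facility in the map; each.key is the region name,
  # each.value is the facility code (e.g. "ewr1").
  for_each = var.facilities

  hostname         = "k3s-${each.key}"
  facilities       = [each.value]
  plan             = var.plan_node
  operating_system = "ubuntu_18_04"
  billing_cycle    = "hourly"
  project_id       = var.project_id
}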

Manually Defining a Cluster or Adding a New Cluster Pool

To create a cluster manually, instantiate a new cluster_pool module in 3-cluster-inventory.tf (this file is ignored by git--your initial cluster setup is in 2-clusters.tf, which is tracked):

module "manual_cluster" {
  source = "./modules/cluster_pool"

  cluster_name         = "manual_cluster"
  node_count           = "${var.node_count}"
  plan_primary         = "${var.plan_primary}"
  plan_node            = "${var.plan_node}"
  facilities           = "${var.facilities}"
  primary_facility     = "${var.primary_facility}"
  auth_token           = "${var.auth_token}"
  project_id           = "${var.project_id}"
  ssh_private_key_path = "${var.ssh_private_key_path}"
  anycast_ip           = "${packet_reserved_ip_block.anycast_ip.address}"
}

This creates a single-controller cluster with the configured number of agent nodes (node_count) for each facility in the facilities map.
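
After defining the module, the standard Terraform workflow applies:

terraform init
terraform plan
terraform apply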

Demo Project

In example/, there are files to configure and deploy a demo project that returns the IP of the cluster serving your request, demonstrating the use of Packet's Global IPv4 addresses to distribute traffic globally across your edge cluster deployments.

To run the project, first run the create_inventory.sh script to gather your cluster controller IPs into an inventory for Ansible, then run the deploy_demo playbook:

cd example/
sh create_inventory.sh
cd deploy_demo
ansible-playbook -i inventory.yaml main.yml

or manually copy example/deploy_demo/roles/demo/files/traefik.sh to your kubectl client machine and run it there to deploy Traefik and the application.
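
Once deployed, you can confirm that different vantage points are served by different clusters--a sketch, assuming the anycast address is exposed as a Terraform output named anycast_ip (check output.tf for the actual output name):

curl http://$(terraform output anycast_ip)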
