
Summarize EKS on K8 into github readme file #6545

Closed · 1 task
jefflbrauer opened this issue Sep 27, 2021 · 3 comments
Labels: DevOps · CMS team practice area · Platform CMS Team

Comments


jefflbrauer commented Sep 27, 2021

Background

As an engineer, I can read a clear summary of the EKS on K8s discovery findings and approach, per
https://app.zenhub.com/workspaces/vagov-cms-team-5c0e7b864b5806bc2bfc2087/issues/department-of-veterans-affairs/va.gov-cms/6355

Acceptance Criteria

  • A README file exists in the va.gov-team CMS documentation space summarizing the EKS on K8s discovery work.

olivereri commented Oct 6, 2021

Kubernetes Learning

Purpose

The purpose of this document is to summarize our thoughts on how an engineer new to Kubernetes might go about gaining some practical, useful experience with it, based on our recent experience doing just that. This document is tailored to engineers and assumes some background in DevOps, Linux system administration, or similar subjects.

Kubernetes is a container orchestration platform based on clustering, and its design decisions follow from that premise. As a result, it is critical to have a working understanding of networking (ideally including a casual familiarity with iptables and the Linux kernel's packet filter). But that's only a starting point: Kubernetes introduces a number of terms, concepts, and behaviors that may be new even to engineers with substantial experience in virtualization and containerization.
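
If you want to see this connection in action, a node running kube-proxy in its default iptables mode exposes every Service as a chain of NAT rules. A minimal, read-only sketch (per-Service chain names are hashed, so the one below is a placeholder, not a real chain):

```sh
# List the top-level chain kube-proxy maintains for Service cluster IPs:
sudo iptables -t nat -L KUBE-SERVICES -n

# Each Service gets its own KUBE-SVC-* chain; follow one to see the
# per-endpoint DNAT rules (the hash suffix here is a placeholder):
sudo iptables -t nat -L KUBE-SVC-EXAMPLEHASH1234 -n
```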

I believe the best way to grasp any sort of technology is to use it yourself; I suspect that's why we use "grasp" as a synonym for "understand," especially with regard to technology. So I think it's essential to dive right into Kubernetes by deploying a cluster, suffering through the inevitable complications and setbacks along the way, and maintaining and improving it as you learn. This can be done in a number of ways.

EKS is the foundation of our Kubernetes infrastructure, but I feel it is a suboptimal choice for learning. EKS was developed with an eye toward the pain points of Kubernetes: managing ingress and load balancing, deployment, upgrades and other maintenance of the cluster itself, and so forth. It's tightly integrated with AWS's innumerable other offerings, making it comparatively easy to add storage, set up persistent logging, manage TLS certificates, and so on.

EKS is a solid option, but the fact that it eliminates or eases these struggles ultimately presents a barrier to understanding. It also costs money; not much, comparatively speaking, and the resources can of course be created and destroyed on demand like any other AWS resource. But some challenges with Kubernetes present themselves only over time, and the expense accumulates.

I think it is preferable to avoid the cloud and instead deploy and maintain a cluster locally, either virtualized or on bare metal. Some distributions of Kubernetes can run and do useful work on two or three Raspberry Pis, on two or three virtual machines running in VirtualBox, or even within a couple of Docker containers. Where possible, I'd avoid options that aren't a 1:1 functional match for a standard Kubernetes distribution, but even those options provide significant and irreplaceable value.
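
As a concrete example of the "couple of Docker containers" route: kind (Kubernetes IN Docker, not listed among the distributions below but in the same family) runs each node as a container. A minimal sketch, assuming Docker and kind are already installed:

```sh
# cluster.yaml -- one control-plane node and two workers, each a Docker container
cat <<EOF > cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

kind create cluster --name learning --config cluster.yaml

# kind writes a context into ~/.kube/config, so kubectl works immediately:
kubectl get nodes -o wide

# Tear it down when you're done:
kind delete cluster --name learning
```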

We took two different approaches:

  • I (Nathan Douglas) maintain a homelab built around three servers running Proxmox VE, each in a distinct subnet, each standalone (as opposed to operating in a Proxmox cluster). I created four clusters, each with a node on each of the servers, one as the control plane and two as workers. The customary approach is to use VMs, but for various reasons I opted to use LXC containers... which posed some additional challenges. Two of the servers are limited to approximately one terabyte of storage, which they split between about twenty containers each; the third has about forty terabytes in a ZFS pool. Thus all nodes have access to a limited quantity of fast local storage and a substantial quantity of comparatively slow NFS storage, and some nodes have access to a substantial quantity of fairly fast local storage, all in dedicated per-host and per-cluster datasets (I might shift local volumes to zvols though). This structure reflects my interest/obsession/terror concerning storage, networking, resilience, backups, and other gritty details of cluster administration.

  • I (Eric Oliver) run MS Windows on my workstation. This presented an opportunity to run Windows Subsystem for Linux and install MicroK8s, a lightweight, simple Kubernetes (K8s) distribution. WSL plus MicroK8s appeared to be the quickest path to getting K8s up and running with minimal configuration and deployment steps. The end goal was to bring up a K8s environment quickly and install Argo CD, locally replicating the VSP-Operations team's production environment. This would provide a safe space to explore and break K8s and Argo CD, as well as to explore deployment patterns and create K8s-hosted applications. In the end, this worked right up until pods needed to communicate with each other. Unfortunately, I haven't figured out why every other kind of network communication works except inter-pod communication. I'm not sure whether the combination of WSL and MicroK8s is causing the issue, but I'd like to explore other options for running K8s locally. (A rough sketch of this setup follows this list.)
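
For reference, a rough sketch of the MicroK8s-plus-Argo-CD path described above, assuming a snap-capable Ubuntu environment (under WSL or otherwise) and using Argo CD's stock install manifest:

```sh
# Install MicroK8s and enable the basic add-ons:
sudo snap install microk8s --classic
microk8s enable dns storage

# Install Argo CD into its own namespace using the upstream manifest:
microk8s kubectl create namespace argocd
microk8s kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Sanity-check that the pods come up -- this is roughly where the
# inter-pod communication problem described above made itself known:
microk8s kubectl get pods -n argocd
```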

We documented our experiences with getting a cluster running in this GitHub issue.

The factors in your decision-making may (and likely will) differ from ours and lead to different choices and different avenues of investigation. This is a good thing. Kubernetes is large and complex, built on many different technologies at different levels, and developing rapidly. However, its sheer breadth should provide toeholds while you learn.

We've found these resources to be useful and worth checking out:

Videos:

Core Tools:

  • Argo CD - A GitOps tool (meaning that a Git repository is the source of truth) for continuous deployment.
  • Helm - A Kubernetes package manager. Probably the simplest to use, although some consider it an anti-pattern. Built around Go templating of the YAML manifest files; see the sketch after this list.
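
To make the Go-templating point concrete, here is a hypothetical chart fragment (all names and values are illustrative, not taken from our repositories). Helm fills in the {{ ... }} expressions from values.yaml before anything reaches the cluster:

```yaml
# values.yaml (hypothetical chart values)
replicaCount: 2
image:
  repository: nginx
  tag: "1.21"

# templates/deployment.yaml -- the {{ ... }} expressions are Go template
# directives, rendered from the values above at install time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm template .` from the chart directory renders this locally without touching a cluster, which is a handy way to see exactly what Helm would apply.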

Extra Tools:

  • k9s - Awesome terminal UI.
  • kubebox - Another terminal UI.
  • kube-shell - Basically code completion, but for kubectl.
  • Lens - A Kubernetes IDE.

Kubernetes, the Hard Way:

  • kubernetes-the-hard-way -- Kelsey Hightower's tutorial for bootstrapping a cluster entirely by hand, with no installers or managed services; the long way around, but excellent for understanding the moving parts.

Kubernetes Distributions:

  • k3s -- Lightweight Kubernetes distribution by Rancher. Runs on small systems, including Raspberry Pis, and ships with Traefik as its default Ingress Controller, but isn't a 1:1 match for a standard Kubernetes distribution.
  • microk8s -- Lightweight Kubernetes distribution by Canonical. Focuses on simplicity of installation over minimized resource usage; common add-ons (DNS, ingress, storage) can be enabled with single commands.
  • minikube -- A full-blown Kubernetes cluster that runs within a VM on your PC. Very well supported and documented, and even has kubectl built in and automatically configured. Quick-start installs for all three are sketched below.
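
For comparison, the advertised quick-start install for each, as documented upstream (verify any script before piping it into a shell):

```sh
# k3s: one-line install; runs the server as a systemd service
curl -sfL https://get.k3s.io | sh -

# microk8s: distributed as a snap
sudo snap install microk8s --classic

# minikube: starts a single-node cluster in a local VM or container
minikube start
```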

Just For Fun:

  • kubedoom -- A fun implementation of chaos engineering; test the resilience of your distributed system by killing demons (and thereby individual pods).

Links/References:


ndouglas commented Oct 6, 2021

Added a document about our architecture here.


olivereri commented Oct 6, 2021

Added the consolidated discovery work document from #6355 here.
