AWS Virtual Kubelet provides an extension to your Kubernetes cluster that can provision and maintain EC2 instances through regular Kubernetes operations. This enables the use of non-standard operating systems, such as macOS, in container ecosystems.
This expands the management capabilities of Kubernetes, enabling use cases such as native macOS application lifecycle control via standard Kubernetes tooling.
See Software Architecture for an overview of the code organization and general behavior. For detailed coverage of specific aspects of system/code behavior, see implemented RFCs.
- Virtual Kubelet (VK)
  - Upstream library/framework for implementing custom Kubernetes providers
- Virtual Kubelet Provider (VKP)
  - This EC2-based provider implementation (sometimes also referred to as virtual-kubelet or VK)
- Virtual Kubelet Virtual Machine (VKVM)
  - The virtual machine providing compute for this provider implementation (i.e. an Amazon EC2 instance)
- Virtual Kubelet Virtual Machine Agent (VKVMA)
  - The gRPC agent that exposes an API to manage workloads on EC2 instances (also VKVMAgent, or just Agent)
- `kubelet` → Virtual Kubelet library + this custom EC2 provider
- `node` → Elastic Network Interface (managed by VKP)
- `pod` → EC2 instance + VKVMAgent + custom workload
The following are required to build and deploy this project. Additional tools may be needed to utilize examples or set up a development environment.
Tested with Go v1.12, 1.16, and 1.17. See the Go documentation for installation steps.
Docker is a container virtualization runtime. See Get Started in the Docker documentation for setup steps.
This project uses the Go Project Layout pattern. A top-level `Makefile` provides the necessary build and utility functions. Run `make` by itself (or `make help`) to see a list of common targets.
- virtual-kubelet
  - Provides the Virtual Kubelet (VK) interface between this custom provider and Kubernetes
- node-cli
  - Abstracts the VK provider command interface into a separate, reusable project[^1]
Example files that require replacing placeholders with actual (environment-specific) data are copied to `./local` before modification. The `local` directory's contents are git-ignored, which prevents accidentally committing account numbers and other sensitive values to the GitHub repo.
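The copy-then-edit flow can be sketched as follows. To keep the snippet self-contained, the `printf` line creates a stand-in example file; in the actual repo the templates already exist under `examples/`:

```shell
# Sketch of the copy-then-edit workflow (stand-in file; real templates are in examples/)
mkdir -p examples local
printf 'region: AWS_REGION\n' > examples/config-map.yaml  # stand-in for the real example file
cp examples/config-map.yaml local/config-map.yaml         # edit the copy in ./local, never the original
```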
For local development and testing setup see DevSetup.md
- Run `kubectl apply -f deploy/vk-clusterrole_binding.yaml` to deploy the cluster role and binding.
The ConfigMap provides global and default VK/VKP configuration elements. Some of these settings may be overridden on a per-pod basis.
- Copy the provided examples/config-map.yaml to the `./local` dir and modify as needed. See Config for a detailed explanation of the various configuration options.
- Run `kubectl apply -f local/config-map.yaml` to deploy the config map.
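As an illustration, a trimmed ConfigMap might look like the sketch below. The metadata name and the `config.json` data key are assumptions here, not taken from this README, so follow examples/config-map.yaml for the authoritative layout:

```yaml
# Hypothetical sketch -- names and layout are assumptions; see examples/config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vk-config            # assumed name
  namespace: virtual-kubelet
data:
  config.json: |
    {
      "Region": "us-west-2",
      "ClusterName": "my-cluster"
    }
```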
This configuration will deploy a set of VK providers using the docker image built and pushed earlier.
- Copy the provided examples/vk-statefulset.yaml file to `./local`.
- Replace these placeholders in the `image:` reference with the values from your account/environment: `AWS_ACCOUNT_ID`, `AWS_REGION`, `DOCKER_TAG`.
- Run `kubectl apply -f local/vk-statefulset.yaml` to deploy the VK provider pods.
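One way to fill in those placeholders is with `sed`. The snippet below is a self-contained sketch: the `echo` line stands in for the `image:` line in examples/vk-statefulset.yaml, and all values are examples, not real account data:

```shell
# Example values -- substitute your own account ID, Region, and tag
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-west-2
DOCKER_TAG=v0.5.3-dev-amd64

# Stand-in for the image: line found in examples/vk-statefulset.yaml
echo "image: AWS_ACCOUNT_ID.dkr.ecr.AWS_REGION.amazonaws.com/aws-virtual-kubelet:DOCKER_TAG" > image-line.txt

# Replace each placeholder in place
sed -i -e "s/AWS_ACCOUNT_ID/${AWS_ACCOUNT_ID}/" \
       -e "s/AWS_REGION/${AWS_REGION}/" \
       -e "s/DOCKER_TAG/${DOCKER_TAG}/" image-line.txt
cat image-line.txt
```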
- Create an ECR repository in your AWS account:

      aws ecr create-repository --repository-name aws-virtual-kubelet
- In the Makefile, change `REGISTRY_ID` to your AWS account ID, `REGION` to your desired AWS Region, and `IMAGE_NAME` to the name of the created ECR repository.
- Build the virtual kubelet Docker image and push it to ECR:

      make push
- Create an S3 bucket:

      aws s3 mb s3://vk-bootstrap-agent

- Build the Bootstrap Agent and upload it to S3:

      cd examples/bootstrap_agent
      go build -o bootstrap_agent .
      aws s3 cp bootstrap_agent s3://vk-bootstrap-agent/bootstrap_agent
- Create the virtual-kubelet namespace. This will hold all relevant Kubernetes resources related to the virtual kubelet.

      kubectl create namespace virtual-kubelet
- Deploy a ConfigMap with the required Virtual Kubelet configuration. Fill in the values based on the Configuration section below. For a full example, see examples/config-map.yaml.

      kubectl apply -f examples/config-map.yaml
- Deploy the cluster role binding. First, update deploy/vk-clusterrole_binding.yaml by replacing `YOUR-IAM-ROLE-NAME-HERE` with the IAM role you will use to manage resources in the `virtual-kubelet` namespace. Then, deploy the cluster role binding:

      kubectl apply -f deploy/vk-clusterrole_binding.yaml
- Deploy the virtual kubelet stateful set. First, update deploy/vk-statefulset.yaml with an updated `image:` value based on the image registry location (shown in the docker push logs). Example: `<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/aws-virtual-kubelet:v0.5.3-2-g46b7568-dev-amd64`

      kubectl apply -f deploy/vk-statefulset.yaml
- Check the status of the stateful set to ensure the service is running:

      kubectl describe statefulset -n virtual-kubelet
      kubectl get pods -n virtual-kubelet
You should now be able to run workloads targeting the new EC2-based K8S nodes.
Create a configuration file (JSON) with the following keys and appropriate values. For a full example, see examples/config.json
- `ManagementSubnet`: Subnet in which you expect to deploy the Virtual Kubelet; an AWS ENI is created here to provide a unique location for the Kubernetes node IP address.
- `ClusterName`: Included for tagging purposes, to manage the AWS ENIs associated with the Virtual Kubelet.
- `Region`: Code for the AWS Region the Virtual Kubelet will be deployed to, e.g. `us-west-2` or `us-east-1`.
- `InitialSecurityGroups`: AWS Security Groups assigned to an EC2 instance at launch time; these can be updated later.
- `DefaultAMI`: AMI used when no AMI is specified in the PodSpec of a Kubernetes Pod.
- `S3Bucket`: S3 bucket where the bootstrap agent is located.
- `S3Key`: S3 key where the bootstrap agent is located.
- `GRPCPort`: Port number for gRPC communication between the Virtual Kubelet and the EC2 instances it creates.
- `InitData`: Base64-encoded JSON to be processed by the Bootstrap Agent. TODO need to know how this is used, and an example of its content.
- `DesiredCount`: Number of EC2 instances to be maintained in the WarmPool, above and beyond what is required to run Kubernetes Pods.
- `IamInstanceProfile`: The IAM instance profile assigned to the EC2 instance at launch time; this can be changed at Pod assignment time. It needs, at minimum, read access to the bootstrap agent in S3, `ec2:RunInstances`, `ec2:DescribeNetworkInterfaces`, `ec2:CreateNetworkInterface`, and `iam:PassRole` on itself, plus any application-specific AWS access for workloads running on the virtual node(s).
- `SecurityGroups`: The AWS Security Groups assigned to the EC2 instance at launch time; these can be changed at Pod assignment time.
- `KeyPair`: The EC2 key pair allowing SSH/RDP access to the instance. Unchangeable at Pod assignment time.
- `ImageID`: The AWS AMI to launch the EC2 instances with. Unchangeable at Pod assignment time.
- `InstanceType`: The AWS EC2 instance type, e.g. `mac1.metal`. Unchangeable at Pod assignment time.
- `Subnets`: The AWS VPC subnet(s) to deploy the WarmPool EC2 instances into. Unchangeable at Pod assignment time.
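Putting the keys above together, a flat config.json sketch might look like the following. This is illustrative only: every value is a placeholder, and the real file may group keys into sections, so defer to examples/config.json for the authoritative structure:

```json
{
  "ManagementSubnet": "subnet-0abc1234def567890",
  "ClusterName": "my-cluster",
  "Region": "us-west-2",
  "InitialSecurityGroups": ["sg-0abc1234def567890"],
  "DefaultAMI": "ami-0abc1234def567890",
  "S3Bucket": "vk-bootstrap-agent",
  "S3Key": "bootstrap_agent",
  "GRPCPort": 8200,
  "InitData": "eyJleGFtcGxlIjogdHJ1ZX0=",
  "DesiredCount": 2,
  "IamInstanceProfile": "vk-instance-profile",
  "SecurityGroups": ["sg-0abc1234def567890"],
  "KeyPair": "my-key-pair",
  "ImageID": "ami-0abc1234def567890",
  "InstanceType": "mac1.metal",
  "Subnets": ["subnet-0abc1234def567890"]
}
```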
TODO: add more FAQ items here as needed
This project serves as a translation and mediation layer between Kubernetes and EC2-based pods. It was created to run custom workloads directly on any EC2 instance type/size available via AWS (e.g. Mac instances).
See TODO
for steps to customize this project for your particular needs.
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.
TODO
TODO: add "article" and external reference links here