
Virtual Kubelet

Virtual Kubelet is an open source Kubernetes kubelet implementation that masquerades as a kubelet in order to connect Kubernetes to other APIs. This allows nodes to be backed by other services such as ACI, AWS Fargate, IoT Edge, etc. The primary scenario for Virtual Kubelet is extending the Kubernetes API into serverless container platforms like ACI and Fargate, though we are open to others. Note that Virtual Kubelet is explicitly not intended to be an alternative to Kubernetes federation.

Virtual Kubelet features a pluggable architecture and direct use of Kubernetes primitives, making it much easier to build on.

We invite the Kubernetes ecosystem to join us in empowering developers to build upon our base. Join our Slack channel, virtual-kubelet, in the Kubernetes Slack group.

The best description is "Kubernetes API on top, programmable back."


How It Works

The diagram below illustrates how Virtual-Kubelet works.



Virtual Kubelet is focused on providing a library that you can consume in your project to build a custom Kubernetes node agent.

See the godoc for up-to-date instructions on consuming this project.

There are implementations available for several providers (listed below); see those repositories for details on how to deploy.

Current Features

  • create, delete and update pods
  • container logs, exec, and metrics
  • get pod, pods and pod status
  • capacity
  • node addresses, node capacity, node daemon endpoints
  • operating system
  • bring your own virtual network


This project features a pluggable provider interface developers can implement that defines the actions of a typical kubelet.

This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without having VM infrastructure to manage and while still leveraging the portable Kubernetes API.

Each provider may have its own configuration file and required environment variables.

Providers must provide the following functionality to be considered a supported integration with Virtual Kubelet.

  1. Provides the back-end plumbing necessary to support the lifecycle management of pods, containers and supporting resources in the context of Kubernetes.
  2. Conforms to the current API provided by Virtual Kubelet.
  3. Does not have access to the Kubernetes API Server and has a well-defined callback mechanism for getting data like secrets or configmaps.

Alibaba Cloud ECI Provider

Alibaba Cloud ECI (Elastic Container Instance) is a service that allows you to run containers without having to manage servers or clusters.

You can find more details in the Alibaba Cloud ECI provider documentation.

Configuration File

The Alibaba ECI provider reads the configuration file specified by the --provider-config flag.

An example configuration file is in the ECI provider repository.

Azure Container Instances Provider

The Azure Container Instances Provider allows you to utilize both typical pods on VMs and Azure Container Instances simultaneously in the same Kubernetes cluster.

You can find detailed instructions on how to set it up and how to test it in the Azure Container Instances Provider documentation.

Configuration File

The Azure connector can use a configuration file specified by the --provider-config flag. The config file is in TOML format, and an example lives in providers/azure/example.toml.

AWS Fargate Provider

AWS Fargate is a technology that allows you to run containers without having to manage servers or clusters.

The AWS Fargate provider allows you to deploy pods to AWS Fargate. Your pods on AWS Fargate have access to VPC networking with dedicated ENIs in your subnets, public IP addresses to connect to the internet, private IP addresses to connect to your Kubernetes cluster, security groups, IAM roles, CloudWatch Logs and many other AWS services. Pods on Fargate can co-exist with pods on regular worker nodes in the same Kubernetes cluster.

Easy instructions and a sample configuration file are available in the AWS Fargate provider documentation. Please note that this provider is not currently supported.

HashiCorp Nomad Provider

The HashiCorp Nomad provider for Virtual Kubelet connects your Kubernetes cluster with a Nomad cluster by exposing the Nomad cluster as a node in Kubernetes. Pods that are scheduled on the virtual Nomad node registered in Kubernetes will run as jobs on Nomad clients, just as they would on a Kubernetes node.

For detailed instructions, follow the guide here.

OpenStack Zun Provider

The OpenStack Zun provider for Virtual Kubelet connects your Kubernetes cluster with OpenStack in order to run Kubernetes pods on OpenStack Cloud. Your pods on OpenStack have access to OpenStack tenant networks because they have Neutron ports in your subnets. Each pod will have private IP addresses to connect to other OpenStack resources (e.g. VMs) within your tenant, can optionally have floating IP addresses to connect to the internet, and can bind-mount Cinder volumes into a path inside a pod's container.

./bin/virtual-kubelet --provider="openstack"

For detailed instructions, follow the guide here.

Adding a New Provider via the Provider Interface

Providers consume this project as a library which implements the core logic of a Kubernetes node agent (Kubelet), and wire up their implementation for performing the necessary actions.

There are 3 main interfaces: PodLifecycleHandler, PodNotifier, and NodeProvider.

PodLifecycleHandler

When pods are created, updated, or deleted in Kubernetes, these methods are called to handle those actions.


type PodLifecycleHandler interface {
    // CreatePod takes a Kubernetes Pod and deploys it within the provider.
    CreatePod(ctx context.Context, pod *corev1.Pod) error

    // UpdatePod takes a Kubernetes Pod and updates it within the provider.
    UpdatePod(ctx context.Context, pod *corev1.Pod) error

    // DeletePod takes a Kubernetes Pod and deletes it from the provider.
    DeletePod(ctx context.Context, pod *corev1.Pod) error

    // GetPod retrieves a pod by name from the provider (can be cached).
    GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error)

    // GetPodStatus retrieves the status of a pod by name from the provider.
    GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error)

    // GetPods retrieves a list of all pods running on the provider (can be cached).
    GetPods(context.Context) ([]*corev1.Pod, error)
}

There is also an optional interface PodNotifier which enables the provider to asynchronously notify the virtual-kubelet about pod status changes. If this interface is not implemented, virtual-kubelet will periodically check the status of all pods.

It is highly recommended to implement PodNotifier, especially if you plan to run a large number of pods.


type PodNotifier interface {
    // NotifyPods instructs the notifier to call the passed in function when
    // the pod status changes.
    // NotifyPods should not block callers.
    NotifyPods(context.Context, func(*corev1.Pod))
}

PodLifecycleHandler is consumed by the PodController, which is the core logic for managing pods assigned to the node.

	pc, _ := node.NewPodController(podControllerConfig) // <-- instantiates the pod controller
	pc.Run(ctx) // <-- starts watching for pods to be scheduled on the node


NodeProvider is responsible for notifying the virtual-kubelet about node status updates. Virtual-Kubelet will periodically check the status of the node and update Kubernetes accordingly.


type NodeProvider interface {
    // Ping checks if the node is still active.
    // This is intended to be lightweight as it will be called periodically as a
    // heartbeat to keep the node marked as ready in Kubernetes.
    Ping(context.Context) error

    // NotifyNodeStatus is used to asynchronously monitor the node.
    // The passed in callback should be called any time there is a change to the
    // node's status.
    // This will generally trigger a call to the Kubernetes API server to update
    // the status.
    // NotifyNodeStatus should not block callers.
    NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node))
}

Virtual Kubelet provides a NaiveNodeProvider that you can use if you do not plan to have custom node behavior.


NodeProvider is consumed by the NodeController, which is the core logic for managing the node object in Kubernetes.

	nc, _ := node.NewNodeController(nodeProvider, nodeSpec) // <-- instantiate a node controller from a node provider and a kubernetes node spec
	nc.Run(ctx) // <-- creates the node in kubernetes and starts up the controller

API endpoints

One of the roles of a Kubelet is to accept requests from the API server for things like kubectl logs and kubectl exec. Helpers for setting this up are provided here.


Unit tests

Running the unit tests locally is as simple as make test.

End-to-end tests

Check out test/e2e for more details.

Known quirks and workarounds

Missing Load Balancer IP addresses for services

Providers that do not support service discovery

Kubernetes 1.9 introduces a new flag, ServiceNodeExclusion, for the control plane's Controller Manager. Enabling this flag in the Controller Manager's manifest allows Kubernetes to exclude Virtual Kubelet nodes from being added to Load Balancer pools, allowing you to create public facing services with external IPs without issue.


Cluster requirements: Kubernetes 1.9 or above

Enable the ServiceNodeExclusion flag, by modifying the Controller Manager manifest and adding --feature-gates=ServiceNodeExclusion=true to the command line arguments.
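On a control plane that runs the Controller Manager as a static pod, that typically means editing its manifest; the path and surrounding fields below are typical but vary by distribution:

```yaml
# e.g. /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --feature-gates=ServiceNodeExclusion=true
    # ...existing flags unchanged
```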


Virtual Kubelet follows the CNCF Code of Conduct. Sign the CNCF CLA to be able to make Pull Requests to this repo.

Bi-weekly Virtual Kubelet Architecture meetings are held at 11am PST every other Wednesday in this zoom meeting room. Check out the calendar here.

Our Google Drive with design specifications and meeting notes is here.

We also have a community slack channel named virtual-kubelet in the Kubernetes slack.
