Note: This repository should be imported as
To see how Silk is used inside of Cloud Foundry, look at the CF Networking Release.
Silk has three components:
- `silk-controller` runs on at least one central node and manages IP subnet lease allocation across the cluster. It is implemented as a stateless HTTP JSON API backed by a SQL database.
- `silk-daemon` runs on each host and acquires and renews the host's subnet lease by calling the `silk-controller` API. It also exposes an HTTP JSON API endpoint that serves the subnet lease information and doubles as a health check.
- `silk-cni` is a short-lived program, executed by the container runner, that sets up the network stack for a particular container. Before doing so, it calls the `silk-daemon` API to check the daemon's health and retrieve the host's subnet information.
The Silk dataplane is a virtual L3 overlay network. Each container host is assigned a unique IP address range, and each container gets a unique IP from that range.
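To make the address arithmetic concrete, the sketch below carves fixed-size per-host subnets out of an overlay range. The overlay CIDR and prefix length are illustrative defaults, and Silk's controller tracks leases in its SQL database rather than computing them positionally like this.

```go
package main

import (
	"fmt"
	"net"
)

// nthSubnet returns the nth fixed-size subnet inside an overlay range,
// e.g. the /24 that host n would lease out of a /16 overlay.
func nthSubnet(overlay string, newPrefix int, n int) (*net.IPNet, error) {
	_, overlayNet, err := net.ParseCIDR(overlay)
	if err != nil {
		return nil, err
	}
	ip := overlayNet.IP.To4()
	base := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	size := uint32(1) << (32 - newPrefix) // addresses per per-host subnet
	addr := base + uint32(n)*size
	out := net.IPv4(byte(addr>>24), byte(addr>>16), byte(addr>>8), byte(addr))
	return &net.IPNet{IP: out, Mask: net.CIDRMask(newPrefix, 32)}, nil
}

func main() {
	// With a 10.255.0.0/16 overlay and /24 per-host leases (illustrative
	// values), host 3 would own 10.255.3.0/24.
	subnet, err := nthSubnet("10.255.0.0/16", 24, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(subnet) // 10.255.3.0/24
}
```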
The virtual network is constructed from three primitives:
- Every host runs one virtual L3 router (via Linux routing).
- Each container on a host is connected to the host's virtual router via a dedicated virtual L2 segment, one segment per container (point-to-point over a virtual ethernet pair).
- A single shared VXLAN segment connects all of the virtual L3 routers.
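The three primitives can be approximated by hand with iproute2. These commands are illustrative only (device names, namespace names, and IPs are made up, and they require root); Silk performs the equivalent steps programmatically.

```shell
# 1. The host's "virtual L3 router" is the Linux routing table:
#    this host's leased subnet is local, peers' subnets go via the overlay.
ip route add 10.255.3.0/24 dev silk-vtep                  # hypothetical local lease
ip route add 10.255.7.0/24 via 10.255.7.0 dev silk-vtep   # a peer host's lease

# 2. One point-to-point veth pair per container: a two-ended L2 segment,
#    with the container end moved into the container's network namespace.
ip link add c-host type veth peer name c-cont
ip link set c-cont netns container-ns
ip route add 10.255.3.5/32 dev c-host   # /32 route to the container's IP

# 3. One shared VXLAN device connecting every host's virtual router.
ip link add silk-vtep type vxlan id 1 dev eth0 dstport 4789
```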
Although the shared VXLAN network carries L2 frames, containers are not connected to it directly. They only access the VXLAN segment via their host's virtual L3 router. Therefore, from a container's point of view, the container-to-container network carries L3 packets, not L2.
To provide multi-tenant network policy on top of this connectivity fabric, Cloud Foundry uses the VXLAN Group-Based Policy (GBP) extension to tag egress packets with a policy identifier. Other network policy enforcement schemes are also possible.
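As a sketch of the mechanism (device and address values here are made up): iproute2 can create a GBP-capable VTEP, and the Linux kernel copies the packet mark into the VXLAN-GBP policy ID, so egress packets can be tagged per tenant with an iptables mark rule.

```shell
# Create a VTEP with the GBP extension enabled (illustrative names).
ip link add silk-vtep type vxlan id 1 dev eth0 dstport 4789 gbp

# Tag egress traffic from one container's IP with a tenant policy ID;
# the mark is carried in the VXLAN-GBP header to the destination host.
iptables -t mangle -A POSTROUTING -s 10.255.3.5 -j MARK --set-mark 0x0001
```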