73 changes: 37 additions & 36 deletions docs/container/readme.md
@@ -9,6 +9,7 @@ Storage module is available on zbus over the following channel
| container|[container](#interface)| 0.0.1|

## Home Directory

contd keeps some data in the following locations
| directory | path|
|----|---|
@@ -44,52 +45,52 @@ type ContainerID string

// NetworkInfo defines a network configuration for a container
type NetworkInfo struct {
// Currently a container can only join one (and only one)
// network namespace that has to be pre defined on the node
// for the container tenant

// Containers don't need to know about anything about bridges,
// IPs, wireguards since this is all is only known by the network
// resource which is out of the scope of this module
Namespace string
}

// MountInfo defines a mount point
type MountInfo struct {
Source string // source of the mount point on the host
Target string // target of mount inside the container
Type string // mount type
Options []string // mount options
}

//Container creation info
type Container struct {
// Name of container
Name string
// path to the rootfs of the container
RootFS string
// Env env variables to container in format {'KEY=VALUE', 'KEY2=VALUE2'}
Env []string
// Network network info for container
Network NetworkInfo
// Mounts extra mounts for container
Mounts []MountInfo
// Entrypoint the process to start inside the container
Entrypoint string
// Interactivity enable Core X as PID 1 on the container
Interactive bool
}

// ContainerModule defines rpc interface to containerd
type ContainerModule interface {
// Run creates and starts a container on the node. It also auto
// starts command defined by `entrypoint` inside the container
// ns: tenant namespace
// data: Container info
Run(ns string, data Container) (ContainerID, error)

// Inspect, return information about the container, given its container id
Inspect(ns string, id ContainerID) (Container, error)
Delete(ns string, id ContainerID) error
}
```
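For illustration, here is a minimal, hypothetical sketch of how a caller holding some implementation of `ContainerModule` (for example a zbus client stub) could drive the interface above. The tenant namespace, paths and entrypoint are made-up values, not defaults of the module.

```go
// example shows one possible call sequence against the ContainerModule interface.
func example(client ContainerModule) error {
	c := Container{
		Name:       "demo",
		RootFS:     "/var/cache/containers/demo-rootfs", // hypothetical rootfs path
		Env:        []string{"KEY=VALUE", "KEY2=VALUE2"},
		Network:    NetworkInfo{Namespace: "demo-net-ns"}, // pre-defined network namespace
		Mounts:     []MountInfo{{Source: "/data/demo", Target: "/data", Type: "bind", Options: []string{"rbind", "rw"}}},
		Entrypoint: "/usr/bin/my-service",
	}

	// Create and start the container inside the tenant namespace.
	id, err := client.Run("demo-tenant", c)
	if err != nil {
		return err
	}

	// Read back the container info, then remove the container again.
	if _, err := client.Inspect("demo-tenant", id); err != nil {
		return err
	}
	return client.Delete("demo-tenant", id)
}
```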
Binary file added docs/network/HIDDEN-PUBLIC.dia
Binary file not shown.
Binary file added docs/network/HIDDEN-PUBLIC.png
Binary file added docs/network/NR_layout.dia
Binary file not shown.
Binary file added docs/network/NR_layout.png
File renamed without changes.
File renamed without changes.
54 changes: 54 additions & 0 deletions docs/network/attic/zostst.dhcp
@@ -0,0 +1,54 @@
#!/usr/bin/bash
# Generate static DHCP 'config host' stanzas for the test rack: one entry per
# management NIC (10.5.0.11 and up) and per IPMI NIC (10.5.0.101 and up),
# and finally print the ln commands that link each management MAC to the shared config.

mgmtnic=(
0c:c4:7a:51:e3:6a
0c:c4:7a:51:e9:e6
0c:c4:7a:51:ea:18
0c:c4:7a:51:e3:78
0c:c4:7a:51:e7:f8
0c:c4:7a:51:e8:ba
0c:c4:7a:51:e8:0c
0c:c4:7a:51:e7:fa
)

ipminic=(
0c:c4:7a:4c:f3:b6
0c:c4:7a:4d:02:8c
0c:c4:7a:4d:02:91
0c:c4:7a:4d:02:62
0c:c4:7a:4c:f3:7e
0c:c4:7a:4d:02:98
0c:c4:7a:4d:02:19
0c:c4:7a:4c:f2:e0
)
cnt=1
for i in ${mgmtnic[*]} ; do
cat << EOF
config host
option name 'zosv2tst-${cnt}'
option dns '1'
option mac '${i}'
option ip '10.5.0.$((${cnt} + 10))'

EOF
let cnt++
done



cnt=1
for i in ${ipminic[*]} ; do
cat << EOF
config host
option name 'ipmiv2tst-${cnt}'
option dns '1'
option mac '${i}'
option ip '10.5.0.$((${cnt} + 100))'

EOF
let cnt++
done

# Print the symlink commands that map each management MAC to the shared config file.
for i in ${mgmtnic[*]} ; do
echo ln -s zoststconf 01-$(echo $i | sed s/:/-/g)
done
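For reference, the first management MAC produces the following `config host` stanza from the first loop, and the following `ln` command from the last loop (both derived directly from the script):

```
config host
option name 'zosv2tst-1'
option dns '1'
option mac '0c:c4:7a:51:e3:6a'
option ip '10.5.0.11'

ln -s zoststconf 01-0c-c4-7a-51-e3-6a
```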
22 changes: 22 additions & 0 deletions docs/network/definitions.md
@@ -0,0 +1,22 @@
# Definition of words used throughout the documentation

## Node

TL;DR: Computer.
A Node is a computer with CPU, memory and disks (or SSDs, NVMe) connected to _a_ network that has Internet access (i.e. it can reach www.google.com, just like you can on your phone at home).
Once it has received an IP address (IPv4 or IPv6), that Node will register itself when it is new, or confirm its identity and its online-ness (for lack of a better word).

## TNo: Tenant Network object. [The gory details here](https://github.com/threefoldtech/zos/blob/master/modules/network.go)

TL;DR: The Network Description.
We named it so because it is a data structure that describes the __whole__ network a user can request (or set up).
That network is a virtualized overlay network.
Basically that means that data transfer in that network is *always* encrypted, protected from prying eyes, and that __resources in that network can only communicate with each other__ **unless** there is a special rule that allows access, be it through firewall rules *and/or* through a proxy (a service that forwards requests on behalf of a client and ships the replies back to it).

## NR: Network Resource

TL;DR: the Node-local part of a TNo.
The main building block of a TNo; i.e. each service of a user in a Node lives in an NR.
Each Node hosts user services, whatever type of service that is, and every service in that specific Node is always solely part of the tenant's network (read that twice).
So: a Network Resource is the thing that interconnects all other network resources of the TN (Tenant Network), and provides routing/firewalling for these interconnects, including the default route to the BBI (Big Bad Internet), aka the ExitPoint.
All user services that run in a Node are in some way or another connected to the Network Resource (NR), which provides IP packet forwarding and firewalling to all other network resources (including the ExitPoint) of the user's TN (Tenant Network). (Read that three times, and the last time, read it slowly and out loud.)
72 changes: 72 additions & 0 deletions docs/network/introduction.md
@@ -0,0 +1,72 @@
# Introduction to networkd, the network manager of 0-OS

## Boot and initial setup

At boot, be it from a USB stick or via PXE, ZOS starts the kernel with a few necessary parameters, like the farm ID and possibly network parameters. Once the kernel has started, [zinit](https://github.com/threefoldtech/zinit), among other things, starts the network initializer.

In short, that process loops over the available network interfaces and tries to obtain an IP address that also provides a default gateway. In other words: it tries to get Internet connectivity. Without it, ZOS stops there: unable to register itself or start other processes, there would be no point in continuing.

Once it has obtained Internet connectivity, ZOS can proceed to make itself known to the Grid and announce its existence. It then regularly polls the Grid for tasks.

Once initialized, with the network daemon running (the process that handles everything related to networking), ZOS sets up some basic services so that workloads can use that network themselves.

## Networkd functionality

The network daemon is itself responsible for only a few tasks. Working together with the [provision daemon](../provision), it mainly sets up the local infrastructure for the user's network resources, together with the WireGuard configurations for the user's mesh network.

The WireGuard mesh is an overlay network. That means that traffic on that network is encrypted and encapsulated in a new traffic frame that then gets transferred over the underlay network, in essence the network that was set up during boot of the node.

For users or workloads running on top of the mesh, the mesh network looks and behaves like any directly connected network, so a workload can reach other workloads or services in that mesh, with the added advantage that the traffic is encrypted, protecting services and communications over that mesh from overly curious eyes.

That also means that traffic between workloads on nodes in a farmer's local network is protected even from the farmer himself, in essence protecting the user in case the farmer becomes too curious.

As the nodes cannot be accessed in any way, be it over the underlay network or even the node's local console, a user can be sure that his workload cannot be snooped upon.

## Techie talk

- **boot and initial setup**
For ZOS to work at all (the network is the computer), it needs an internet connection. That is: it needs to be able to communicate with the BCDB over the internet.
So ZOS starts with exactly that: the `internet` process, which tries to get the node an IP address. That process sets up a bridge (`zos`), connected to an interface that is on an Internet-capable network, and that bridge gets an IP address with Internet access.
Also, that bridge is there for future public interfaces into workloads.
Once ZOS can reach the Internet, the rest of the system can be started, where ultimately, the `networkd` daemon is started.

- **networkd initial setup**
`networkd` starts by taking inventory of the available network interfaces and registers them to the BCDB (grid database), so that farmers can specify non-standard configurations, for instance for multi-NIC machines. Once that is done, `networkd` registers itself on zbus, so it can receive tasks to execute from the provisioning daemon (`provisiond`).
These tasks are mostly setting up network resources for users, where a network resource is a subnet in the user's wireguard mesh.

- **multi-nic setups**

For a farmer running nodes in a datacentre where the nodes have multiple NICs, it is advisable (though not necessary) to separate OOB traffic (like the initial boot setup) from user traffic (both the overlay network and the outgoing IPv4 NAT for nodes) by putting them on different NICs. With such a setup, the farm admin has to make sure the switches are properly configured; more on that in later docs.

- **registering and configurations**

Once a node has booted and properly initialized, registering and configuring it so that it can accept workloads and their associated network configs is a two-step process.
First, the node registers its live network setup to the BCDB. That is: all NICs with their associated IP addresses and routes are registered, so that a farm admin can, in a second phase, configure separate NICs to handle different kinds of workloads.
In that second phase, the farm admin can then set up the NICs and their associated IPs manually, so that workloads can start using them (a purely illustrative sketch of the per-NIC data involved follows below).
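Purely as an illustration of the kind of per-NIC data involved in that first phase (the type and field names below are assumptions made for this sketch, not the actual zos or BCDB schema):

```go
// IfaceInfo is a hypothetical shape for what a node could report per NIC
// during the first registration phase.
type IfaceInfo struct {
	Name    string   // interface name, e.g. "eth0"
	MAC     string   // hardware address
	Addrs   []string // IPv4/IPv6 addresses currently configured on the interface
	Gateway []string // default gateways reachable through this interface
}
```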

## Wireguard explanations

- **wireguard as point-to-point links and what that means**
WireGuard is a special type of VPN, where every instance is both a server for multiple peers and a client towards multiple peers. That way you can create fanning-out connections as well as receive connections from multiple peers, effectively creating a mesh of connections, like this: ![like so](HIDDEN-PUBLIC.png)

- **wireguard port management**
Every WireGuard endpoint (a network resource endpoint) needs a destination/port combination when it is publicly reachable. The destination is a public IP, but the port is the differentiator. So we need to make sure that every WireGuard listening port is unique within the node where it runs, and that it can be reapplied in case of a node reboot.
ZOS registers the ports **already in use** to the BCDB, so a user can then pick a port that is not yet used (a minimal sketch of that bookkeeping follows after this list).

- **wireguard and hidden nodes**
Hidden nodes are nodes that are in essence hidden behind a firewall on an internal network and unreachable from the Internet, be it as an IPv4 NATed host or as an IPv6 host that is firewalled in some way, so that connection initiations from the Internet to the node are impossible.
As such, these nodes can only participate in a network as clients towards publicly reachable peers, and have to initiate the connections themselves (see the previous drawing).
To make sure connectivity stays up, all clients keep a keepalive towards all their peers so that communication towards network resources in hidden nodes can be established (an illustrative peer configuration follows after this list).
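As a sketch of what the port bookkeeping mentioned above amounts to, assuming the set of ports already registered in BCDB has been fetched into a map (the port range below is arbitrary, not the one ZOS uses):

```go
import "errors"

// pickWireguardPort returns a listening port that is not yet used on this node.
// "used" stands for the set of ports the node has already registered in BCDB.
func pickWireguardPort(used map[uint16]bool) (uint16, error) {
	for p := uint16(2000); p < 9000; p++ {
		if !used[p] {
			return p, nil
		}
	}
	return 0, errors.New("no free wireguard listening port left")
}
```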
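And for the hidden-node case, a hypothetical WireGuard peer stanza on a hidden node towards a publicly reachable peer could look roughly like this; the keys, addresses, port and prefix are placeholders, not values ZOS actually generates:

```
[Interface]
PrivateKey = <hidden node NR private key>
# no ListenPort: the hidden node only initiates connections

[Peer]
PublicKey = <public peer NR public key>
Endpoint = <public peer IP>:<its registered listening port>
AllowedIPs = 10.255.0.0/16
PersistentKeepalive = 25
```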

## Caveats

- **hidden nodes**
Hidden nodes live (mostly) behind firewalls that keep state about connections, and these states have a lifetime. We do our best to keep these communications going, but depending on the firewall your mileage may vary (YMMV ;-)).

- **local underlay network reachability**
When multiple nodes live in the same hidden network, at the moment we do not try to have the nodes establish connectivity between themselves, so all nodes in that hidden network can only reach each other through the intermediary of a node that is publicly reachable. To get reasonable performance, a farmer will therefore have to have properly routable nodes available in the vicinity.
So for now, a farmer is better off having his nodes really reachable over a public network.

- **IPv6 and IPv4 considerations**
While the mesh can work over IPv4 __and__ IPv6 at the same time, a given peer can only be reached over one protocol at a time. That is, a peer is IPv4 __or__ IPv6, not both. Hence, if a peer is reachable over IPv4, a client towards that peer needs to reach it over IPv4 too, and thus needs an IPv4 address.
We strongly advise having all nodes properly set up on a routable, unfirewalled IPv6 network, so that these problems have no reason to exist.