docs: create v0.2 docs and add note about specifying TALOS_VERSION
This PR adds the 0.2 docs, as well as mentions TALOS_VERSION in those docs, to try and avoid some user confusion

Signed-off-by: Spencer Smith <robertspencersmith@gmail.com>
rsmitty authored and talos-bot committed Apr 9, 2021
1 parent 9322532 commit a20fcf9
Showing 15 changed files with 1,084 additions and 1 deletion.
78 changes: 78 additions & 0 deletions docs/website/content/docs/v0.2/Configuration/environments.md
@@ -0,0 +1,78 @@
---
description: ""
weight: 1
---

# Environments

Environments are a custom resource provided by the Metal Controller Manager.
An environment is a codified description of what should be returned by the PXE server when a physical server attempts to PXE boot.

The kernel args are an especially important part of an environment.
From here, one can tweak the IP of the metadata server as well as various other kernel options that [Talos](https://www.talos.dev/docs/v0.8/introduction/getting-started/#kernel-parameters) and/or the Linux kernel support.

Environments can be supplied to a given server either at the Server or the ServerClass level.
The hierarchy from most to least respected is:

- `.spec.environmentRef` provided at `Server` level
- `.spec.environmentRef` provided at `ServerClass` level
- `"default"` `Environment` created by administrator

A sample environment definition looks like this:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Environment
metadata:
  name: default
spec:
  kernel:
    url: "https://github.com/talos-systems/talos/releases/download/v0.8.1/vmlinuz-amd64"
    sha512: ""
    args:
      - init_on_alloc=1
      - init_on_free=1
      - slab_nomerge
      - pti=on
      - consoleblank=0
      - random.trust_cpu=on
      - ima_template=ima-ng
      - ima_appraise=fix
      - ima_hash=sha512
      - console=tty0
      - console=ttyS1,115200n8
      - earlyprintk=ttyS1,115200n8
      - panic=0
      - printk.devkmsg=on
      - talos.platform=metal
      - talos.config=http://$PUBLIC_IP:9091/configdata?uuid=
  initrd:
    url: "https://github.com/talos-systems/talos/releases/download/v0.8.1/initramfs-amd64.xz"
    sha512: ""
```

Example of overriding `"default"` `Environment` at the `Server` level:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
...
spec:
  environmentRef:
    namespace: default
    name: boot
...
```

Example of overriding `"default"` `Environment` at the `ServerClass` level:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
...
spec:
  environmentRef:
    namespace: default
    name: boot
...
```
30 changes: 30 additions & 0 deletions docs/website/content/docs/v0.2/Configuration/metadata.md
@@ -0,0 +1,30 @@
---
description: ""
weight: 4
---

# Metadata

The Metadata server manages the Machine metadata.
In terms of Talos (the OS on which the Kubernetes cluster is formed), this is the
"[machine config](https://www.talos.dev/docs/v0.8/reference/configuration/)",
which is used during the automated installation.

## Talos Machine Configuration

The configuration of each machine is constructed from a number of sources:

- The Talos bootstrap provider.
- The `Cluster` of which the `Machine` is a member.
- The `ServerClass` which was used to select the `Server` into the `Cluster`.
- Any `Server`-specific patches.

The base template is constructed from the Talos bootstrap provider, using data from the associated `Cluster` manifest.
Then, any configuration patches are applied from the `ServerClass` and `Server`.

Only configuration patches are allowed in the `ServerClass` and `Server` resources.
These patches take the form of an [RFC 6902](https://tools.ietf.org/html/rfc6902) JSON (or YAML) patch.
An example of the use of this patch method can be found in [Patching Guide](../../guides/patching/).
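For instance, a minimal sketch of one such patch on a `Server` (the hostname path and value here are illustrative, not taken from this commit):

```yaml
spec:
  configPatches:
    # An RFC 6902 "add" operation expressed in YAML.
    - op: add
      path: /machine/network/hostname
      value: worker-1
```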

Also note that while a `Server` can be a member of any number of `ServerClass`es, only the `ServerClass` which is used to select the `Server` into the `Cluster` will be used for the generation of the configuration of the `Machine`.
In this way, `Servers` may have a number of different configuration patch sets based on which `Cluster` they are in at any given time.
33 changes: 33 additions & 0 deletions docs/website/content/docs/v0.2/Configuration/serverclasses.md
@@ -0,0 +1,33 @@
---
description: ""
weight: 3
---

# Server Classes

Server classes are a way to group distinct server resources.
The "qualifiers" key allows the administrator to specify criteria upon which to group these servers.
There are currently three keys: `cpu`, `systemInformation`, and `labelSelectors`.
Each of these keys accepts a list of entries.
The top level keys are a "logical AND", while the lists under each key are a "logical OR".
Qualifiers that are not specified are not evaluated.

An example:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: default
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation
        version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
      - manufacturer: Advanced Micro Devices, Inc.
        version: AMD Ryzen 7 2700X Eight-Core Processor
    labelSelectors:
      - "my-server-label": "true"
```

Servers would only be added to the above class if they had _either_ of the listed CPUs _and_ the label associated with the server resource.
111 changes: 111 additions & 0 deletions docs/website/content/docs/v0.2/Configuration/servers.md
@@ -0,0 +1,111 @@
---
description: ""
weight: 2
---

# Servers

Servers are the basic resource of bare metal in the Metal Controller Manager.
These are created by PXE booting the servers and allowing them to send a registration request to the management plane.

An example server may look like the following:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
metadata:
  name: 00000000-0000-0000-0000-d05099d333e0
spec:
  accepted: false
  configPatches:
    - op: replace
      path: /cluster/network/cni
      value:
        name: custom
        urls:
          - http://192.168.1.199/assets/cilium.yaml
  cpu:
    manufacturer: Intel(R) Corporation
    version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
  system:
    family: Unknown
    manufacturer: Unknown
    productName: Unknown
    serialNumber: Unknown
    skuNumber: Unknown
    version: Unknown
```

## Installation Disk

An installation disk is required by Talos on bare metal.
This can be specified in a `configPatch`:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
...
spec:
  accepted: false
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda1
```

The install disk patch can also be set on the `ServerClass`:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
...
spec:
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda1
```

## Server Acceptance

In order for a server to be eligible for consideration, it _must_ be `accepted`.
This is an important separation point which all `Server`s must pass.
Before a `Server` is accepted, no write action will be performed against it.
Thus, it is safe for a computer to be added to a network on which Sidero is operating.
Sidero will never write to or wipe any disk on a computer which is not marked as `accepted`.
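
Acceptance is just a field on the `Server` resource; marking a server accepted looks like this (a sketch following the `Server` examples in this document):

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
...
spec:
  accepted: true
```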

This can be tedious for systems in which all attached computers should be considered to be under the control of Sidero.
Thus, you may also choose to automatically accept any machine into Sidero on its discovery.
Please keep in mind that this means that any newly-connected computer **WILL BE WIPED** automatically.
You can enable auto-acceptance by passing the `--auto-accept-servers=true` flag to `sidero-controller-manager`.

Once accepted, a server will be reset (all disks wiped) and then made available to Sidero.

You should never change an accepted `Server` to be _not_ accepted while it is in use.
Because servers which are not accepted will not be modified, if a server which
_was_ accepted is changed to _not_ accepted, the disk will _not_ be wiped upon
its exit.

## IPMI

Sidero can use IPMI information to control `Server` power state, reboot servers, and set the boot order.

IPMI connection information can be set in the `Server` spec after initial registration:

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
...
spec:
  bmc:
    endpoint: 10.0.0.25
    user: admin
    pass: password
```

If IPMI information is set, the server boot order can be set to boot from disk first, then network; Sidero will switch servers to PXE boot whenever that is required.

Without IPMI info, Sidero can still register servers, wipe them, and provision clusters, but it won't be able to reboot servers once they are removed from the cluster.
If IPMI info is not set, servers should be configured to boot first from network, then from disk.
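
As a point of comparison, the BMC details above map directly onto a manual `ipmitool` invocation (a sketch only; the endpoint and credentials are the illustrative values from the example, and `ipmitool` is not part of Sidero):

```bash
# Query power state over IPMI-over-LAN using the example BMC values.
ipmitool -I lanplus -H 10.0.0.25 -U admin -P password chassis power status

# Force the next boot to PXE, much as Sidero does when it needs to reprovision.
ipmitool -I lanplus -H 10.0.0.25 -U admin -P password chassis bootdev pxe
```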
12 changes: 12 additions & 0 deletions docs/website/content/docs/v0.2/Getting Started/architecture.md
@@ -0,0 +1,12 @@
---
description: ""
weight: 3
---

# Architecture

The overarching architecture of Sidero centers around a "management plane".
This plane is expected to serve as a single interface upon which administrators can create, scale, upgrade, and delete Kubernetes clusters.
At a high level view, the management plane + created clusters should look something like:

![Sidero management plane and created clusters](./images/dc-view.png)
14 changes: 14 additions & 0 deletions docs/website/content/docs/v0.2/Getting Started/installation.md
@@ -0,0 +1,14 @@
---
description: ""
weight: 2
---

# Installation

As of Cluster API version 0.3.9, Sidero is included as a default infrastructure provider in clusterctl.

To install Sidero and the other Talos providers, simply issue:

```bash
clusterctl init -b talos -c talos -i sidero
```
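
The commit message notes that these docs also mention specifying `TALOS_VERSION`. A hedged sketch of what that might look like (the version value and the exact consumer of the variable are assumptions, since that part of the diff is not shown in this excerpt):

```bash
# Hypothetical: pin the Talos version used by the Talos providers.
export TALOS_VERSION=v0.8

clusterctl init -b talos -c talos -i sidero
```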
32 changes: 32 additions & 0 deletions docs/website/content/docs/v0.2/Getting Started/introduction.md
@@ -0,0 +1,32 @@
---
description: ""
weight: 1
---

# Introduction

Sidero ("Iron" in Greek) is a project created by the [Talos Systems](https://www.talos-systems.com/) team.
The goal of this project is to provide lightweight, composable tools that can be used to create bare-metal Talos + Kubernetes clusters.
These tools are built around the Cluster API project.
Sidero is also a subproject of Talos Systems' [Arges](https://github.com/talos-systems/arges) project, which will publish known-good versions of these components (along with others) with each release.

## Overview

Sidero is currently made up of three components:

- Metal Metadata Server: Provides a Cluster API (CAPI)-aware metadata server
- Metal Controller Manager: Provides custom resources and controllers for managing the lifecycle of metal machines
- Cluster API Provider Sidero (CAPS): A Cluster API infrastructure provider that makes use of the pieces above to spin up Kubernetes clusters

Sidero also needs these co-requisites in order to be useful:

- [Cluster API](https://github.com/kubernetes-sigs/cluster-api)
- [Cluster API Control Plane Provider Talos](https://github.com/talos-systems/cluster-api-control-plane-provider-talos)
- [Cluster API Bootstrap Provider Talos](https://github.com/talos-systems/cluster-api-bootstrap-provider-talos)

All components mentioned above can be installed using Cluster API's `clusterctl` tool.

Because of the design of Cluster API, there is inherently a "chicken and egg" problem with needing an existing Kubernetes cluster in order to provision the management plane.
Talos Systems and the Cluster API community have created tools to help make this transition easier.
That being said, the management plane cluster does not have to be based on Talos.
If you would, however, like to use Talos as the OS of choice for the Sidero management plane, you can find a number of ways to deploy Talos in the [documentation](https://www.talos.dev).
