chore: fix markdown linting
Port changes from Talos.

Refs #421.

Signed-off-by: Alexey Palazhchenko <alexey.palazhchenko@gmail.com>
AlekSi authored and talos-bot committed May 19, 2021
1 parent a792890 commit 3caa6f5
Showing 24 changed files with 148 additions and 96 deletions.
10 changes: 5 additions & 5 deletions .dockerignore
@@ -1,14 +1,14 @@
*
!api
!cmd
**
!app
!config
!hack
!app
!internal
!pkg
!sfyra
!templates
!website
!.golangci.yml
!.markdownlint.json
!.textlintrc.json
!go.mod
!go.sum
!*.go
1 change: 1 addition & 0 deletions .golangci.yml
@@ -119,6 +119,7 @@ linters:
enable-all: true
disable:
# FIXME those linters should be enabled ASAP
# https://github.com/talos-systems/sidero/issues/421
- cyclop
- errcheck
- forcetypeassert
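The linters listed under `disable` are tracked for re-enabling in issue #421; everything else runs via `enable-all`. A minimal local check sketch, assuming `golangci-lint` is installed at a version compatible with this config:

```sh
# Run the linter set defined by .golangci.yml from the repository root.
golangci-lint run ./...
```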
9 changes: 4 additions & 5 deletions .markdownlint.json
@@ -1,6 +1,5 @@
{
"default": true,
"MD013": false,
"MD033": false
}

"default": true,
"MD013": false,
"MD033": false
}
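This config disables the line-length (MD013) and inline-HTML (MD033) rules and keeps everything else at defaults. A rough local invocation sketch, assuming `markdownlint-cli` is installed globally; it picks the config up automatically from the repository root, and the full ignore list used in CI lives in the Dockerfile change below:

```sh
# Lint all Markdown files, skipping vendored and generated content.
markdownlint --ignore '**/node_modules/**' --ignore '**/hack/chglog/**' .
```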
8 changes: 8 additions & 0 deletions .textlintrc.json
@@ -0,0 +1,8 @@
{
"rules": {
"one-sentence-per-line": true
},
"filters": {
"comments": true
}
}
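The new textlint config enables the one-sentence-per-line rule and the comments filter, which lets a passage opt out via `<!-- textlint-disable -->` / `<!-- textlint-enable -->` markers. A minimal local run sketch, assuming the same globally installed packages the Dockerfile pins; textlint reads `.textlintrc.json` from the project root:

```sh
# Install the pinned textlint toolchain and lint a single file.
npm i -g textlint@11.7.6 textlint-filter-rule-comments@1.2.2 textlint-rule-one-sentence-per-line@1.0.2
textlint README.md
```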
29 changes: 24 additions & 5 deletions Dockerfile
@@ -211,16 +211,35 @@ RUN --mount=type=cache,target=/.cache gofumports -w -local ${MODULE} .
#
FROM scratch AS fmt
COPY --from=fmt-build /src /

#
# The markdownlint target performs linting on Markdown files.
#
FROM node:16.1.0-alpine AS lint-markdown
WORKDIR /opt
RUN npm install -g markdownlint-cli@0.23.2
RUN npm install sentences-per-line
RUN apk add --no-cache findutils
RUN npm i -g markdownlint-cli@0.23.2
RUN npm i -g textlint@11.7.6
RUN npm i -g textlint-filter-rule-comments@1.2.2
RUN npm i -g textlint-rule-one-sentence-per-line@1.0.2
WORKDIR /src
COPY --from=base /src .
RUN markdownlint --ignore '**/hack/chglog/**' --rules /opt/node_modules/sentences-per-line/index.js .
COPY . .
RUN markdownlint \
--ignore '**/LICENCE.md' \
--ignore '**/CHANGELOG.md' \
--ignore '**/CODE_OF_CONDUCT.md' \
--ignore '**/node_modules/**' \
--ignore '**/hack/chglog/**' \
.
RUN find . \
-name '*.md' \
-not -path './LICENCE.md' \
-not -path './CHANGELOG.md' \
-not -path './CODE_OF_CONDUCT.md' \
-not -path '*/node_modules/*' \
-not -path './hack/chglog/**' \
-print0 \
| xargs -0 textlint
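A sketch of exercising this stage locally, assuming BuildKit is available; the repository's Makefile most likely wraps this with its usual build arguments, so treat the direct call as illustrative:

```sh
# Build only the lint-markdown stage; the build fails if markdownlint or textlint reports errors.
DOCKER_BUILDKIT=1 docker build --target lint-markdown .
```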

#
# The sfyra-build target builds the Sfyra source.
#
69 changes: 39 additions & 30 deletions sfyra/README.md
@@ -6,30 +6,38 @@ Integration test for Sidero/Arges.

Build the test binary and Sidero, push images:

make USERNAME=<username> TAG=v0.1.0 PUSH=true
```sh
make USERNAME=<username> TAG=v0.1.0 PUSH=true
```

Run the test (this will trigger `make release`):

make run-sfyra USERNAME=<username> TAG=v0.1.0
```sh
make run-sfyra USERNAME=<username> TAG=v0.1.0
```

The test uses CIDRs `172.24.0.0/24` and `172.25.0.0/24` by default.

Sequence of steps:

* build initial bootstrap Talos cluster of one node
* build management set of VMs (PXE-boot enabled)
* install Cluster API, Sidero and Talos providers
* run the unit-tests
- build initial bootstrap Talos cluster of one node
- build management set of VMs (PXE-boot enabled)
- install Cluster API, Sidero and Talos providers
- run the unit-tests

It's also possible to run Sfyra manually to avoid tearing down and recreating whole environment
each time. After `make USERNAME=<username> TAG=v0.1.0 PUSH=true` run:
It's also possible to run Sfyra manually to avoid tearing down and recreating the whole environment each time.
After `make USERNAME=<username> TAG=v0.1.0 PUSH=true` run:

make talos-artifacts # need to run it only once per Talos release change
make clusterctl-release USERNAME=<username> TAG=v0.1.0 PUSH=true
```sh
make talos-artifacts # need to run it only once per Talos release change
make clusterctl-release USERNAME=<username> TAG=v0.1.0 PUSH=true
```

Then launch Sfyra manually with desired flags:

sudo -E _out/sfyra test integration --registry-mirror docker.io=http://172.24.0.1:5000,k8s.gcr.io=http://172.24.0.1:5001,quay.io=http://172.24.0.1:5002,gcr.io=http://172.24.0.1:5003,ghcr.io=http://172.24.0.1:5004,127.0.0.1:5005=http://172.24.0.1:5005 --skip-teardown --clusterctl-config ~/.cluster-api/clusterctl.sfyra.yaml
```sh
sudo -E _out/sfyra test integration --registry-mirror docker.io=http://172.24.0.1:5000,k8s.gcr.io=http://172.24.0.1:5001,quay.io=http://172.24.0.1:5002,gcr.io=http://172.24.0.1:5003,ghcr.io=http://172.24.0.1:5004,127.0.0.1:5005=http://172.24.0.1:5005 --skip-teardown --clusterctl-config ~/.cluster-api/clusterctl.sfyra.yaml
```

Alternatively, you may use the `run-sfyra` target with the `SFYRA_EXTRA_FLAGS` and `REGISTRY_MIRROR_FLAGS` environment variables:

@@ -41,34 +49,29 @@ export SFYRA_EXTRA_FLAGS="--skip-teardown"
make run-sfyra
```

With `--skip-teardown` flag test leaves the bootstrap cluster running so that next iteration of the test
can be run without waiting for the boostrap actions to be finished. It's possible to run Sfyra tests once
again without bringing down the test environment, but make sure that all the clusters are deleted with
`kubectl delete clusters --all`.
With the `--skip-teardown` flag, the test leaves the bootstrap cluster running so that the next iteration of the test can be run without waiting for the bootstrap actions to finish.
It's possible to run Sfyra tests once again without bringing down the test environment, but make sure that all the clusters are deleted with `kubectl delete clusters --all`.

Flag `--registry-mirror` is optional, but it speeds up provisioning significantly. See Talos guides on setting up registry
pull-through caches, or just run `hack/start-registry-proxies.sh`.
Flag `--registry-mirror` is optional, but it speeds up provisioning significantly.
See Talos guides on setting up registry pull-through caches, or just run `hack/start-registry-proxies.sh`.

Kubernetes config can be pulled with `talosctl -n 172.24.0.2 kubeconfig --force`.

When `sfyra` is not running, loadbalancer for `management-cluster` control plane is also down, it can be restarted for manual
testing with `_out/sfyra loadbalancer create --kubeconfig=$HOME/.kube/config --load-balancer-port 10000`.
When `sfyra` is not running, the load balancer for the `management-cluster` control plane is also down; it can be restarted for manual testing with `_out/sfyra loadbalancer create --kubeconfig=$HOME/.kube/config --load-balancer-port 10000`.

One can also run parts of the test flow:

* setup Talos bootstrap cluster (single node): `sudo -E _out/sfyra bootstrap cluster`
* install and patch CAPI and providers: `_out/sfyra bootstrap capi`
* launch a set of VMs ready for PXE booting: `sudo -E _out/sfyra bootstrap servers`
- set up the Talos bootstrap cluster (single node): `sudo -E _out/sfyra bootstrap cluster`
- install and patch CAPI and providers: `_out/sfyra bootstrap capi`
- launch a set of VMs ready for PXE booting: `sudo -E _out/sfyra bootstrap servers`

See each command's help on how to customize the operations.

## Testing Always PXE Boot

By default, QEMU VMs provisioned to emulate metal servers are configured to boot from the disk first, and Sidero uses API
call to force PXE boot to run the agent.
By default, QEMU VMs provisioned to emulate metal servers are configured to boot from the disk first, and Sidero uses an API call to force PXE boot to run the agent.

Sometimes it's important to test the flow when the servers are configured to boot from the network first always (e.g. if
bare metal setup doesn't have IPMI), in that case it's important to force VMs to boot from the network always.
Sometimes it's important to test the flow when the servers are configured to always boot from the network first (e.g. if the bare-metal setup doesn't have IPMI); in that case the VMs must be forced to always boot from the network.
This can be achieved by adding the flag `--default-boot-order=nc` to the `sfyra` invocation.
In this case the Sidero iPXE server will force the VM to boot from disk via iPXE if the server is already provisioned.
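A hypothetical invocation combining this flag with the integration-test command shown earlier (only `--default-boot-order=nc` is new; the other flags mirror the previous examples):

```sh
sudo -E _out/sfyra test integration --skip-teardown --default-boot-order=nc
```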

@@ -78,17 +81,23 @@ In this case Sidero iPXE server will force VM to boot from disk via iPXE if the

Build the artifacts in Talos:

make initramfs kernel talosctl-linux
```sh
make initramfs kernel talosctl-linux
```

From Sidero directory run:

sudo -E _out/sfyra test integration --skip-teardown --bootstrap-initramfs=../talos/_out/initramfs-amd64.xz --bootstrap-vmlinuz=../talos/_out/vmlinuz-amd64 --talosctl-path=../talos/_out/talosctl-linux-amd64
```sh
sudo -E _out/sfyra test integration --skip-teardown --bootstrap-initramfs=../talos/_out/initramfs-amd64.xz --bootstrap-vmlinuz=../talos/_out/vmlinuz-amd64 --talosctl-path=../talos/_out/talosctl-linux-amd64
```

This command doesn't tear down the cluster after the test run, so it can be re-run any time for another quick round of testing.

## Cleaning up

To destroy the Sfyra environment, use `talosctl`:

sudo -E talosctl cluster destroy --provisioner=qemu --name=sfyra
sudo -E talosctl cluster destroy --provisioner=qemu --name=sfyra-management
```sh
sudo -E talosctl cluster destroy --provisioner=qemu --name=sfyra
sudo -E talosctl cluster destroy --provisioner=qemu --name=sfyra-management
```
5 changes: 2 additions & 3 deletions website/content/docs/v0.1/Configuration/servers.md
@@ -106,6 +106,5 @@ spec:
If IPMI information is set, the server boot order might be set to boot from disk, then network; Sidero will switch servers
to PXE boot once that is required.

Without IPMI info, Sidero can still register servers, wipe them and provision clusters, but Sidero won't be able to
reboot servers once they are removed from the cluster. If IPMI info is not set, servers should be configured to boot first from network,
then from disk.
Without IPMI info, Sidero can still register servers, wipe them and provision clusters, but Sidero won't be able to reboot servers once they are removed from the cluster.
If IPMI info is not set, servers should be configured to boot first from network, then from disk.
11 changes: 8 additions & 3 deletions website/content/docs/v0.1/Guides/bootstrapping.md
@@ -244,13 +244,15 @@ spec:
version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
EOF
```
In order to fetch hardware information, you can use
```bash
kubectl get server -o yaml
```
Note that for bare-metal setup, you would need to specify an installation disk. See the [Installation Disk](/docs/v0.1/configuration/servers/#installation-disk)
Note that for a bare-metal setup, you would need to specify an installation disk.
See the [Installation Disk](/docs/v0.1/configuration/servers/#installation-disk).
Once created, you should see the servers that make up your server class appear as "available":
@@ -283,6 +285,7 @@ Note that there are several variables that should be set in order for the templa
- `CONTROL_PLANE_PORT`: The port used for the Kubernetes API server (port 6443)
For instance:
```bash
export CONTROL_PLANE_SERVERCLASS=master
export WORKER_SERVERCLASS=worker
@@ -292,7 +295,8 @@ export CONTROL_PLANE_ENDPOINT=1.2.3.4
clusterctl config cluster management-plane -i sidero > management-plane.yaml
```
In addition, you can specify the replicas for control-plane & worker nodes in management-plane.yaml manifest for TalosControlPlane and MachineDeployment objects. Also, they can be scaled if needed:
In addition, you can specify the replicas for control-plane & worker nodes in the management-plane.yaml manifest for the TalosControlPlane and MachineDeployment objects.
Also, they can be scaled if needed:
```bash
kubectl get taloscontrolplane
@@ -306,7 +310,8 @@ Now that we have the manifest, we can simply apply it:
kubectl apply -f management-plane.yaml
```
**NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
**NOTE: The templated manifest above is meant to act as a starting point.**
**If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
Once the management plane is set up, you can fetch the talosconfig by using the cluster label.
Be sure to update the cluster name and issue the following command:
3 changes: 2 additions & 1 deletion website/content/docs/v0.1/Guides/first-cluster.md
@@ -134,7 +134,8 @@ Now that we have the manifest, we can simply apply it:
kubectl apply -f workload-cluster.yaml
```
**NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
**NOTE: The templated manifest above is meant to act as a starting point.**
**If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
Once the workload cluster is set up, you can fetch the talosconfig with a command like:
2 changes: 1 addition & 1 deletion website/content/docs/v0.1/Guides/flow.md
@@ -3,7 +3,7 @@ description: "Diagrams for various flows in Sidero."
weight: 4
---

## Provisioning Flow
# Provisioning Flow

```mermaid
graph TD;
4 changes: 2 additions & 2 deletions website/content/docs/v0.1/Guides/patching.md
@@ -49,8 +49,8 @@ version: v1alpha1
Replace `$PUBLIC_IP` with the Sidero IP address and `$SERVER_UUID` with the name of the `Server` to test
against.

If metadata endpoint returns an error on applying JSON patches, make sure config subtree being patched
exists in the config. If it doesn't exist, create it with the `op: add` above the `op: replace` patch.
If the metadata endpoint returns an error on applying JSON patches, make sure the config subtree being patched exists in the config.
If it doesn't exist, create it with an `op: add` patch placed above the `op: replace` patch.

## Combining Patches from Multiple Sources

5 changes: 2 additions & 3 deletions website/content/docs/v0.2/Configuration/servers.md
@@ -106,6 +106,5 @@ spec:
If IPMI information is set, the server boot order might be set to boot from disk, then network; Sidero will switch servers
to PXE boot once that is required.

Without IPMI info, Sidero can still register servers, wipe them and provision clusters, but Sidero won't be able to
reboot servers once they are removed from the cluster. If IPMI info is not set, servers should be configured to boot first from network,
then from disk.
Without IPMI info, Sidero can still register servers, wipe them and provision clusters, but Sidero won't be able to reboot servers once they are removed from the cluster.
If IPMI info is not set, servers should be configured to boot first from network, then from disk.
11 changes: 8 additions & 3 deletions website/content/docs/v0.2/Guides/bootstrapping.md
@@ -244,13 +244,15 @@ spec:
version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
EOF
```
In order to fetch hardware information, you can use
```bash
kubectl get server -o yaml
```
Note that for bare-metal setup, you would need to specify an installation disk. See the [Installation Disk](/docs/v0.1/configuration/servers/#installation-disk)
Note that for a bare-metal setup, you would need to specify an installation disk.
See the [Installation Disk](/docs/v0.1/configuration/servers/#installation-disk).
Once created, you should see the servers that make up your server class appear as "available":
@@ -286,6 +288,7 @@ Note that there are several variables that should be set in order for the templa
This value is used in determining the fields present in the machine configuration that gets generated for Talos nodes.
For instance:
```bash
export CONTROL_PLANE_SERVERCLASS=master
export WORKER_SERVERCLASS=worker
@@ -295,7 +298,8 @@ export CONTROL_PLANE_ENDPOINT=1.2.3.4
clusterctl config cluster management-plane -i sidero > management-plane.yaml
```
In addition, you can specify the replicas for control-plane & worker nodes in management-plane.yaml manifest for TalosControlPlane and MachineDeployment objects. Also, they can be scaled if needed:
In addition, you can specify the replicas for control-plane & worker nodes in the management-plane.yaml manifest for the TalosControlPlane and MachineDeployment objects.
Also, they can be scaled if needed:
```bash
kubectl get taloscontrolplane
@@ -309,7 +313,8 @@ Now that we have the manifest, we can simply apply it:
kubectl apply -f management-plane.yaml
```
**NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
**NOTE: The templated manifest above is meant to act as a starting point.**
**If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
Once the management plane is set up, you can fetch the talosconfig by using the cluster label.
Be sure to update the cluster name and issue the following command:
3 changes: 2 additions & 1 deletion website/content/docs/v0.2/Guides/first-cluster.md
@@ -138,7 +138,8 @@ Now that we have the manifest, we can simply apply it:
kubectl apply -f workload-cluster.yaml
```
**NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
**NOTE: The templated manifest above is meant to act as a starting point.**
**If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.**
Once the workload cluster is set up, you can fetch the talosconfig with a command like:
2 changes: 1 addition & 1 deletion website/content/docs/v0.2/Guides/flow.md
@@ -3,7 +3,7 @@ description: "Diagrams for various flows in Sidero."
weight: 4
---

## Provisioning Flow
# Provisioning Flow

```mermaid
graph TD;
4 changes: 2 additions & 2 deletions website/content/docs/v0.2/Guides/patching.md
@@ -49,8 +49,8 @@ version: v1alpha1
Replace `$PUBLIC_IP` with the Sidero IP address and `$SERVER_UUID` with the name of the `Server` to test
against.

If metadata endpoint returns an error on applying JSON patches, make sure config subtree being patched
exists in the config. If it doesn't exist, create it with the `op: add` above the `op: replace` patch.
If the metadata endpoint returns an error on applying JSON patches, make sure the config subtree being patched exists in the config.
If it doesn't exist, create it with an `op: add` patch placed above the `op: replace` patch.

## Combining Patches from Multiple Sources

5 changes: 2 additions & 3 deletions website/content/docs/v0.2/Guides/upgrades.md
@@ -5,9 +5,9 @@ weight: 3

# Upgrading

Upgrading a running workload cluster or management plane is the same process as described in the Talos documentation.

To upgrade the Talos OS, see [here](https://www.talos.dev/docs/v0.9/guides/upgrading-talos).

In order to upgrade Kubernetes itself, see [here](https://www.talos.dev/docs/v0.9/guides/upgrading-kubernetes/).

@@ -65,4 +65,3 @@ Update the `spec.controlPlaneConfig.[controlplane,init].talosVersion` fields to

- At this point, any new controlplane or worker machines should receive the newer machine config format and join the cluster successfully.
You can also proceed to upgrade existing nodes.
