This is a Vagrant Environment for playing with Sidero.
For playing with Talos, see the rgl/talos-vagrant repository.
Install docker, vagrant, vagrant-libvirt, and the Ubuntu Base Box.
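For example, the vagrant-libvirt plugin can be installed like this (a sketch; the `libvirt-dev` build dependency assumes a Debian/Ubuntu host):

```bash
# install the vagrant-libvirt plugin build dependencies (Debian/Ubuntu package name).
sudo apt-get install -y libvirt-dev
# install the vagrant-libvirt plugin into vagrant.
vagrant plugin install vagrant-libvirt
```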
If you want to connect to the external physical network, you must configure your host network as described in rgl/ansible-collection-tp-link-easy-smart-switch (e.g. have the `br-rpi` linux bridge) and set `CONFIG_PANDORA_BRIDGE_NAME` in the `Vagrantfile`.
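Before bringing up the environment, you can sanity-check that the bridge exists on the host (`br-rpi` is the bridge name used by the linked repository; adjust it if yours differs):

```bash
# show the host bridge that pandora will be attached to.
ip link show br-rpi
# list the interfaces currently enslaved to that bridge.
ip link show master br-rpi
```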
This environment sometimes hits the GitHub rate limits (at the time of writing, these were 60 unauthenticated requests per hour), so you might want to export the `GITHUB_USERNAME`/`GITHUB_TOKEN` environment variables before running `vagrant` to get a higher rate limit (5,000 requests per hour).
NB This token is also saved in the `.netrc` file inside the VMs.
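For example (the values below are placeholders for your own GitHub username and personal access token):

```bash
# use authenticated GitHub API requests to get the higher rate limit.
export GITHUB_USERNAME='my-github-username'
export GITHUB_TOKEN='my-github-personal-access-token'
```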
Bring up the `pandora` virtual machine:

```bash
vagrant up --provider=libvirt --no-destroy-on-error pandora
```
Enter the `pandora` virtual machine and watch the progress:

```bash
vagrant ssh pandora
sudo -i
watch kubectl get servers,machines,clusters
```
In another shell, bring up the example cluster virtual machines:

```bash
vagrant up --provider=libvirt --no-destroy-on-error
```
Access the example cluster:

```bash
vagrant ssh pandora
sudo -i
kubectl get talosconfig \
  -l cluster.x-k8s.io/cluster-name=example \
  -o jsonpath='{.items[0].status.talosConfig}' \
  >example-talosconfig.yaml
first_control_plane_ip="$(jq -r '.[] | select(.role == "controlplane") | .ip' /vagrant/shared/machines.json | head -1)"
talosctl --talosconfig example-talosconfig.yaml config endpoints $first_control_plane_ip
talosctl --talosconfig example-talosconfig.yaml config nodes $first_control_plane_ip
# NB the following will only work after the example cluster has a working
#    control plane (e.g. after the cp1 node is ready).
talosctl --talosconfig example-talosconfig.yaml kubeconfig example-kubeconfig.yaml
cp example-*.yaml /vagrant/shared
kubectl --kubeconfig example-kubeconfig.yaml get nodes -o wide
```
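If the `kubeconfig` step fails because the control plane is not yet up, you can wait for the cluster to become healthy first (a sketch; `talosctl health` blocks until its health checks pass, and the `--control-plane-nodes` flag is assumed to match your talosctl version):

```bash
# wait for the example cluster control plane to become healthy.
talosctl --talosconfig example-talosconfig.yaml health \
  --control-plane-nodes $first_control_plane_ip
```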
Access Kubernetes with k9s:

```bash
vagrant ssh pandora
sudo -i
k9s # management cluster.
k9s --kubeconfig example-kubeconfig.yaml # example cluster.
```
You can easily capture and see traffic from the host with the `wireshark.sh` script, e.g., to capture the traffic from the `eth1` interface:

```bash
./wireshark.sh pandora eth1
```
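Under the hood, a script like this typically streams a remote tcpdump capture into a local wireshark; a minimal sketch of that assumed mechanism (not the script's exact contents):

```bash
# capture eth1 traffic inside the pandora VM and pipe it into a local wireshark.
vagrant ssh pandora -- sudo tcpdump -i eth1 -s 0 -U -w - | wireshark -k -i -
```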
- only the `amd64` architecture is currently supported by sidero.
- see `kubectl get environment default -o yaml`
- Sidero:

  ```bash
  clusterctl config repositories
  kubectl get crd servers.metal.sidero.dev -o yaml
  kubectl get clusters
  kubectl get servers
  kubectl get serverclasses
  kubectl get machines
  kubectl get taloscontrolplane
  kubectl get environment default -o yaml
  kubectl get ns
  kubectl -n sidero-system get pods
  kubectl -n sidero-system logs -l app=sidero
  kubectl -n capi-webhook-system get deployments
  kubectl -n capi-webhook-system get pods
  kubectl -n capi-webhook-system logs -l control-plane=controller-manager -c manager
  kubectl -n sidero-system logs -l control-plane=caps-controller-manager -c manager
  kubectl -n cabpt-system logs deployment/cabpt-controller-manager -c manager
  ```
- Talos:
  - Troubleshooting Control Plane

  ```bash
  talosctl -n cp1 dashboard
  talosctl -n cp1 logs controller-runtime
  talosctl -n cp1 logs kubelet
  talosctl -n cp1 disks
  talosctl -n cp1 get resourcedefinitions
  talosctl -n cp1 get machineconfigs -o yaml
  talosctl -n cp1 get staticpods -o yaml
  talosctl -n cp1 get staticpodstatus
  talosctl -n cp1 get manifests
  talosctl -n cp1 get services
  talosctl -n cp1 get addresses
  talosctl -n cp1 list /system
  talosctl -n cp1 list /var
  talosctl -n cp1 read /proc/cmdline
  ```
- Kubernetes:

  ```bash
  kubectl get events --all-namespaces --watch
  kubectl --namespace kube-system get events --watch
  ```