Fix lint errors #767

Merged · 3 commits · Aug 23, 2021
11 changes: 11 additions & 0 deletions .markdownlint.yaml
@@ -0,0 +1,11 @@
# disable line length errors
MD013: false

# allow duplicate headings
MD024: false

# allow inline HTML
MD033: false

# disable emphasis as heading warning
MD036: false
2 changes: 2 additions & 0 deletions .markdownlintignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
_includes/labs-description.md
_includes/found_a_bug.md
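
Neither file needs to be wired up manually. As a minimal local check (assuming the site is linted with `markdownlint-cli`; the exact CI invocation is not part of this diff), both files are picked up automatically when running from the repository root:

```sh
# markdownlint-cli auto-discovers .markdownlint.yaml and honors .markdownlintignore
npx markdownlint-cli '**/*.md'
```
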
53 changes: 26 additions & 27 deletions _posts/2019-10-09-KubeVirt_k8s_crio_from_scratch.md
@@ -44,7 +44,7 @@ In this guide the system will be named k8s-test.local and the IP address is 192.

Ensure the VM system is updated to the latest versions of the software and that the epel repository is installed:

```
```sh
k8s-test.local# yum install epel-release -y

k8s-test.local# yum update -y
@@ -54,7 +54,7 @@ k8s-test.local# yum install vim jq -y

The following kernel parameters have to be configured:

```
```sh
k8s-test.local# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
@@ -64,7 +64,7 @@ EOF

The following kernel modules also have to be loaded:

```
```sh
k8s-test.local# modprobe br_netfilter
k8s-test.local# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

@@ -74,13 +74,13 @@ k8s-test.local# echo overlay > /etc/modules-load.d/overlay.conf

The new sysctl parameters have to be loaded in the system with the following command:

```
```sh
k8s-test.local# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
```
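
A quick verification step, not part of the original post, to confirm the new values are active:

```sh
# both values should report 1 once br_netfilter is loaded and sysctl -p has run
k8s-test.local# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```
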

The next step is to disable SELinux:

```
```sh
k8s-test.local# setenforce 0

k8s-test.local# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
@@ -97,7 +97,7 @@ To install Kubernetes and CRI-O several ways can be used, in this guide there is
> note ""
> we are waiting for the [PR](https://github.com/cri-o/cri-o-ansible/pull/25) to be merged in the official cri-o-ansible repository; in the meantime, a fork in an alternative repository will be used. Also, note that the following commands are executed from a different place, in this case from a computer called `laptop`:

```
```sh
laptop$ sudo yum install ansible -y

laptop# git clone https://github.com/ptrnull/cri-o-ansible
@@ -117,7 +117,7 @@ If the ansible way was chosen, you may want to skip this section. Otherwise, let

The required packages may be installed in the system by running the following command:

```
```sh
k8s-test.local# yum install btrfs-progs-devel container-selinux device-mapper-devel gcc git glib2-devel glibc-devel glibc-static gpgme-devel json-glib-devel libassuan-devel libgpg-error-devel libseccomp-devel make pkgconfig skopeo-containers tar wget -y
```

@@ -126,7 +126,7 @@ Install golang and the md2man packages:
> info ""
> depending on the operating system running in your VM, you may need to change the name of the md2man golang package.

```
```sh
k8s-test.local# yum install golang-github-cpuguy83-go-md2man golang -y
```
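
A quick sanity check, an addition not in the original guide, that the Go toolchain and the md2man converter landed on the PATH:

```sh
k8s-test.local# go version
k8s-test.local# command -v go-md2man
```
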

@@ -137,31 +137,31 @@ The following directories have to be created:
- /var/lib/etcd
- /etc/cni/net.d

```
```sh
k8s-test.local# for d in /usr/local/go /etc/systemd/system/kubelet.service.d/ /var/lib/etcd /etc/cni/net.d /etc/containers; do mkdir -p $d; done
```

Clone the runc repository:

```
```sh
k8s-test.local# git clone https://github.com/opencontainers/runc /root/src/github.com/opencontainers/runc
```

Clone the CRI-O repository:

```
```sh
k8s-test.local# git clone https://github.com/cri-o/cri-o /root/src/github.com/cri-o/cri-o
```

Clone the CNI repository:

```
```sh
k8s-test.local# git clone https://github.com/containernetworking/plugins /root/src/github.com/containernetworking/plugins
```

To build each part, a series of commands has to be executed, starting with runc:

```
```sh
k8s-test.local# cd /root/src/github.com/opencontainers/runc

k8s-test.local# export GOPATH=/root
@@ -173,13 +173,13 @@ k8s-test.local# make install

runc also has to be linked into the correct path:

```
```sh
k8s-test.local# ln -sf /usr/local/sbin/runc /usr/bin/runc
```
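
A quick check, not in the original post, that the freshly built runc resolves through the new symlink:

```sh
k8s-test.local# runc --version
```
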

Now build CRI-O (pay special attention to switching the branch):

```
```sh
k8s-test.local# export GOPATH=/root

k8s-test.local# export GOBIN=/usr/local/go/bin
@@ -201,7 +201,7 @@ k8s-test.local# make install.config

CRI-O also needs the conmon software as a dependency:

```
```sh
k8s-test.local# git clone https://github.com/containers/conmon /root/src/github.com/conmon

k8s-test.local# cd /root/src/github.com/conmon
@@ -213,7 +213,7 @@ k8s-test.local# make install

Now, the ContainerNetworking plugins have to be built and installed:

```
```sh
k8s-test.local# cd /root/src/github.com/containernetworking/plugins

k8s-test.local# ./build_linux.sh
@@ -225,23 +225,23 @@ k8s-test.local# cp bin/* /opt/cni/bin/

The cgroup manager has to be changed in the CRI-O configuration from `systemd` to `cgroupfs`: edit the file `/etc/crio/crio.conf` and replace the original value of the `cgroup_manager` variable, `systemd`, with `cgroupfs` (it may already be set to that value, in which case this step can be skipped):

```
```sh
k8s-test.local# vim /etc/crio/crio.conf
# cgroup_manager = "systemd"
cgroup_manager = "cgroupfs"
```

In the same file, the storage driver is not configured by default: the `storage_driver` variable has to be uncommented and its value changed from `overlay` to `overlay2`:

```
```sh
k8s-test.local# vim /etc/crio/crio.conf
#storage_driver = "overlay"
storage_driver = "overlay2"
```

Also related to storage, `storage_option` has to be configured with the following value:

```
```sh
k8s-test.local# vim /etc/crio/crio.conf
storage_option = [ "overlay2.override_kernel_check=1" ]
```
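
All three edits can also be applied non-interactively. This is a sketch that assumes the stock single-line entries in `/etc/crio/crio.conf`; adjust the patterns if your file differs:

```sh
# non-interactive equivalent of the three vim edits above (sketch)
k8s-test.local# sed -i \
  -e 's/^#\?cgroup_manager = .*/cgroup_manager = "cgroupfs"/' \
  -e 's/^#\?storage_driver = .*/storage_driver = "overlay2"/' \
  -e 's/^#\?storage_option = .*/storage_option = [ "overlay2.override_kernel_check=1" ]/' \
  /etc/crio/crio.conf
```
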
@@ -251,12 +251,11 @@ storage_option = [ "overlay2.override_kernel_check=1" ]
CRI-O is the lightweight container runtime for Kubernetes. As it is pointed out on the [CRI-O website](https://cri-o.io):

> CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as the container runtimes but any OCI-conformant runtime can be plugged in principle.

> CRI-O supports OCI container images and can pull from any container registry. It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes.

The first step is to change the `network_dir` parameter in the CRI-O configuration file: in `/etc/crio/crio.conf`, `network_dir` has to be changed to point to `/etc/crio/net.d`:

```
```sh
k8s-test.local$ vim /etc/crio/crio.conf
[crio.network]
# Path to the directory where CNI configuration files are located.
@@ -265,27 +264,27 @@ network_dir = "/etc/crio/net.d/"

That directory also has to be created:

```
```sh
k8s-test.local$ mkdir /etc/crio/net.d
```

The reason for that change is that CRI-O and `kubeadm reset` don't play well together: `kubeadm reset` empties `/etc/cni/net.d/`. Therefore, it is good to point `crio.network.network_dir` in `crio.conf` to somewhere kubeadm won't touch. For more information, check the link [Running CRI-O with kubeadm] in the References section.

Now Kubernetes has to be configured to talk to CRI-O. To proceed, a new file has to be created at `/etc/default/kubelet` with the following content:

```
```sh
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=cgroupfs --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m
```
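
Whether the kubelet actually reads `/etc/default/kubelet` depends on the kubeadm drop-in shipped for the distribution (RPM-based systems often use `/etc/sysconfig/kubelet` instead). A quick way to confirm which file is sourced:

```sh
k8s-test.local# systemctl cat kubelet | grep EnvironmentFile
```
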

Now systemd has to be reloaded:

```
```sh
k8s-test.local# systemctl daemon-reload
```

CRI-O will use the flannel network, as recommended for Multus, so the following file has to be downloaded and configured:

```
```sh
k8s-test.local# cd /etc/crio/net.d/

k8s-test.local# wget https://raw.githubusercontent.com/cri-o/cri-o/master/contrib/cni/10-crio-bridge.conf
@@ -296,7 +295,7 @@ k8s-test.local# sed -i 's/10.88.0.0/10.244.0.0/g' 10-crio-bridge.conf

As the previous code block shows, the network used is `10.244.0.0`. Now the crio service can be enabled and started:

```
```sh
k8s-test.local# systemctl enable crio
k8s-test.local# systemctl start crio
k8s-test.local# systemctl status crio
@@ -68,7 +68,7 @@ k8s-test.local# kubeadm init --pod-network-cidr=10.244.0.0/16

When the installation finishes, the command will print a message similar to this one:

```
```sh
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
@@ -30,7 +30,6 @@ This is the last blog post of the series of 3; in this case KubeVirt is going to
What is KubeVirt? If you navigate to the [KubeVirt webpage](https://kubevirt.io) you can read:

> KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. More specifically, the technology provides a unified development platform where developers can build, modify, and deploy applications residing in both Application Containers as well as Virtual Machines in a common, shared environment.

> Benefits are broad and significant. Teams with a reliance on existing virtual machine-based workloads are empowered to rapidly containerize applications. With virtualized workloads placed directly in development workflows, teams can decompose them over time while still leveraging remaining virtualized components as is comfortably desired.

In this example there is a Kubernetes cluster composed of one master; for it to be schedulable to host the KubeVirt pods, a little modification has to be done:
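
The exact command is elided in this diff; the usual way to make a single master schedulable, an assumption here based on the upstream kubeadm documentation, is to remove the master `NoSchedule` taint:

```sh
k8s-test.local# kubectl taint nodes --all node-role.kubernetes.io/master-
```
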
@@ -210,7 +209,7 @@ k8s-test.local# kubectl virt stop testvm
VM testvm was scheduled to stop
```

# Troubleshooting
## Troubleshooting

Each step of this guide has a place to look for possible issues; in general, the [troubleshooting guide of Kubernetes](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/) can be checked. The following list tries to ease troubleshooting in case of problems during each step of this guide:

1 change: 0 additions & 1 deletion _posts/2019-10-30-KubeVirt_storage_rook_ceph.md
@@ -45,7 +45,6 @@ In this example the following system names and IP addresses are used:
To be able to import Virtual Machines, the KubeVirt CDI has to be configured too.

> Containerized-Data-Importer (CDI) is a persistent storage management add-on for Kubernetes. Its primary goal is to provide a declarative way to build Virtual Machine Disks on PVCs for KubeVirt VMs.

> CDI works with standard core Kubernetes resources and is storage device-agnostic, while its primary focus is to build disk images for Kubevirt, it's also useful outside of a KubeVirt context to use for initializing your Kubernetes Volumes with data.

In case your cluster doesn't have CDI, the following commands will cover the CDI operator and CR setup:
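
The commands themselves are elided in this diff. This is a sketch of the usual CDI operator and CR installation; the version is a placeholder, check the CDI releases page for a current one:

```sh
export VERSION=v1.10.1  # hypothetical version, pick a current CDI release
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
```
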
3 changes: 1 addition & 2 deletions _posts/2019-12-17-KubeVirt_UI_options.md
@@ -71,7 +71,6 @@ With further work and investigation, it could be an option to develop a specific
As defined in the [official webpage](https://www.okd.io/):

> OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift.

> OKD embeds Kubernetes and extends it with security and other integrated concepts. OKD is also referred to as Origin in github and in the documentation. An OKD release corresponds to the Kubernetes distribution - for example, OKD 1.10 includes Kubernetes 1.10.

A few weeks ago the Kubernetes distribution [OKD4](https://github.com/openshift/okd) was released as a preview. OKD is the official upstream version of Red Hat's OpenShift. Since OpenShift has included KubeVirt (Red Hat calls it [CNV](https://docs.openshift.com/container-platform/4.2/cnv/cnv_install/cnv-about-cnv.html)) as a tech-preview feature for a couple of releases, there is already a lot of integration going on between the OKD console and KubeVirt.
@@ -179,7 +178,7 @@ Octant, although it does not have any specific integration with KubeVirt, looks
> note "Note"
> We encourage our readers to let us know about user interfaces that can be used to manage our KubeVirt virtual machines. Then, we can include them in this list.

## References:
## References

- [Octant](https://octant.dev)
- [OKD](https://www.okd.io/)
12 changes: 6 additions & 6 deletions _posts/2020-01-24-OKD-web-console-install.md
@@ -127,7 +127,7 @@ Done in 215.91s.
The result of the process is a binary file called **bridge** inside the bin folder. Prior to running the _"bridge"_, it has to be verified that the port where the OKD web console expects connections is not blocked.

```sh
$ iptables -A INPUT -p tcp --dport 9000 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 9000 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
```

Then, the artifact can be executed:
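
The invocation itself is elided in this diff; presumably (an assumption based on the build output described above) the binary is started straight from the repository root after sourcing `contrib/environment.sh`:

```sh
source ./contrib/environment.sh
./bin/bridge
```
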
@@ -151,8 +151,8 @@ There are two options to fix the issue: one is granting cluster-admin permission
The other option is to create a new service account called **console**, grant it cluster-admin permissions and configure the web console to run with this new service account:

```sh
$ kubectl create serviceaccount console -n kube-system
$ kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
kubectl create serviceaccount console -n kube-system
kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
```

Once created, modify the `environment.sh` file and change the line that starts with `secretname` as shown below:
@@ -165,7 +165,7 @@ secretname=$(kubectl get serviceaccount **console** --namespace=kube-system -o j
Now, variables configured in the `environment.sh` file have to be exported again and the connection to the console must be reloaded.

```sh
$ source ./contrib/environment.sh
source ./contrib/environment.sh
```

## Deploy KubeVirt using the Hyperconverged Cluster Operator (HCO)
@@ -304,8 +304,8 @@ A YAML file containing a deployment and service objects that mimic the binary in
Then, create a specific service account (**console**) for running the OpenShift web console, in case it was not created [previously](#compiling-okd-web-console), and grant it cluster-admin permissions:

```sh
$ kubectl create serviceaccount console -n kube-system
$ kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
kubectl create serviceaccount console -n kube-system
kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
```

Next, extract the **token secret name** associated with the console service account:
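
The command itself is elided in this diff; judging from the `secretname` line shown earlier, it is presumably:

```sh
kubectl get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
```
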
@@ -38,6 +38,7 @@ To proceed with the Installation steps the different elements involved are liste
> No need to execute any command until the [Installation](#installation) section.

1. An empty KubeVirt Virtual Machine

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
Expand All @@ -63,6 +64,7 @@ To proceed with the Installation steps the different elements involved are liste
volumes:
...
```

2. A PVC with the Microsoft Windows ISO file attached as a CD-ROM to the VM; it will be created automatically by the `virtctl` command when uploading the file

The first thing here is to download the Microsoft Windows ISO file; for that, the [Microsoft Evaluation Center](https://www.Microsoft.com/en-us/evalcenter/evaluate-windows-server-2012-r2) offers
@@ -132,7 +134,7 @@ To proceed with the Installation steps the different elements involved are liste
The container image has to be pulled to have it available in the local registry.

```sh
$ docker pull kubevirt/virtio-container-disk
docker pull kubevirt/virtio-container-disk
```

It also has to be referenced in the VM YAML; in this example the name for the `containerDisk` is `virtiocontainerdisk`.
@@ -267,7 +269,7 @@ To proceed with the installation the commands commented above are going to be ex
5. Once the status of the VMI is `RUNNING`, it's time to connect using VNC:

```sh
$ virtctl vnc win2k12-iso
virtctl vnc win2k12-iso
```

![windows2k12_install.png](/assets/2020-02-14-KubeVirt-installing_Microsoft_Windows_from_an_iso/windows2k12_install.png "KubeVirt Microsoft Windows installation")