Update for k8s 1.13

Use k8s 1.13 & 1.12 for CI, update docs.
ivan4th committed Feb 25, 2019
1 parent e128049 commit 4e5d241eb344eecaeeb461f93f977c2f04a15410
@@ -169,8 +169,8 @@ e2e: &e2e
elif [[ ${CIRCLE_JOB} = e2e_multi_cni ]]; then
export MULTI_CNI=1
echo >&2 "*** Using multiple CNIs (flannel + calico)"
-elif [[ ${CIRCLE_JOB} = e2e_1_11 ]]; then
-  export KUBE_VERSION=1.11
+elif [[ ${CIRCLE_JOB} = e2e_1_12 ]]; then
+  export KUBE_VERSION=1.12
fi
# APISERVER_PORT is set explicitly to avoid dynamic allocation
# of the port by kdc
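After this change, `e2e_1_12` is the only job that pins an older Kubernetes release; every other e2e job runs against the 1.13 default. An equivalent sketch of the dispatch (the real config uses the `elif` chain shown above; `CIRCLE_JOB` is set by CircleCI itself):

```bash
# Only e2e_1_12 overrides KUBE_VERSION; all other jobs fall through
# to the 1.13 default that demo.sh now sets.
case "${CIRCLE_JOB:-}" in
  e2e_1_12) export KUBE_VERSION=1.12 ;;
  *) ;; # KUBE_VERSION keeps its 1.13 default
esac
```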
@@ -391,7 +391,7 @@ jobs:
e2e_multi_cni:
<<: *e2e

-e2e_1_11:
+e2e_1_12:
<<: *e2e

push_branch:
@@ -526,7 +526,7 @@ workflows:
only: /^master$|^.*-net$/
tags:
only: /^v[0-9].*/
- - e2e_1_11:
+ - e2e_1_12:
requires:
- build
filters:
@@ -548,7 +548,7 @@ workflows:
- e2e_flannel
- e2e_weave
- e2e_multi_cni
- - e2e_1_11
+ - e2e_1_12
- integration
filters:
branches:
@@ -73,7 +73,7 @@ The demo script will check for KVM support on the host and will make Virtlet use

The demo is based on the [kubeadm-dind-cluster](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) project. **Docker btrfs storage driver is currently unsupported.** Please refer to the `kubeadm-dind-cluster` documentation for more info.

-You can remove the test cluster with `./dind-cluster-v1.12.sh clean` when you no longer need it.
+You can remove the test cluster with `./dind-cluster-v1.13.sh clean` when you no longer need it.

## External projects using Virtlet
There are some external projects using Virtlet already.
@@ -4,7 +4,7 @@ set -o nounset
set -o pipefail
set -o errtrace

-DIND_SCRIPT="${DIND_SCRIPT:-$HOME/dind-cluster-v1.12.sh}"
+DIND_SCRIPT="${DIND_SCRIPT:-$HOME/dind-cluster-v1.13.sh}"
circle_token_file="$HOME/.circle-token"

job_num="${1:-}"
@@ -41,7 +41,7 @@ cd virtlet-circle-dump
url="$(curl -sSL -u "${CIRCLE_TOKEN}:" "${base_url}/${job_num}/artifacts" |
jq -r '.[]|select(.path=="tmp/cluster_state/kdc-dump.gz")|.url')"
echo >&2 "Getting cluster dump from ${url}"
-curl -sSL "${url}" | gunzip | ~/dind-cluster-v1.12.sh split-dump
+curl -sSL "${url}" | gunzip | ~/dind-cluster-v1.13.sh split-dump

url="$(curl -sSL -u "${CIRCLE_TOKEN}:" "${base_url}/${job_num}/artifacts" |
jq -r '.[]|select(.path=="tmp/cluster_state/virtlet-dump.json.gz")|.url')"
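Judging from the variables above (`circle_token_file`, `job_num`), the dump script expects a CircleCI API token in `~/.circle-token` and the CI build number as its first argument. A hypothetical invocation (the script's file name is illustrative):

```bash
# One-time setup: store a CircleCI API token where the script looks for it.
echo 'YOUR_CIRCLE_TOKEN' > ~/.circle-token
chmod 600 ~/.circle-token
# Fetch the kdc and Virtlet dumps for CI build 1234.
./circle-dump.sh 1234
```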
@@ -5,7 +5,7 @@ set -o nounset
set -o pipefail
set -o errtrace

-KUBE_VERSION="${KUBE_VERSION:-1.12}"
+KUBE_VERSION="${KUBE_VERSION:-1.13}"
CRIPROXY_DEB_URL="${CRIPROXY_DEB_URL:-https://github.com/Mirantis/criproxy/releases/download/v0.14.0/criproxy-nodeps_0.14.0_amd64.deb}"
NONINTERACTIVE="${NONINTERACTIVE:-}"
NO_VM_CONSOLE="${NO_VM_CONSOLE:-}"
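Since `KUBE_VERSION` is a plain environment override, the older CI-tested release can still be exercised locally, for example (run from a Virtlet checkout):

```bash
# Run the demo non-interactively against Kubernetes 1.12
# instead of the new 1.13 default.
KUBE_VERSION=1.12 NONINTERACTIVE=1 ./deploy/demo.sh
```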
@@ -4,7 +4,7 @@ Right now, the persistent rootfs for VMs is initialized by the means
of `qemu-img convert`, which converts a QCOW2 image to a raw one and
writes the result over a block device mapped on the host. It's
possible to overcome the problem by utilizing the new persistent
-volume snapshotting feature in Kubernetes 1.12. It's also possible to
+volume snapshotting feature in Kubernetes 1.13. It's also possible to
implement a solution for VMs which use a local libvirt volume as their
root filesystem. In both cases, we'll need to add a CRD and a
controller for managing persistent root filesystems.
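For reference, the current approach described above boils down to something like the following (image and device paths are illustrative):

```bash
# Convert the QCOW2 image to raw format, writing the result directly
# over the block device mapped on the host for the VM's root volume.
qemu-img convert -O raw ubuntu-18.04.qcow2 /dev/mapper/virtlet-root-volume
```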
@@ -113,7 +113,7 @@ actions on libvirt volume objects.

## Using persistent volume snapshots

-Kubernetes v1.12 adds support for
+Kubernetes v1.13 adds support for
[persistent volume snapshots](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/)
for CSI, which is supported by the Ceph CSI driver, among others. We can
keep a pool of snapshots that correspond to different images. After
@@ -154,7 +154,7 @@ VirtletVMIdentitySet yaml and the name of the pod).

## Appendix A. Experimenting with Ceph CSI and external-snapshotter

-For this experiment, kubeadm-dind-cluster with k8s 1.12 was used.
+For this experiment, kubeadm-dind-cluster with k8s 1.13 was used.
The following settings were applied:
```console
$ export FEATURE_GATES="BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true"
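$ # (sketch, not part of this hunk) with the feature gates exported,
$ # the cluster would then be brought up as elsewhere in these docs:
$ ~/dind-cluster-v1.13.sh up
```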
@@ -7,10 +7,10 @@ In your shell clone the Virtlet repository and download `virtletctl` binary:
git clone https://github.com/Mirantis/virtlet.git
chmod 600 virtlet/examples/vmkey
-wget https://github.com/Mirantis/virtlet/releases/download/v1.4.1/virtletctl
+wget https://github.com/Mirantis/virtlet/releases/download/v1.4.4/virtletctl
chmod +x virtletctl
-wget https://storage.googleapis.com/kubernetes-release/release/v1.12.3/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.13.3/bin/linux/amd64/kubectl
chmod +x kubectl
mkdir -p ~/bin
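The snippet presumably goes on to put the binaries on `PATH`; the exact lines fall outside this hunk, but the likely continuation is:

```bash
# Assumed follow-up: move virtletctl and kubectl into ~/bin
# and make sure ~/bin is on PATH.
mv virtletctl kubectl ~/bin
export PATH="$HOME/bin:$PATH"
```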
@@ -13,12 +13,12 @@ You'll need the following to run the local environment:
be enough, but please follow the Docker documentation for your Linux
distribution),
* [kubeadm-dind-cluster](https://github.com/kubernetes-sigs/kubeadm-dind-cluster/)
-script for Kubernetes version 1.12 (`dind-cluster-v1.12.sh`).
+script for Kubernetes version 1.13 (`dind-cluster-v1.13.sh`).

You can get the cluster startup script like this:
```
-$ wget -O ~/dind-cluster-v1.12.sh https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.12.sh
-$ chmod +x ~/dind-cluster-v1.12.sh
+$ wget -O ~/dind-cluster-v1.13.sh https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
+$ chmod +x ~/dind-cluster-v1.13.sh
```

## Running the local environment
@@ -34,7 +34,7 @@ $ # build Virtlet binaries & the image
$ build/cmd.sh build
$ # start DIND cluster
-$ ~/dind-cluster-v1.12.sh up
+$ ~/dind-cluster-v1.13.sh up
$ # copy binaries to kube-node-1
$ build/cmd.sh copy-dind
@@ -50,7 +50,7 @@ $ build/cmd.sh e2e -test.v -ginkgo.focus="Should have default route"
$ # Restart the DIND cluster. Binaries from copy-dind are preserved
$ # (you may copy newer ones with another copy-dind command)
-$ ~/dind-cluster-v1.12.sh up
+$ ~/dind-cluster-v1.13.sh up
$ # start Virtlet daemonset again
$ build/cmd.sh start-dind
@@ -261,7 +261,7 @@ kube-system weave-net-kz698 2/2 Running 0 69m
kube-system   weave-net-rbnmf   2/2   Running   1   69m
root@k8s-0:~# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
-k8s-0   Ready    master   69m   v1.12.2
-k8s-1   Ready    <none>   69m   v1.12.2
-k8s-2   Ready    <none>   69m   v1.12.2
+k8s-0   Ready    master   69m   v1.13.3
+k8s-1   Ready    <none>   69m   v1.13.3
+k8s-2   Ready    <none>   69m   v1.13.3
```
@@ -4,13 +4,13 @@ The steps described here are performed automatically by
the [demo.sh](https://github.com/Mirantis/virtlet/blob/master/deploy/demo.sh) script.

1. Start [kubeadm-dind-cluster](https://github.com/kubernetes-sigs/kubeadm-dind-cluster)
-with Kubernetes version 1.12 (you're not required to download it to your home directory).
+with Kubernetes version 1.13 (you're not required to download it to your home directory).
The cluster script stores the appropriate kubectl version in `~/.kubeadm-dind-cluster`.

-wget -O ~/dind-cluster-v1.12.sh \
-  https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.12.sh
-chmod +x ~/dind-cluster-v1.12.sh
-~/dind-cluster-v1.12.sh up
+wget -O ~/dind-cluster-v1.13.sh \
+  https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.1.0/dind-cluster-v1.13.sh
+chmod +x ~/dind-cluster-v1.13.sh
+~/dind-cluster-v1.13.sh up
export PATH="$HOME/.kubeadm-dind-cluster:$PATH"

1. Label a node to accept Virtlet pod:
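(The labeling command itself falls outside this hunk; Virtlet's deploy docs use the `extraRuntime=virtlet` label, so the step looks roughly like this, with the node name being illustrative:)

```bash
kubectl label node kube-node-1 extraRuntime=virtlet
```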
@@ -87,7 +87,7 @@ for testing, you can use this command to start the cluster with
FEATURE_GATES="BlockVolume=true" \
KUBELET_FEATURE_GATES="BlockVolume=true" \
ENABLE_CEPH=1 \
-./dind-cluster-v1.12.sh up
+./dind-cluster-v1.13.sh up
```

[ubuntu-vm-local-block-pv.yaml](ubuntu-vm-local-block-pv.yaml)
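Once the cluster is up with the gates above, the example can be applied in the usual way (the path is relative to this document's directory):

```bash
kubectl apply -f ubuntu-vm-local-block-pv.yaml
```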
