Update prow #60

Merged
merged 25 commits into from Jan 10, 2020

Changes from all commits
25 commits
83a4ef1
Adding build for ppc64le
Pensu Nov 19, 2019
003c14b
Add snapshotter CRDs after cluster setup
ggriffiths Nov 12, 2019
8adde49
Merge pull request #45 from ggriffiths/snapshot_beta_crds
k8s-ci-robot Nov 25, 2019
6d674a7
Merge pull request #47 from Pensu/multi-arch
k8s-ci-robot Nov 26, 2019
80bba1f
Use kind v0.6.0
darkowlzz Nov 29, 2019
4ff2f5f
Merge pull request #50 from darkowlzz/kind-0.6.0
k8s-ci-robot Dec 2, 2019
9a7a685
Create a kind cluster with two worker nodes so that the topology feat…
msau42 Dec 3, 2019
53888ae
Improve README by adding an explicit Kubernetes dependency section
msau42 Dec 4, 2019
4ad6949
Improve snapshot pod running checks and improve version_gt
ggriffiths Dec 4, 2019
d7c69d2
Merge pull request #51 from msau42/enable-multinode
k8s-ci-robot Dec 4, 2019
771ca6f
Merge pull request #49 from ggriffiths/prowsh_improve_version_gt
k8s-ci-robot Dec 4, 2019
a4e6299
fix syntax for ppc64le build
msau42 Dec 4, 2019
540599b
Merge pull request #53 from msau42/fix-make
k8s-ci-robot Dec 4, 2019
9ace020
Merge pull request #52 from msau42/update-readme
k8s-ci-robot Dec 6, 2019
b98b2ae
Enable snapshot tests in 1.17 to be run in non-alpha jobs.
msau42 Dec 17, 2019
9f1f3dd
Merge pull request #56 from msau42/enable-snapshots
k8s-ci-robot Dec 18, 2019
fc80975
Fix version_gt to work with kubernetes prefix
ggriffiths Dec 21, 2019
f6c74b3
Merge pull request #57 from ggriffiths/version_gt_kubernetes_fix
k8s-ci-robot Dec 23, 2019
af9549b
Update prow hostpath driver version to 1.3.0-rc2
saad-ali Jan 2, 2020
5f444b8
Merge pull request #60 from saad-ali/updateHostpathVersion
k8s-ci-robot Jan 2, 2020
8b0316c
Fix overriding of junit results by using unique names for each e2e run
msau42 Jan 2, 2020
3c463fb
Merge pull request #61 from msau42/enable-snapshots
k8s-ci-robot Jan 2, 2020
8191eab
Update snapshotter to version v2.0.0
ggriffiths Jan 8, 2020
4cc9174
Merge pull request #64 from ggriffiths/snapshotter_2_version_update
k8s-ci-robot Jan 8, 2020
fe1a2bb
Merge commit '4cc91745737774fd504332ae8423751e6159b75c' into prow-upd…
msau42 Jan 9, 2020
26 changes: 15 additions & 11 deletions release-tools/README.md
@@ -141,17 +141,6 @@ The `vendor` directory is optional. It is still present in projects
because it avoids downloading sources during CI builds. If this is no
longer deemed necessary, then a project can also remove the directory.

When using packages that are part of the Kubernetes source code, the
commands above are not enough because the [lack of semantic
versioning](https://github.com/kubernetes/kubernetes/issues/72638)
prevents `go mod` from finding newer releases. Importing directly from
`kubernetes/kubernetes` also needs `replace` statements to override
the fake `v0.0.0` versions
(https://github.com/kubernetes/kubernetes/issues/79384). The
`go-get-kubernetes.sh` script can be used to update all packages in
lockstep to a different Kubernetes version. It takes a single version
number like "1.16.0".

Conversion of a repository that uses `dep` to `go mod` can be done with:

GO111MODULE=on go mod init
@@ -160,3 +149,18 @@ Conversion of a repository that uses `dep` to `go mod` can be done with:
GO111MODULE=on go mod vendor
git rm -f Gopkg.toml Gopkg.lock
git add go.mod go.sum vendor

### Updating Kubernetes dependencies

When using packages that are part of the Kubernetes source code, the
commands above are not enough because the [lack of semantic
versioning](https://github.com/kubernetes/kubernetes/issues/72638)
prevents `go mod` from finding newer releases. Importing directly from
`kubernetes/kubernetes` also needs `replace` statements to override
the fake `v0.0.0` versions
(https://github.com/kubernetes/kubernetes/issues/79384). The
`go-get-kubernetes.sh` script can be used to update all packages in
lockstep to a different Kubernetes version. Example usage:
```
$ ./release-tools/go-get-kubernetes.sh 1.16.4
```
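Under the hood the update boils down to moving each `k8s.io/*` module to the tag that belongs to the chosen release. A minimal sketch of that idea, assuming the `kubernetes-1.16.4` tag scheme and an illustrative subset of modules (the authoritative module list and the `replace` handling live in `go-get-kubernetes.sh`):
```
# Illustrative sketch only -- the module list and tag scheme are assumptions.
for dep in k8s.io/api k8s.io/apimachinery k8s.io/client-go; do
    GO111MODULE=on go get "${dep}@kubernetes-1.16.4"
done
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
```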
1 change: 1 addition & 0 deletions release-tools/build.make
@@ -70,6 +70,7 @@ build-%: check-go-version-go
CGO_ENABLED=0 GOOS=linux go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$* ./cmd/$*
if [ "$$ARCH" = "amd64" ]; then \
CGO_ENABLED=0 GOOS=windows go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$*.exe ./cmd/$* ; \
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$*-ppc64le ./cmd/$* ; \
fi

container-%: build-%
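The new `GOARCH=ppc64le` line in `build.make` above corresponds roughly to the following standalone invocation; the binary name, version, and command path are placeholders for illustration only:
```
# Illustrative stand-alone cross-compile; ./cmd/csi-foo and the version are made up.
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build -a \
    -ldflags '-X main.version=v1.2.3 -extldflags "-static"' \
    -o ./bin/csi-foo-ppc64le ./cmd/csi-foo
```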
150 changes: 135 additions & 15 deletions release-tools/prow.sh
@@ -107,8 +107,7 @@ configvar CSI_PROW_GO_VERSION_GINKGO "${CSI_PROW_GO_VERSION_BUILD}" "Go version
# kind version to use. If the pre-installed version is different,
# the desired version is downloaded from https://github.com/kubernetes-sigs/kind/releases/download/
# (if available), otherwise it is built from source.
# TODO: https://github.com/kubernetes-csi/csi-release-tools/issues/39
configvar CSI_PROW_KIND_VERSION "86bc23d84ac12dcb56a0528890736e2c347c2dc3" "kind"
configvar CSI_PROW_KIND_VERSION "v0.6.0" "kind"

# ginkgo test runner version to use. If the pre-installed version is
# different, the desired version is built from source.
@@ -133,7 +132,7 @@ configvar CSI_PROW_BUILD_JOB true "building code in repo enabled"
# use the same settings as for "latest" Kubernetes. This works
# as long as there are no breaking changes in Kubernetes, like
# deprecating or changing the implementation of an alpha feature.
configvar CSI_PROW_KUBERNETES_VERSION 1.15.3 "Kubernetes"
configvar CSI_PROW_KUBERNETES_VERSION 1.17.0 "Kubernetes"

# This is a hack to workaround the issue that each version
# of kind currently only supports specific patch versions of
@@ -143,7 +142,6 @@ configvar CSI_PROW_KUBERNETES_VERSION 1.15.3 "Kubernetes"
#
# If the version is prefixed with "release-", then nothing
# is overridden.
override_k8s_version "1.14.6"
override_k8s_version "1.15.3"

# CSI_PROW_KUBERNETES_VERSION reduced to first two version numbers and
@@ -189,7 +187,7 @@ configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csip
#
# When no deploy script is found (nothing in `deploy` directory,
# CSI_PROW_HOSTPATH_REPO=none), nothing gets deployed.
configvar CSI_PROW_HOSTPATH_VERSION "v1.2.0" "hostpath driver"
configvar CSI_PROW_HOSTPATH_VERSION "v1.3.0-rc2" "hostpath driver"
configvar CSI_PROW_HOSTPATH_REPO https://github.com/kubernetes-csi/csi-driver-host-path "hostpath repo"
configvar CSI_PROW_DEPLOYMENT "" "deployment"
configvar CSI_PROW_HOSTPATH_DRIVER_NAME "hostpath.csi.k8s.io" "the hostpath driver name"
@@ -207,9 +205,9 @@ configvar CSI_PROW_HOSTPATH_CANARY "" "hostpath image"
#
# CSI_PROW_E2E_REPO=none disables E2E testing.
# TODO: remove versioned variables and make e2e version match k8s version
configvar CSI_PROW_E2E_VERSION_1_14 v1.14.0 "E2E version for Kubernetes 1.14.x"
configvar CSI_PROW_E2E_VERSION_1_15 v1.15.0 "E2E version for Kubernetes 1.15.x"
configvar CSI_PROW_E2E_VERSION_1_16 v1.16.0 "E2E version for Kubernetes 1.16.x"
configvar CSI_PROW_E2E_VERSION_1_17 v1.17.0 "E2E version for Kubernetes 1.17.x"
# TODO: add new CSI_PROW_E2E_VERSION entry for future Kubernetes releases
configvar CSI_PROW_E2E_VERSION_LATEST master "E2E version for Kubernetes master" # testing against Kubernetes master is already tracking a moving target, so we might as well use a moving E2E version
configvar CSI_PROW_E2E_REPO_LATEST https://github.com/kubernetes/kubernetes "E2E repo for Kubernetes >= 1.13.x" # currently the same for all versions
@@ -279,6 +277,14 @@ tests_need_alpha_cluster () {
tests_enabled "parallel-alpha" "serial-alpha"
}

# Regex for non-alpha, feature-tagged tests that should be run.
#
# Starting with 1.17, snapshots are beta, but the E2E tests still have the
# [Feature:] tag. They need to be explicitly enabled.
configvar CSI_PROW_E2E_FOCUS_1_15 '^' "non-alpha, feature-tagged tests for Kubernetes = 1.15" # no tests to run, match nothing
configvar CSI_PROW_E2E_FOCUS_1_16 '^' "non-alpha, feature-tagged tests for Kubernetes = 1.16" # no tests to run, match nothing
configvar CSI_PROW_E2E_FOCUS_LATEST '\[Feature:VolumeSnapshotDataSource\]' "non-alpha, feature-tagged tests for Kubernetes >= 1.17"
configvar CSI_PROW_E2E_FOCUS "$(get_versioned_variable CSI_PROW_E2E_FOCUS "${csi_prow_kubernetes_version_suffix}")" "non-alpha, feature-tagged tests"

# Serial vs. parallel is always determined by these regular expressions.
# Individual regular expressions are separated by spaces for readability
@@ -314,21 +320,27 @@ configvar CSI_PROW_E2E_ALPHA "$(get_versioned_variable CSI_PROW_E2E_ALPHA "${csi
# kubernetes-csi components must be updated, either by disabling
# the failing test for "latest" or by updating the test and not running
# it anymore for older releases.
configvar CSI_PROW_E2E_ALPHA_GATES_1_14 'VolumeSnapshotDataSource=true,ExpandCSIVolumes=true' "alpha feature gates for Kubernetes 1.14"
configvar CSI_PROW_E2E_ALPHA_GATES_1_15 'VolumeSnapshotDataSource=true,ExpandCSIVolumes=true' "alpha feature gates for Kubernetes 1.15"
configvar CSI_PROW_E2E_ALPHA_GATES_1_16 'VolumeSnapshotDataSource=true' "alpha feature gates for Kubernetes 1.16"
# TODO: add new CSI_PROW_ALPHA_GATES_xxx entry for future Kubernetes releases and
# add new gates to CSI_PROW_E2E_ALPHA_GATES_LATEST.
configvar CSI_PROW_E2E_ALPHA_GATES_LATEST 'VolumeSnapshotDataSource=true' "alpha feature gates for latest Kubernetes"
configvar CSI_PROW_E2E_ALPHA_GATES_LATEST '' "alpha feature gates for latest Kubernetes"
configvar CSI_PROW_E2E_ALPHA_GATES "$(get_versioned_variable CSI_PROW_E2E_ALPHA_GATES "${csi_prow_kubernetes_version_suffix}")" "alpha E2E feature gates"

# Which external-snapshotter tag to use for the snapshotter CRD and snapshot-controller deployment
configvar CSI_SNAPSHOTTER_VERSION 'v2.0.0' "external-snapshotter version tag"

# Some tests are known to be unusable in a KinD cluster. For example,
# stopping kubelet with "ssh <node IP> systemctl stop kubelet" simply
# doesn't work. Such tests should be written in a way that they verify
# whether they can run with the current cluster provider, but until
# they are, we filter them out by name. Like the other test selection
# variables, this is again a space separated list of regular expressions.
configvar CSI_PROW_E2E_SKIP 'Disruptive' "tests that need to be skipped"
#
# "different node" test skips can be removed once
# https://github.com/kubernetes/kubernetes/pull/82678 has been backported
# to all the K8s versions we test against
configvar CSI_PROW_E2E_SKIP 'Disruptive|different\s+node' "tests that need to be skipped"

# This is the directory for additional result files. Usually set by Prow, but
# if not (for example, when invoking manually) it defaults to the work directory.
@@ -524,6 +536,7 @@ apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
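A sketch of how a config like the one above is typically passed to kind; the cluster name, node image, and timeout here are assumptions, and the exact invocation in `start_cluster` may differ:
```
# Sketch only: create the multi-node test cluster from the generated config.
kind create cluster --name csi-prow --config "${CSI_PROW_WORK}/kind-config.yaml" \
    --image kindest/node:v1.17.0 --wait 5m
```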

# kubeadm has API dependencies between apiVersion and Kubernetes version
@@ -576,8 +589,7 @@ EOF
die "Cluster creation failed again, giving up. See the 'kind-cluster' artifact directory for additional logs."
fi
fi
KUBECONFIG="$(kind get kubeconfig-path --name=csi-prow)"
export KUBECONFIG
export KUBECONFIG="${HOME}/.kube/config"
}

# Deletes kind cluster inside a prow job
@@ -670,6 +682,60 @@ install_hostpath () {
fi
}

# Installs all necessary snapshotter CRDs
install_snapshot_crds() {
# Wait until volumesnapshot CRDs are in place.
CRD_BASE_DIR="https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${CSI_SNAPSHOTTER_VERSION}/config/crd"
kubectl apply -f "${CRD_BASE_DIR}/snapshot.storage.k8s.io_volumesnapshotclasses.yaml" --validate=false
kubectl apply -f "${CRD_BASE_DIR}/snapshot.storage.k8s.io_volumesnapshots.yaml" --validate=false
kubectl apply -f "${CRD_BASE_DIR}/snapshot.storage.k8s.io_volumesnapshotcontents.yaml" --validate=false
cnt=0
until kubectl get volumesnapshotclasses.snapshot.storage.k8s.io \
&& kubectl get volumesnapshots.snapshot.storage.k8s.io \
&& kubectl get volumesnapshotcontents.snapshot.storage.k8s.io; do
if [ $cnt -gt 30 ]; then
echo >&2 "ERROR: snapshot CRDs not ready after over 1 min"
exit 1
fi
echo "$(date +%H:%M:%S)" "waiting for snapshot CRDs, attempt #$cnt"
cnt=$((cnt + 1))
sleep 2
done
}

# Install snapshot controller and associated RBAC, retrying until the pod is running.
install_snapshot_controller() {
kubectl apply -f "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${CSI_SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml"
cnt=0
until kubectl get clusterrolebinding snapshot-controller-role; do
if [ $cnt -gt 30 ]; then
echo "Cluster role bindings:"
kubectl describe clusterrolebinding
echo >&2 "ERROR: snapshot controller RBAC not ready after over 5 min"
exit 1
fi
echo "$(date +%H:%M:%S)" "waiting for snapshot RBAC setup to complete, attempt #$cnt"
cnt=$((cnt + 1))
sleep 10
done


kubectl apply -f "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${CSI_SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml"
cnt=0
expected_running_pods=$(curl https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/"${CSI_SNAPSHOTTER_VERSION}"/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml | grep replicas | cut -d ':' -f 2-)
while [ "$(kubectl get pods -l app=snapshot-controller | grep 'Running' -c)" -lt "$expected_running_pods" ]; do
if [ $cnt -gt 30 ]; then
echo "snapshot-controller pod status:"
kubectl describe pods -l app=snapshot-controller
echo >&2 "ERROR: snapshot controller not ready after over 5 min"
exit 1
fi
echo "$(date +%H:%M:%S)" "waiting for snapshot controller deployment to complete, attempt #$cnt"
cnt=$((cnt + 1))
sleep 10
done
}
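Taken together, the intended order during cluster bring-up is roughly the following sketch (the actual wiring happens in `main` further down):
```
# Sketch of the call order once the cluster is up:
install_snapshot_crds        # apply and wait for the three VolumeSnapshot CRDs
install_snapshot_controller  # apply RBAC + deployment and wait for Running pods
kubectl get pods -l app=snapshot-controller
```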

# collect logs and cluster status (like the version of all components, Kubernetes version, test version)
collect_cluster_info () {
cat <<EOF
@@ -786,10 +852,6 @@ run_e2e () (
install_e2e || die "building e2e.test failed"
install_ginkgo || die "installing ginkgo failed"

# TODO (?): multi-node cluster (depends on https://github.com/kubernetes-csi/csi-driver-host-path/pull/14).
# When running on a multi-node cluster, we need to figure out where the
# hostpath driver was deployed and set ClientNodeName accordingly.

generate_test_driver >"${CSI_PROW_WORK}/test-driver.yaml" || die "generating test-driver.yaml failed"

# Rename, merge and filter JUnit files. Necessary in case that we run the E2E suite again
@@ -940,6 +1002,34 @@ make_test_to_junit () {
fi
}

# version_gt returns true if arg1 is greater than arg2.
#
# This function expects versions to be one of the following formats:
# X.Y.Z, release-X.Y.Z, vX.Y.Z
#
# where X, Y, and Z are any number.
#
# Partial versions (1.2, release-1.2) work as well.
# The following substrings are stripped before version comparison:
# - "v"
# - "release-"
# - "kubernetes-"
#
# Usage:
# version_gt release-1.3 v1.2.0 (returns true)
# version_gt v1.1.1 v1.2.0 (returns false)
# version_gt 1.1.1 v1.2.0 (returns false)
# version_gt 1.3.1 v1.2.0 (returns true)
# version_gt 1.1.1 release-1.2.0 (returns false)
# version_gt 1.2.0 1.2.2 (returns false)
function version_gt() {
versions=$(for ver in "$@"; do ver=${ver#release-}; ver=${ver#kubernetes-}; echo "${ver#v}"; done)
greaterVersion=${1#"release-"};
greaterVersion=${greaterVersion#"kubernetes-"};
greaterVersion=${greaterVersion#"v"};
test "$(printf '%s' "$versions" | sort -V | head -n 1)" != "$greaterVersion"
}
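A quick sanity check of the comparison, driven purely by the function's exit status (illustrative):
```
# Illustrative checks; the && / || branches show the boolean result.
version_gt kubernetes-1.17.0 1.16.255 && echo "1.17.0 is newer"      # prints
version_gt v1.15.3 release-1.16.0     || echo "1.15.3 is not newer"  # prints
```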

main () {
local images ret
ret=0
@@ -1000,6 +1090,16 @@ main () {
if tests_need_non_alpha_cluster; then
start_cluster || die "starting the non-alpha cluster failed"

# Install necessary snapshot CRDs and snapshot controller
# For Kubernetes 1.17+, we will install the CRDs and snapshot controller.
if version_gt "${CSI_PROW_KUBERNETES_VERSION}" "1.16.255" || [ "${CSI_PROW_KUBERNETES_VERSION}" = "latest" ]; then
info "Version ${CSI_PROW_KUBERNETES_VERSION}, installing CRDs and snapshot controller"
install_snapshot_crds
install_snapshot_controller
else
info "Version ${CSI_PROW_KUBERNETES_VERSION}, skipping CRDs and snapshot controller"
fi

# Installing the driver might be disabled.
if install_hostpath "$images"; then
collect_cluster_info
@@ -1019,6 +1119,16 @@
warn "E2E parallel failed"
ret=1
fi

# Run tests that are feature tagged, but non-alpha
# Ignore: Double quote to prevent globbing and word splitting.
# shellcheck disable=SC2086
if ! run_e2e parallel-features ${CSI_PROW_GINKO_PARALLEL} \
-focus="External.Storage.*($(regex_join "${CSI_PROW_E2E_FOCUS}"))" \
-skip="$(regex_join "${CSI_PROW_E2E_SERIAL}")"; then
warn "E2E parallel features failed"
ret=1
fi
fi

if tests_enabled "serial"; then
@@ -1037,6 +1147,16 @@
# Need to (re)create the cluster.
start_cluster "${CSI_PROW_E2E_ALPHA_GATES}" || die "starting alpha cluster failed"

# Install necessary snapshot CRDs and snapshot controller
# For Kubernetes 1.17+, we will install the CRDs and snapshot controller.
if version_gt "${CSI_PROW_KUBERNETES_VERSION}" "1.16.255" || [ "${CSI_PROW_KUBERNETES_VERSION}" = "latest" ]; then
info "Version ${CSI_PROW_KUBERNETES_VERSION}, installing CRDs and snapshot controller"
install_snapshot_crds
install_snapshot_controller
else
info "Version ${CSI_PROW_KUBERNETES_VERSION}, skipping CRDs and snapshot controller"
fi

# Installing the driver might be disabled.
if install_hostpath "$images"; then
collect_cluster_info