Updating rel docs #6237

Merged
1 commit merged on Dec 2, 2022
7 changes: 7 additions & 0 deletions docs/release/expanded/build_container.md
@@ -0,0 +1,7 @@
# Generate Build Container

1. Set the env variable PATH_TO_KUBERNETES_REPO to the path of your local kubernetes/kubernetes copy: `export PATH_TO_KUBERNETES_REPO="/Users/mtrachier/go/src/github.com/kubernetes/kubernetes"`
1. Set the env variable GOVERSION to the Go version expected by the kubernetes/kubernetes version checked out: `export GOVERSION=$(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' $PATH_TO_KUBERNETES_REPO/build/dependencies.yaml)`
1. Set the env variable GOIMAGE to the container image to base our custom build image on: `export GOIMAGE="golang:${GOVERSION}-alpine3.15"`
1. Set the env variable BUILD_CONTAINER to the contents of a Dockerfile for the build container: `export BUILD_CONTAINER="FROM ${GOIMAGE}\nRUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"`
1. Use Docker to create the build container (a consolidated sketch of these commands follows the list): `echo -e $BUILD_CONTAINER | docker build -t ${GOIMAGE}-dev -`
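
Taken together, the steps above amount to a short shell sequence. A minimal consolidated sketch, assuming a local kubernetes/kubernetes checkout (the path and Alpine tag are illustrative):

```
# Illustrative path -- point this at your own kubernetes/kubernetes clone
export PATH_TO_KUBERNETES_REPO="$HOME/go/src/github.com/kubernetes/kubernetes"
# Read the expected Go version from upstream's dependency manifest
export GOVERSION=$(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' $PATH_TO_KUBERNETES_REPO/build/dependencies.yaml)
export GOIMAGE="golang:${GOVERSION}-alpine3.15"
export BUILD_CONTAINER="FROM ${GOIMAGE}\nRUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"
# Build the throwaway build image from the generated Dockerfile contents
echo -e $BUILD_CONTAINER | docker build -t ${GOIMAGE}-dev -
```
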
13 changes: 13 additions & 0 deletions docs/release/expanded/channel_server.md
@@ -0,0 +1,13 @@
# Update Channel Server

Once the release is verified, the channel server config needs to be updated to reflect the new version for “stable”.  

1. `channel.yaml` can be found at the [root of the K3s repo](https://github.com/k3s-io/k3s/blob/master/channel.yaml).
1. Updating the channel server requires only a single-line change.
1. Release Captains responsible for this change need to update the following stanza to reflect the new stable version of Kubernetes relative to the release in progress (a quick verification sketch follows the example).
1. Example:
```
channels:
- name: stable
  latest: v1.22.12+k3s1
```
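
Once the change is merged and deployed, one way to spot-check the result is to query the channel server directly. A minimal sketch, assuming the public update server endpoint (`update.k3s.io`) serves the stable channel:

```
# The stable channel should redirect to the newly promoted release
curl -sI https://update.k3s.io/v1-release/channels/stable | grep -i location
```
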
96 changes: 96 additions & 0 deletions docs/release/expanded/cut_release.md
@@ -0,0 +1,96 @@
# Cut Release

1. Verify that the merge CI has successfully completed before cutting the RC
1. After the merge CI has completed, cut an RC by creating a release in the GitHub interface (or with the CLI sketch after this list)
1. the title is the version of k3s you are releasing with the rc1 subversion, e.g. "v1.25.0-rc1+k3s1"
1. the target should match the release branch; remember that the latest version is attached to "master"
1. no description
1. the tag should match the title
1. After the RC is cut, validate that the CI for the RC passes
1. After the RC CI passes, notify the release Slack channel about the new RC
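
If you prefer the GitHub CLI over the web interface, the same RC can be cut with one command. A minimal sketch with illustrative version numbers; the `--prerelease` flag is an assumption (RCs are normally marked as pre-releases):

```
# Illustrative tag and branch -- substitute the real values for your release
gh release create "v1.25.0-rc1+k3s1" \
  --repo k3s-io/k3s \
  --target release-1.25 \
  --title "v1.25.0-rc1+k3s1" \
  --prerelease \
  --notes ""
```
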

Example Full Command List (this is not a script!):
```
export SSH_MOUNT_PATH="/var/folders/...krzO/agent.452"
export GLOBAL_GITCONFIG_PATH="/Users/mtrachier/.gitconfig"
export GLOBAL_GIT_CONFIG_PATH="/Users/mtrachier/.gitconfig"
export OLD_K8S="v1.22.14"
export NEW_K8S="v1.22.15"
export OLD_K8S_CLIENT="v0.22.14"
export NEW_K8S_CLIENT="v0.22.15"
export OLD_K3S_VER="v1.22.14-k3s1"
export NEW_K3S_VER="v1.22.15-k3s1"
export RELEASE_BRANCH="release-1.22"
export GOPATH="/Users/mtrachier/go"
export GOVERSION="1.16.15"
export GOIMAGE="golang:1.16.15-alpine3.15"
export BUILD_CONTAINER="FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"

install -d /Users/mtrachier/go/src/github.com/kubernetes
rm -rf /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
git clone --origin upstream https://github.com/kubernetes/kubernetes.git /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
cd /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
git remote add k3s-io https://github.com/k3s-io/kubernetes.git
git fetch --all --tags

# this second fetch should return no more tags pulled, this makes it easier to see pull errors
git fetch --all --tags

# rebase
rm -rf _output
git rebase --onto v1.22.15 v1.22.14 v1.22.14-k3s1~1

# validate go version
echo "GOVERSION is $(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' build/dependencies.yaml)"

# generate build container
echo -e "FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk" | docker build -t golang:1.16.15-alpine3.15-dev -

# run tag.sh
# note user id is 502, I am not root user
docker run --rm -u 502 \
--mount type=tmpfs,destination=/Users/mtrachier/go/pkg \
-v /Users/mtrachier/go/src:/go/src \
-v /Users/mtrachier/go/.cache:/go/.cache \
-v /Users/mtrachier/.gitconfig:/go/.gitconfig \
-e HOME=/go \
-e GOCACHE=/go/.cache \
-w /go/src/github.com/kubernetes/kubernetes golang:1.16.15-alpine3.15-dev ./tag.sh v1.22.15-k3s1 2>&1 | tee ~/tags-v1.22.15-k3s1.log

# generate and run push.sh, make sure to paste in the tag.sh output below
vim push.sh
chmod +x push.sh
./push.sh

install -d /Users/mtrachier/go/src/github.com/k3s-io
rm -rf /Users/mtrachier/go/src/github.com/k3s-io/k3s
git clone --origin upstream https://github.com/k3s-io/k3s.git /Users/mtrachier/go/src/github.com/k3s-io/k3s
cd /Users/mtrachier/go/src/github.com/k3s-io/k3s

git checkout -B v1.22.15-k3s1 upstream/release-1.22
git clean -xfd


# note that sed has different parameters on MacOS than Linux
# also note that zsh is the default MacOS shell and is not bash/dash (the default Linux shells)
sed -Ei '' "\|github.com/k3s-io/kubernetes| s|v1.22.14-k3s1|v1.22.15-k3s1|" go.mod
git diff
sed -Ei '' "s/k8s.io\/kubernetes v.*$/k8s.io\/kubernetes v1.22.15/" go.mod
git diff
sed -Ei '' "s/v0.22.14/v0.22.15/g" go.mod
git diff
go mod tidy

# make sure go version is updated in all locations
vim .github/workflows/integration.yaml
vim .github/workflows/unitcoverage.yaml
vim Dockerfile.dapper
vim Dockerfile.manifest
vim Dockerfile.test

git commit --all --signoff -m "Update to v1.22.15"
git remote add origin https://github.com/matttrach/k3s-1.git
git push --set-upstream origin v1.22.15-k3s1

# use link to generate pull request, make sure your target is the proper release branch 'release-1.22'
```
5 changes: 5 additions & 0 deletions docs/release/expanded/milestones.md
@@ -0,0 +1,5 @@
# Generate Milestones

If no milestones exist in the k3s repo for the releases, generate them.
No due date or description is necessary; they can be updated later as needed.
If you generate new milestones, make sure to post them in the release Slack channel.
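
The GitHub CLI has no dedicated milestone subcommand, but `gh api` can create one; a minimal sketch with hypothetical milestone titles:

```
# Creates one milestone per release being cut (titles are illustrative)
gh api repos/k3s-io/k3s/milestones -f title="v1.25.1+k3s1"
gh api repos/k3s-io/k3s/milestones -f title="v1.24.5+k3s1"
```
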
76 changes: 76 additions & 0 deletions docs/release/expanded/pr.md
@@ -0,0 +1,76 @@
# Generate Pull Request

We update the go.mod in k3s to point to the new modules, and submit the change for review.

1. make sure git is clean before making changes
1. make sure your origin is up to date before making changes
1. check out a new branch for the new k3s version in the local copy using the formal semantic name, e.g. "v1.25.1-k3s1"
1. replace any instances of the old k3s version, e.g. "v1.25.0-k3s1", with the new k3s version, e.g. "v1.25.1-k3s1", in the k3s-io module links
1. replace any instances of the old Kubernetes version, e.g. "v1.25.0", with the new Kubernetes version, e.g. "v1.25.1"
1. replace any instances of the old Kubernetes client-go version, e.g. "v0.25.0", with the new version, e.g. "v0.25.1"
1. sed commands make this process easier (this is not a script):
1. Linux example:
```
sed -Ei "\|github.com/k3s-io/kubernetes| s|${OLD_K3S_VER}|${NEW_K3S_VER}|" go.mod
sed -Ei "s/k8s.io\/kubernetes v\S+/k8s.io\/kubernetes ${NEW_K8S}/" go.mod
sed -Ei "s/$OLD_K8S_CLIENT/$NEW_K8S_CLIENT/g" go.mod
```
1. Mac example:
```
# note that sed has different parameters on MacOS than Linux
# also note that zsh is the default MacOS shell and is not bash/dash (the default Linux shells)
sed -Ei '' "\|github.com/k3s-io/kubernetes| s|${OLD_K3S_VER}|${NEW_K3S_VER}|" go.mod
git diff

sed -Ei '' "s/k8s.io\/kubernetes v.*$/k8s.io\/kubernetes ${NEW_K8S}/" go.mod
git diff

sed -Ei '' "s/${OLD_K8S_CLIENT}/${NEW_K8S_CLIENT}/g" go.mod
git diff

go mod tidy
git diff
```
1. update the extra places where the Go version is pinned to make sure it is correct (a grep sketch for locating these files appears after this list):
1. `.github/workflows/integration.yaml`
1. `.github/workflows/unitcoverage.yaml`
1. `Dockerfile.dapper`
1. `Dockerfile.manifest`
1. `Dockerfile.test`
1. commit the changes and push to your origin
1. make sure to sign your commits
1. make sure to push to "origin" not "upstream", be explicit in your push commands
1. example: `git push -u origin v1.25.1-k3s1`
1. the git output will include a link to generate a pull request, use it
1. make sure the PR is against the proper release branch
1. generating the PR starts several CI processes; most are in GitHub Actions, but one is in Drone. Post the link to the Drone CI run in the PR
1. this keeps everyone on the same page
1. if there is an error in the CI, make sure to note that and what the errors are for reviewers
1. finding error messages:
1. example: https://drone-pr.k3s.io/k3s-io/k3s/4744
1. click the "show all logs" button to see all of the logs
1. search for " failed." this will find a line like "Test bEaiAq failed."
1. search for "err=" and look for a log with the id "bEaiAq" in it
1. example error:
```
#- Tail: /tmp/bEaiAq/agents/1/logs/system.log
[LATEST-SERVER] E0921 19:16:55.430977 57 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs
[LATEST-SERVER] I0921 19:16:55.431186 57 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
```
1. the first part of the log gives a hint to the log level: "E0921" is an error log, "I0921" is an info log
1. you can also look for "Summarizing \d Failure" (I installed a plugin on my browser to get regex search: "Chrome Regex Search")
1. example error:
```
[Fail] [sig-network] DNS [It] should support configurable pod DNS nameservers [Conformance]
```
1. example PR: https://github.com/k3s-io/k3s/pull/6164
1. many errors are flaky/transient; it is usually a good idea to simply retry the CI on the first failure
1. if the same error occurs multiple times then it is a good idea to escalate to the team
1. After the CI passes (or the team dismisses the CI as "flaky"), and you have at least 2 approvals, you can merge it
1. make sure you have 2 approvals on the latest changes
1. make sure the CI passes or the team approves merging without it passing
1. make sure to use the "squash and merge" option in GitHub
1. make sure to update the Slack channel with the new Publish/Merge CI
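
A quick way to locate every place the Go version is pinned is to grep the files listed in the go version step above. A minimal sketch; "1.16" stands in for whatever the old Go minor version is:

```
# Surface every stale Go version reference so none are missed when bumping
grep -rn "1\.16" \
  .github/workflows/integration.yaml \
  .github/workflows/unitcoverage.yaml \
  Dockerfile.dapper Dockerfile.manifest Dockerfile.test
```
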

- Help! My memory usage is off the charts and everything has slowed to a crawl!
- I found that rebooting after running tag.sh was the only way to solve this problem. It seems like a memory leak in VS Code on Mac, or maybe some odd interaction between all of the added/removed files and VS Code's file parser, the CrowdStrike virus scanner, and Docker (my top memory users)
18 changes: 18 additions & 0 deletions docs/release/expanded/rebase.md
@@ -0,0 +1,18 @@
# Rebase

1. clear out any cached or old files: `git add -A; git reset --hard HEAD`
1. clear out any cached or older outputs: `rm -rf _output`
1. rebase your local copy to move the k3s commits from the old k8s tag onto the new k8s tag
1. so there are three copies of the code involved in this process:
1. the upstream kubernetes/kubernetes copy on GitHub
1. the k3s-io/kubernetes copy on GitHub
1. and the local copy on your laptop, which has both of those added as remotes
1. the local copy has every branch and every tag from the remotes you have added
1. there are custom/proprietary commits in the k3s-io copy that are not in the kubernetes copy
1. there are commits in the kubernetes copy that do not exist in the k3s-io copy
1. we want the new commits added to the kubernetes copy to be in the k3s-io copy
1. we want the custom/proprietary commits from the k3s-io copy on top of the new kubernetes commits
1. before the rebase, our local copy has all of the commits, but the custom/proprietary k3s-io commits sit on top of the old kubernetes version rather than the new one
1. after the rebase, our local copy will have the k3s-io custom/proprietary commits on top of the latest kubernetes commits
1. `git rebase --onto $NEW_K8S $OLD_K8S $OLD_K3S_VER~1`
1. After the rebase you will be in a detached HEAD state; this is normal (a worked example with illustrative versions follows)
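
For concreteness, a worked example of the rebase using the illustrative versions from the cut-release notes:

```
# Illustrative tags -- substitute the real versions for your release
export OLD_K8S="v1.22.14"
export NEW_K8S="v1.22.15"
export OLD_K3S_VER="v1.22.14-k3s1"

# Replays the k3s-io custom commits (everything between $OLD_K8S and
# $OLD_K3S_VER~1) on top of the new upstream tag $NEW_K8S
git rebase --onto $NEW_K8S $OLD_K8S $OLD_K3S_VER~1
```
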
36 changes: 36 additions & 0 deletions docs/release/expanded/release_images.md
@@ -0,0 +1,36 @@
# Create Release Images

## Create System Agent Installer Images

The k3s-io/k3s Release CI should dispatch the rancher/system-agent-installer-k3s repo, generating a tag there and triggering the CI to build images.
The system-agent-installer-k3s repository is used with the Rancher v2prov system.
This often fails! Check the CI, and if it was not triggered, do the following:

After RCs are cut you need to manually release system-agent-installer-k3s; this, along with the KDM PR, allows QA to fully test RCs.
This should happen directly after the KDM PR is generated, within a few hours of the release candidate being cut.
These images depend on the release artifact and cannot be generated until after the k3s-io/k3s release CI completes.

1. Create a release in the system-agent-installer-k3s repo
1. it should exactly match the release title in the k3s repo
1. the target is "main" for all releases (no branches)
1. no description
1. make sure to check the "pre-release" checkbox
1. Watch the Drone Publish CI, it should be very quick
1. Verify that the new images appear in Docker Hub

## Create K3S Upgrade Images

The k3s-io/k3s Release CI should dispatch the k3s-io/k3s-upgrade repo, generating a tag there and triggering the CI to build images.
These images depend on the release artifact and cannot be generated until after the k3s-io/k3s release CI completes.
This sometimes fails! Check the CI, and if it was not triggered, do the following:

1. Create a release in the k3s-io/k3s-upgrade repo
1. it should exactly match the release title in the k3s repo
1. the target is "main" for all releases (no branches)
1. no description
1. make sure to check the "pre-release" checkbox
1. Watch the Drone Publish CI, it should be very quick
1. Verify that the new images appear in Docker Hub (a verification sketch follows this list)
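
A quick way to verify the images landed is to pull the new tags. The repository names below match the repos above; the tag format is an assumption (Docker tags cannot contain `+`, so it is conventionally replaced with `-`):

```
# Hypothetical tags for a v1.25.1-rc1+k3s1 release candidate
docker pull rancher/system-agent-installer-k3s:v1.25.1-rc1-k3s1
docker pull rancher/k3s-upgrade:v1.25.1-rc1-k3s1
```
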

Make sure you are in constant communication with QA during this time so that you can cut more RCs if necessary,
update KDM if necessary, radiate information to the rest of the team and help them in any way possible.