
etcd-snapshot save ignores the s3-folder param provided and saves only in the s3-bucket location #9982

Closed · 1 of 2 tasks
aganesh-suse opened this issue Apr 19, 2024 · 1 comment
Labels: kind/bug (Something isn't working), status/blocker
Issue found on master branch with version v1.29.4-rc1+k3s1

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"

$ uname -m
x86_64

Cluster Configuration:

HA: 3 server/ 1 agent

Config.yaml:

token: xxxx
cluster-init: true
write-kubeconfig-mode: "0644"
node-external-ip: 1.1.1.1
node-label:
- k3s-upgrade=server

Testing Steps

  1. Copy config.yaml:
$ sudo mkdir -p /etc/rancher/k3s && sudo cp config.yaml /etc/rancher/k3s
  2. Install k3s:
$ curl -sfL https://get.k3s.io | sudo INSTALL_K3S_VERSION='v1.29.4-rc1+k3s1' sh -s - server
  3. Verify cluster status:
$ kubectl get nodes -o wide
$ kubectl get pods -A
  4. Perform etcd-snapshot save with the s3 details provided:
$ sudo /usr/local/bin/k3s etcd-snapshot save --s3 --s3-bucket=<bucket> --s3-folder=<folder> --s3-region=<region> --s3-access-key=xxxx --s3-secret-key="xxxx" --debug

Expected behavior:

The saved snapshot should be in the folder location provided.
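
For reference, with these flags the saved object's URI should have the shape below; <bucket> and <folder> are the placeholders from the command, and the node name and timestamp parts are illustrative:

s3://<bucket>/<folder>/on-demand-<node-name>-<timestamp>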

Reproducing Results/Observations:

  • k3s version used for reproduction:
$ k3s -v
k3s version v1.29.4-rc1+k3s1 (d973fadb)
go version go1.21.9
$ sudo /usr/local/bin/k3s etcd-snapshot save --s3 --s3-bucket=<bucket> --s3-folder=<folder> --s3-region=<region> --s3-access-key=xxxx --s3-secret-key="xxxx" --debug 
time="2024-04-19T03:49:23Z" level=warning msg="Unknown flag --cluster-init found in config.yaml, skipping\n"
time="2024-04-19T03:49:23Z" level=warning msg="Unknown flag --write-kubeconfig-mode found in config.yaml, skipping\n"
time="2024-04-19T03:49:23Z" level=warning msg="Unknown flag --node-external-ip found in config.yaml, skipping\n"
time="2024-04-19T03:49:23Z" level=warning msg="Unknown flag --node-label found in config.yaml, skipping\n"
time="2024-04-19T03:49:23Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
time="2024-04-19T03:49:24Z" level=info msg="Snapshot on-demand-ip-172-31-23-173-1713498563 saved."
time="2024-04-19T03:49:24Z" level=info msg="Snapshot on-demand-ip-172-31-23-173-1713498563 saved."

In AWS S3, the snapshot ends up at the root of the s3-bucket; it is not found under the s3-folder path that was provided.
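
This can be double-checked outside of k3s. A minimal sketch using the AWS CLI, assuming it is configured with the same access/secret keys as the command above (<bucket> and <folder> are the same placeholders):

# The snapshot object shows up at the bucket root:
$ aws s3 ls "s3://<bucket>/" | grep on-demand-ip-172-31-23-173-1713498563
# ...but nothing is listed under the prefix passed via --s3-folder:
$ aws s3 ls "s3://<bucket>/<folder>/" --recursive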

Only the relevant line of the kubectl output is copied below, with the actual location redacted:

 $ sudo /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get etcdsnapshotfile
 .
 .
s3-on-demand-ip-172-31-23-173-1713498563-f953f5                       on-demand-ip-172-31-23-173-1713498563                       ip-172-31-23-173   s3://<s3-bucket>/on-demand-ip-172-31-23-173-1713498563
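
To print just the snapshot name and location, a kubectl custom-columns query can be used. This is a sketch; the .spec.location field path is an assumption based on the LOCATION column shown above:

$ sudo /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get etcdsnapshotfile -o custom-columns=NAME:.metadata.name,LOCATION:.spec.location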
aganesh-suse (Author) commented:

Validated on master branch with commit d3b6054

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"

$ uname -m
x86_64

Cluster Configuration:

HA: 3 server/ 1 agent

Config.yaml:

token: xxxx
cluster-init: true
write-kubeconfig-mode: "0644"
node-external-ip: 1.1.1.1
node-label:
- k3s-upgrade=server

Testing Steps

  1. Copy config.yaml:
$ sudo mkdir -p /etc/rancher/k3s && sudo cp config.yaml /etc/rancher/k3s
  2. Install k3s:
$ curl -sfL https://get.k3s.io | sudo INSTALL_K3S_COMMIT='d3b60543e7df924881854108984593aafb557d3c' sh -s - server
  3. Verify cluster status:
$ kubectl get nodes -o wide
$ kubectl get pods -A
  4. Perform etcd-snapshot save with the s3 details provided:
$ sudo /usr/local/bin/k3s etcd-snapshot save --s3 --s3-bucket=<bucket> --s3-folder=<folder> --s3-region=<region> --s3-access-key=xxxx --s3-secret-key="xxxx" --debug
  5. Verify that the etcdsnapshotfile location for the S3 snapshot includes both the s3-bucket and the s3-folder:
$ sudo /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get etcdsnapshotfile

Validation Results:

  • k3s version used for validation:
$ k3s -v
k3s version v1.29.4+k3s-d3b60543 (d3b60543)
go version go1.21.9
 $ sudo /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get etcdsnapshotfile
NAME                                                 SNAPSHOTNAME                            NODE               LOCATION                                                                                   SIZE      CREATIONTIME
local-on-demand-ip-172-31-16-180-1713803552-1fc677   on-demand-ip-172-31-16-180-1713803552   ip-172-31-16-180   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-172-31-16-180-1713803552      5529632   2024-04-22T16:32:32Z
local-on-demand-ip-172-31-16-180-1713803621-e205c5   on-demand-ip-172-31-16-180-1713803621   ip-172-31-16-180   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-172-31-16-180-1713803621      5726240   2024-04-22T16:33:41Z
s3-on-demand-ip-172-31-16-180-1713803552-60bff9      on-demand-ip-172-31-16-180-1713803552   ip-172-31-16-180   s3://<s3-bucket>/<s3-folder>/on-demand-ip-172-31-16-180-1713803552   5529632   2024-04-22T16:32:32Z
s3-on-demand-ip-172-31-16-180-1713803621-db8c7c      on-demand-ip-172-31-16-180-1713803621   ip-172-31-16-180   s3://<s3-bucket>/<s3-folder>/on-demand-ip-172-31-16-180-1713803621   5726240   2024-04-22T16:33:41Z
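
The listing can also be cross-checked from the k3s CLI itself via the etcd-snapshot ls subcommand, which accepts the same S3 flags (a sketch using the same placeholder credentials as above):

$ sudo /usr/local/bin/k3s etcd-snapshot ls --s3 --s3-bucket=<bucket> --s3-folder=<folder> --s3-region=<region> --s3-access-key=xxxx --s3-secret-key="xxxx"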

As shown above, the s3-folder location is now honored. Closing the issue.
