Cluster Migration: failed to check s3 bucket: invalid header field value #53

Closed
zeylos opened this issue Nov 4, 2020 · 3 comments

zeylos commented Nov 4, 2020

Hi,

I'm having trouble trying to migrate my cluster from RKE to EKS.
I followed the instructions in this documentation: https://rancher.com/docs/rancher/v2.x/en/backups/v2.5/migrating-rancher/.
After applying the Restore resource on the target cluster, I get this status message on the resource:

    Message:              failed to check s3 bucket:s3-rancher2-backups, err:Head https://<bucket_name>.s3.dualstack.eu-west-3.amazonaws.com/: net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C/20201104/eu-west-3/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<redacted>" for key Authorization

Here are the logs of the rancher-backup pod (this loops endlessly):

INFO[2020/11/04 10:00:34] Processing Restore CR restore-migration
INFO[2020/11/04 10:00:34] Restoring from backup xxx.tar.gz
INFO[2020/11/04 10:00:34] invoking set s3 service client                s3-accessKey="\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C" s3-bucketName=<bucket_name> s3-endpoint=s3.eu-west-3.amazonaws.com s3-endpoint-ca= s3-folder= s3-region=eu-west-3

I guess there is something buggy with the accessKey?
I added the AWS credentials following the instructions, with accessKey and secretKey in plain text.
Here is the resource I applied:

---
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: xxx.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: <bucket_name>
      region: eu-west-3
      endpoint: s3.eu-west-3.amazonaws.com

And the secret:

---
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
type: Opaque
data:
  accessKey: ABCDABCDABCD
  secretKey: ABcdAbcd

I tried to remove these and re-apply them without success; I also tried to drop the whole cluster and re-create it, without success.
I may have missed something, but I can't find what :(

Thanks for your help!


zeylos commented Nov 4, 2020

If anyone passes by and sees this: the problem was that the secret was created from a YAML file. After 2 days on the problem, I just re-created the secret like so:

kubectl create secret generic s3-creds --from-literal=accessKey=ABCDABCD --from-literal=secretKey=AbcdAbcd

and everything worked flawlessly.

I guess there was a hidden extra newline or something that blew everything up.
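
A quick way to confirm that theory (a sketch, assuming the secret name and namespace used above) is to decode what the cluster actually stored; a stray trailing newline or raw binary shows up immediately in the hex dump:

kubectl get secret s3-creds -n default -o jsonpath='{.data.accessKey}' | base64 -d | xxd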

Sorry for the bother, and thanks for your work!

zeylos closed this as completed Nov 4, 2020
@vincent99

The values for secrets are always base64-encoded. --from-literal does that for you; putting them directly into YAML doesn't.
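
For reference, a YAML equivalent that stays plain-text (a sketch reusing the names from above) is the stringData field, which the API server base64-encodes for you:

---
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
type: Opaque
stringData:
  accessKey: ABCDABCDABCD
  secretKey: ABcdAbcd

If you use data: instead, each value has to be the base64 encoding of the exact bytes, e.g. produced with echo -n 'ABCDABCDABCD' | base64.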

@brooksphilip

I confirm this works. For some reason, when I make the secret from the YAML and base64-encode the keys myself, it doesn't work. It seems you have to use --from-literal through kubectl.
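
One likely culprit when encoding by hand (an assumption, not verified against this setup): echo without -n appends a newline, which then gets baked into the decoded key:

echo 'ABCDABCD' | base64      # QUJDREFCQ0QK  (trailing newline included)
echo -n 'ABCDABCD' | base64   # QUJDREFCQ0Q=  (exact bytes only)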
