Hi,
I'm having trouble trying to migrate my cluster from RKE to EKS.
I followed the instructions in this documentation: https://rancher.com/docs/rancher/v2.x/en/backups/v2.5/migrating-rancher/.
After applying the Restore resource on the target cluster, I got this status message on the resource:
Message: failed to check s3 bucket:s3-rancher2-backups, err:Head https://<bucket_name>.s3.dualstack.eu-west-3.amazonaws.com/: net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C/20201104/eu-west-3/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<redacted>" for key Authorization
Here are the logs of the rancher-backup pod (this loops endlessly):
INFO[2020/11/04 10:00:34] Processing Restore CR restore-migration
INFO[2020/11/04 10:00:34] Restoring from backup xxx.tar.gz
INFO[2020/11/04 10:00:34] invoking set s3 service client s3-accessKey="\x00\xa2\x00X\x1d\x96@.\xc0,>\x8b@\xa1C" s3-bucketName=<bucket_name> s3-endpoint=s3.eu-west-3.amazonaws.com s3-endpoint-ca= s3-folder= s3-region=eu-west-3
I guess there is something wrong with the accessKey?
I added the AWS credentials following the instructions, with the accessKey and secretKey in plain text.
Here are the Restore resource and the credentials secret I applied.
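They looked roughly like this, based on the rancher-backup chart's Restore CRD — the secret name and namespace are placeholders rather than my exact values:

apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: xxx.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: s3-rancher2-backups
      region: eu-west-3
      endpoint: s3.eu-west-3.amazonaws.com
---
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
  namespace: default
type: Opaque
data:
  accessKey: <access key pasted as-is>    # not base64-encoded
  secretKey: <secret key pasted as-is>    # not base64-encoded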
I tried to remove these and re-apply without success. I also tried to drop the whole cluster and re-create it, without success.
I may have missed something, but I can't find what :(
Thanks for your help!
If anyone passes by and sees this: the problem was that the secret was created from a YAML file. After 2 days on the problem, I just re-created the secret directly with kubectl.
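Something like the following, which builds the secret from literal values rather than a manifest (the secret name and namespace here are placeholders):

kubectl create secret generic s3-creds \
  --namespace default \
  --from-literal=accessKey=<AWS_ACCESS_KEY_ID> \
  --from-literal=secretKey=<AWS_SECRET_ACCESS_KEY>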
I confirm this works. For some reason, when I make the secret from YAML and base64-encode the keys myself, it doesn't work; it seems you have to pass the literal values through kubectl.
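A plausible explanation, though not confirmed in this thread: hand-written manifests make it easy to ship a corrupted value.

# An AWS access key ID consists of valid base64 characters, so if the plain-text
# key is put under `data:` the API server accepts it and the operator decodes it
# into binary garbage like the "\x00\xa2\x00X..." value in the log above.
# Encoding by hand has its own trap: echo appends a trailing newline to the value.
echo "AKIAEXAMPLEKEY" | base64      # encodes the key plus a trailing "\n"
echo -n "AKIAEXAMPLEKEY" | base64   # encodes only the key
# Using stringData: in the manifest, or kubectl --from-literal as above, sidesteps both.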