E2E checking Docker registry credentials in package repo #4927
Conversation
Signed-off-by: Rafa Castelblanque <rcastelblanq@vmware.com>
This reverts commit 03e85cf. Signed-off-by: Rafa Castelblanque <rcastelblanq@vmware.com>
✅ Deploy Preview for kubeapps-dev canceled.
.circleci/config.yml
Outdated
@@ -398,7 +398,8 @@ load_kind_images: &load_kind_images
    name: "Load needed images into Kind"
    command: |
      ./script/load-kind-image.sh docker.io/bitnami/apache:2.4.48-debian-10-r74 kubeapps-ci kubeapps-ci-additional &&
      ./script/load-kind-image.sh docker.io/bitnami/apache:2.4.48-debian-10-r75 kubeapps-ci kubeapps-ci-additional &&
      ./script/load-kind-image.sh registry:2 kubeapps-ci kubeapps-ci-additional
Preload registry image into Kind
@@ -501,7 +502,7 @@ run_e2e_tests: &run_e2e_tests
        DEV_TAG=${latest/v/}
        IMG_MODIFIER=""
      fi
      if ./script/e2e-test.sh $USE_MULTICLUSTER_OIDC_ENV $OLM_VERSION $DEV_TAG $IMG_MODIFIER $TEST_TIMEOUT_MINUTES $DEFAULT_DEX_IP $ADDITIONAL_CLUSTER_IP $DOCKER_USERNAME $DOCKER_PASSWORD $KAPP_CONTROLLER_VERSION; then
Getting rid of these CI context variables.
@@ -0,0 +1,16 @@
image:
  repository: local-docker-registry:5600/nginx
Adding a test chart that makes use of an authenticated container image
name: registry-auth
namespace: ci
data:
  htpasswd: dGVzdHVzZXI6JDJ5JDA1JHVLUnAvdUV1aEMxRXpmbzQ1a1NCbk96eUlnV2tKNmk3R1VKTzNhLjVvY0JrZUNBNTZjRWxXCgo=
Credentials for Docker registry
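As an aside (not part of the change itself): the encoded value is easy to inspect, and decoding it shows the user:bcrypt-hash format that the registry's htpasswd auth expects.

```shell
# Decode the secret's htpasswd value; the plaintext is "testuser:" followed
# by a bcrypt hash (registry:2's htpasswd auth only supports bcrypt entries).
echo 'dGVzdHVzZXI6JDJ5JDA1JHVLUnAvdUV1aEMxRXpmbzQ1a1NCbk96eUlnV2tKNmk3R1VKTzNhLjVvY0JrZUNBNTZjRWxXCgo=' | base64 -d
```

A fresh entry could be generated with something like `htpasswd -Bbn testuser <password>` (from apache2-utils) and base64-encoded back.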
Perhaps a comment in the file just so it's obvious this is a local credential for the instantiated docker registry, so we don't panic in the future if we find this :P
Or another thought: I see below that you create the registry-tls secret as part of the script to set up the registry: why not also create this simple opaque secret there, so that we only have the one location where the secret is defined and used? Just a thought, fine either way.
registry-tls is created from the script because it needs some externally generated input (the key file). I prefer putting everything possible in a static yaml file; that is why registry-auth is there, since it does not depend on anything.
But I agree with your suggestion, I'll add a comment.
const axInstance = await utils.getAxiosInstance(page, k.token);
const resourceResp = await axInstance.get(
  `/apis/plugins/resources/v1alpha1/helm.packages/v1alpha1/c/default/ns/default/${appName}`,
);
expect(resourceResp.status).toEqual(200);

let deployment;
resourceResp.data
  .trim()
  .split(/\r?\n/)
  .forEach(r => {
    // Axios doesn't provide streaming responses, so splitting on new line works
    // but gives us a string, not JSON, and may leave a blank line at the end.
    const response = JSON.parse(r);
    const resourceRef = response.result?.resourceRef;
    if (resourceRef.kind === "Deployment" && resourceRef.name.match(appName)) {
      deployment = JSON.parse(response.result?.manifest);
    }
  });

expect(deployment?.spec?.template?.spec?.imagePullSecrets).toEqual([{ name: secretName }]);
We no longer need this. Test '10' actually checks that the chart is correctly deployed using a container image from the authenticated Docker registry.
Right, so this test here is just testing repository authentication, whereas the other will test the image pull secrets. Nice.
@@ -0,0 +1,84 @@
#!/bin/bash
Logic for setting up the Docker registry is placed in this separate script.
Might be worth still specifying
set -o errexit
set -o nounset
set -o pipefail
even though this script is currently sourced from the main e2e which already does this?
Did a quick check with errexit, and functions from sourced files get the same flags applied if invoked from the "sourcer". Therefore we don't need to add it again.
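That behavior can be confirmed with a minimal sketch (assuming a POSIX shell; the file and function names here are throwaway examples):

```shell
# Functions defined in a sourced file run with the sourcing shell's options,
# so "set -o errexit" in the caller also applies to them.
lib=$(mktemp)
cat > "$lib" <<'EOF'
show_flags() { echo "$-"; }
EOF
set -o errexit
. "$lib"
show_flags        # the printed flag string includes "e" (errexit inherited)
rm -f "$lib"
```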
echo "127.0.0.1 $DOCKER_REGISTRY_HOST" | sudo tee -a /etc/hosts

docker pull nginx
docker tag nginx $DOCKER_REGISTRY/nginx

/bin/sh -c "kubectl -n ${REGISTRY_NS} port-forward service/docker-registry 5600:5600 &"
The solution is far from elegant, but it does the trick for pushing from the CI host into the Docker registry inside the cluster.
Setting up a port-forward temporarily is OK I think, we do the same for chartmuseum. But the sleeps are more worrying in my opinion (as the time required is variable). It'd be nice if we could instead wait until the service is available (with a simple curl or something).
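A rough sketch of that suggestion (the function name, URL, and retry count below are assumptions for illustration, not code from the PR):

```shell
# Poll the port-forwarded registry until it answers instead of sleeping a
# fixed amount of time. Any HTTP response counts as "ready" (even a 401 from
# the authenticated registry); -k is needed because the cert is locally minted.
wait_for_registry() {
  url=$1
  retries=${2:-30}
  until curl -sk -o /dev/null "$url"; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
      echo "registry did not become ready in time" >&2
      return 1
    fi
    sleep 1
  done
}
# e.g. wait_for_registry "https://localhost:5600/v2/" 60   (hypothetical endpoint)
```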
docker exec --user root $CONTROL_PLANE_CONTAINER update-ca-certificates -f

# Restart containerd to make the CA addition effective
docker exec --user root $CONTROL_PLANE_CONTAINER systemctl restart containerd
Key step in order to get CA certificates refreshed for Kubelet.
Nice - I bet that one line cost a bit... sometimes I wish it were possible to show how much one line costs after feeling like I've been banging my head against a wall :)
Woah, that's totally true... it seems a tricky one.
I thought this change in the CA could be done as a kubeadm config yaml during the cluster creation in kind (below), but you'll probably have explored many paths and this one is the simplest one for our use case, I guess, so +1 and thanks for coming up with it :P
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
extraArgs:
cluster-signing-cert-file: /etc/kubernetes/pki/ca.crt
cluster-signing-key-file: /etc/kubernetes/pki/ca.key
> I bet that one line cost a bit.

Finding that out unlocked the whole thing for me after banging my head against the wall.

> you'll probably have explored many paths

Indeed, I tried many ways to get it working, including using /etc/kubernetes/pki/ca* to generate the new certificate. That didn't work.
It seemed to me that the issue occurs when the kubelet establishes a TLS connection to contact the image repository. Acting as a client, it uses the set of CAs available from the underlying OS, in which /etc/kubernetes/pki/ca.crt, for some reason, is not included. Therefore, I needed to add a new CA to sign the certificate.
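To illustrate the underlying point: a TLS client such as the kubelet only accepts a server certificate it can chain to a CA in its trust store, which is exactly what update-ca-certificates extends on the node. A self-contained demonstration with throwaway certs (all filenames and CNs here are made up for the example):

```shell
tmp=$(mktemp -d)
# Create a throwaway CA (playing the role of the one installed on the node)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=test-ca" 2>/dev/null
# Key and CSR for the registry host, then sign the cert with that CA
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/reg.key" -out "$tmp/reg.csr" -subj "/CN=local-docker-registry" 2>/dev/null
openssl x509 -req -in "$tmp/reg.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/reg.crt" 2>/dev/null
# Verification only succeeds when the signing CA is in the trust bundle given here
openssl verify -CAfile "$tmp/ca.crt" "$tmp/reg.crt"
```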
secret:
  secretName: registry-tls
containers:
  - image: registry:2
@antgamdia I guess this new dependency needs to be added to OSM.
Yep, you're right. So let's refrain from merging this PR until we trigger the release process, 🙏
Great, thanks for this Rafa... much nicer than just checking the actual secret is included in the yaml.
.circleci/config.yml
Outdated
@@ -398,7 +398,8 @@ load_kind_images: &load_kind_images
    name: "Load needed images into Kind"
    command: |
      ./script/load-kind-image.sh docker.io/bitnami/apache:2.4.48-debian-10-r74 kubeapps-ci kubeapps-ci-additional &&
      ./script/load-kind-image.sh docker.io/bitnami/apache:2.4.48-debian-10-r75 kubeapps-ci kubeapps-ci-additional &&
      ./script/load-kind-image.sh registry:2 kubeapps-ci kubeapps-ci-additional
Why not use the 2.8 tag or something more specific than the major version (here and below in the deployment yaml)? Maybe one less thing to update. Fine either way.
+1, and also I'd suggest: can we replace this registry:2 with something like << pipeline.parameters.DOCKER_REGISTRY_VERSION >>, so that upgrading the version becomes as easy as changing a single param at the very beginning of the file?
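For reference, such a pipeline parameter would look roughly like this in .circleci/config.yml (the parameter name and default are assumptions taken from the comment, not the actual change):

```yaml
parameters:
  DOCKER_REGISTRY_VERSION:
    type: string
    default: "2"

# ...later, wherever the image tag is needed:
#   ./script/load-kind-image.sh registry:<< pipeline.parameters.DOCKER_REGISTRY_VERSION >> kubeapps-ci kubeapps-ci-additional
```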
# Generate new certificate for registry
mkcert -key-file $PROJECT_PATH/devel/localhost-key.pem -cert-file $PROJECT_PATH/devel/docker-registry-cert.pem $DOCKER_REGISTRY_HOST "docker-registry.$REGISTRY_NS.svc.cluster.local"
kubectl -n $REGISTRY_NS delete secret registry-tls --ignore-not-found=true
kubectl -n $REGISTRY_NS create secret tls registry-tls --key $PROJECT_PATH/devel/localhost-key.pem --cert $PROJECT_PATH/devel/docker-registry-cert.pem
As above, I wonder if it's worth creating the registry-auth here as well so you don't need to repeat the encoded secret in the yaml.
Awesome, thanks! It's great that we are finally testing repos with auth; it's going to be a key aspect for properly testing the new repos API!
As I said in a comment, please do not merge the PR until we trigger the release, so that it does not interfere with the already reported deps to OSM.
Description of the change
Until now, the E2E test '03-create-private-package-repository.spec.js' was testing two things:
- that a package repository with authentication could be used, and
- that imagePullSecrets was being filled in the CRD.

Apart from that, E2E tests were failing when running from forked PRs. This was partially solved by @dlaloue-vmware by reorganizing the E2E script parameters.
This PR separates the 03 test into two separate ones, introducing the new 10-deploy-package-with-private-image.spec.js. The new test makes use of a cluster-local Docker registry with authentication.
Benefits
DOCKER_USERNAME and DOCKER_PASSWORD are no longer needed from the CI environment.
The creation of the local authenticated Docker registry adds some scripts that use port-forward, which could introduce some instability when running.
Applicable issues