ci: move hubble connectivity tests to nightly pipeline #2286
Conversation
* cilium configmap
* update hubble configs and add metrics test
* update pipeline yaml
* separate cilium+hubble config
matmerr left a comment
good stuff, thanks @jshr-w!
```sh
echo "deploy Cilium ConfigMap"
kubectl apply -f cilium/configmap.yaml
kubectl apply -f test/integration/manifests/cilium/cilium${FILE_PATH}-config.yaml
kubectl apply -f test/integration/manifests/cilium/hubble/hubble-peer-svc.yaml
```
This is overwriting the current cilium-config configMap which is being set on the line above. Did the tests for nightly version of cilium pass?
Are we only trying to use this configmap (and test hubble) for the nightly pipeline?
This is actually a service targeting the cilium pods; the configmap is applied below, after the Azilium stages have completed.
If preferred, @jshr-w, you can move the service installation to the test stage below. I don't have a strong preference, as this is a cilium component, albeit one not used yet.
My bad, misread it as I was reviewing. Is there any reasoning for overwriting the configMap further down instead of here?
test/hubble/hubble_test.go (Outdated)
```go
flag.StringVar(&kubeconfigPath, "kubeconfig", getDefaultKubeconfigPath(), "Path to the kubeconfig file")
flag.Parse()

config, err := getClientConfig(kubeconfigPath)
```
nit: We have the kubernetes package that provides k8s API calls: https://github.com/Azure/azure-container-networking/tree/master/test/internal/kubernetes.
The function that could replace this is:
azure-container-networking/test/internal/kubernetes/utils.go
Lines 54 to 60 in 0b45d15
```go
func MustGetRestConfig() *rest.Config {
	config, err := clientcmd.BuildConfigFromFlags("", *Kubeconfig)
	if err != nil {
		panic(err)
	}
	return config
}
```
true, I forgot these were here :)
hack/toolbox/server/Dockerfile.heavy (Outdated)
```dockerfile
    unzip \
    vim \
    wget
```
nit: trailing whitespace. You can configure your IDE to take care of this automatically.
```sh
kubectl rollout restart ds cilium -n kube-system
echo "wait 3 minutes for pods to be ready after restart"
sleep 180s
```
What pods need to be ready?
The Cilium pods, which are being restarted on L160.
In lieu of a static sleep, this could be a `kubectl wait`, e.g.:

```sh
kubectl wait --for=condition=ready pod -l k8s-app=cilium --timeout=3m
```

IMO this suite could benefit from using `kubectl wait` in several places instead of sleeps, including `kubectl wait --for=delete pod/ciliumidentity/etc`, but that's out of scope for this PR :)
Would `kubectl rollout status ds -n kube-system cilium --timeout=3m` be just as effective for this case?
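To sketch what either suggestion could look like in the pipeline script (a hypothetical step; the step name and script layout here are illustrative, not taken from this repo's pipeline yaml):

```yaml
# Hypothetical Azure Pipelines step; names are illustrative only.
- script: |
    kubectl rollout restart ds cilium -n kube-system
    # Block until the restarted DaemonSet reports ready, instead of `sleep 180s`.
    # Either of these works; `rollout status` tracks the restart's own
    # revision, while `wait` checks pod readiness by label:
    kubectl rollout status ds cilium -n kube-system --timeout=3m
    # kubectl wait --for=condition=ready pod -l k8s-app=cilium -n kube-system --timeout=3m
  displayName: Restart Cilium and wait for readiness
```

Both commands exit non-zero on timeout, so a hung restart fails the stage rather than silently proceeding after a fixed sleep.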
Reason for Change:
This PR moves the Hubble connectivity tests to the Cilium nightly pipeline by enabling Hubble metrics.
Issue Fixed:
Requirements:
Notes: