feat: switch to health checks for init package waits #2964

Closed · wants to merge 3 commits

11 changes: 5 additions & 6 deletions packages/gitea/zarf.yaml
```diff
@@ -63,6 +63,11 @@ components:
         namespace: zarf
         valuesFiles:
           - gitea-values.yaml
+    healthChecks:
+      - name: zarf-gitea
+        namespace: zarf
+        kind: Deployment
+        apiVersion: apps/v1
     actions:
       onDeploy:
         before:
@@ -71,12 +76,6 @@ components:
             - name: GIT_SERVER_CREATE_PVC
               mute: true
         after:
-          - wait:
-              cluster:
-                kind: pod
-                namespace: zarf
-                name: app=gitea
-                condition: Ready
           - cmd: ./zarf internal create-read-only-gitea-user --no-progress
             maxRetries: 3
             maxTotalSeconds: 60
```
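For context on what these `healthChecks` entries do: the review thread below points at the kstatus docs, and kstatus (`sigs.k8s.io/cli-utils`) is the library that decides whether a resource such as this Deployment has fully reconciled. Below is a minimal sketch of that check, assuming a reachable cluster and the `zarf-gitea` Deployment from the diff above; the kubeconfig wiring is illustrative, not Zarf's actual code.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/cli-utils/pkg/kstatus/status"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the Deployment named by the health check in the diff above.
	deploy, err := clientset.AppsV1().Deployments("zarf").Get(context.Background(), "zarf-gitea", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// kstatus computes status from unstructured objects; typed objects from
	// client-go come back with an empty TypeMeta, so set it explicitly.
	obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(deploy)
	if err != nil {
		log.Fatal(err)
	}
	u := &unstructured.Unstructured{Object: obj}
	u.SetAPIVersion("apps/v1")
	u.SetKind("Deployment")
	res, err := status.Compute(u)
	if err != nil {
		log.Fatal(err)
	}
	// status.CurrentStatus means the Deployment has fully reconciled;
	// InProgress, Failed, etc. come with an explanatory message.
	fmt.Printf("%s: %s\n", res.Status, res.Message)
}
```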
13 changes: 5 additions & 8 deletions packages/zarf-agent/zarf.yaml
```diff
@@ -32,6 +32,11 @@ components:
           - manifests/clusterrole.yaml
           - manifests/clusterrolebinding.yaml
           - manifests/serviceaccount.yaml
+    healthChecks:
+      - name: agent-hook
+        namespace: zarf
+        kind: Deployment
+        apiVersion: apps/v1
     actions:
       onCreate:
         before:
@@ ... @@
             windows: pwsh
             dir: ../..
             description: Build the local agent image (if 'AGENT_IMAGE_TAG' was specified as 'local')
-      onDeploy:
-        after:
-          - wait:
-              cluster:
-                kind: pod
-                namespace: zarf
-                name: app=agent-hook
-                condition: Ready
```
14 changes: 5 additions & 9 deletions packages/zarf-registry/zarf.yaml
```diff
@@ -171,12 +171,8 @@ components:
     images:
       # This image (or images) must match that used for injection (see zarf-config.toml)
       - "###ZARF_PKG_TMPL_REGISTRY_IMAGE_DOMAIN######ZARF_PKG_TMPL_REGISTRY_IMAGE###:###ZARF_PKG_TMPL_REGISTRY_IMAGE_TAG###"
-    actions:
-      onDeploy:
-        after:
-          - wait:
-              cluster:
-                kind: deployment
-                namespace: zarf
-                name: app=docker-registry
-                condition: Available
+    healthChecks:
+      - name: zarf-docker-registry
+        namespace: zarf
+        kind: Deployment
+        apiVersion: apps/v1
```
AustinAbro321 (Contributor, PR author):

One interesting thing I'm seeing now that health checks are added: sometimes the following error appears on image push for the agent (the component right after the registry). The failure happens nearly instantaneously and we retry right away, so there is no effect on the user beyond seeing the text. It seems the service is trying to forward to the pod right as it's dying. This didn't happen before because we didn't wait for the seed registry to terminate, and it still happens if I add a health check on the Service.

    Pushing ghcr.io/zarf-dev/zarf/agent:local  0sE0904 09:28:20.759620 3387486 portforward.go:413] an error occurred forwarding 39239 -> 5000: error forwarding port 5000 to pod 8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476, uid : failed to execute portforward in network namespace "/var/run/netns/cni-41dd6540-8191-9e97-19ed-a1ca0d90316c": failed to connect to localhost:5000 inside namespace "8b4ba41141648cc01c39f674e94dfd83f36755ee5416d118cd012a88d0b46476", IPv4: dial tcp4 127.0.0.1:5000: connect: connection refused IPv6 dial tcp6 [::1]:5000: connect: connection refused 
  ✔  Pushed 1 images

I don't think this should stop us from merging, but wanted to take a note of it.

Member:

Is this because we are now too fast to do a port-forward?

AustinAbro321:

I'm thinking it's trying to do a port-forward to the old pod after it died, but the timing is very tight. If it fails at all, it always works on the second try. The UID in the error doesn't match the new pod, so I'm guessing it matches the old one; I'm not sure whether there's an easy way to get deleted pod UIDs.
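One way to check that theory (a minimal sketch, not part of the PR): list the current registry pods and their UIDs, assuming they carry the `app=docker-registry` label used by the wait block this PR removes. A UID in the error that doesn't appear here belonged to a pod that no longer exists.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Label taken from the old `wait` block; adjust if the registry
	// pods are labeled differently.
	pods, err := clientset.CoreV1().Pods("zarf").List(context.Background(), metav1.ListOptions{
		LabelSelector: "app=docker-registry",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// Deleted pods drop out of this list, so a UID from the error that
		// is missing here pointed at a pod that has already been removed.
		fmt.Printf("%s uid=%s phase=%s\n", p.Name, p.UID, p.Status.Phase)
	}
}
```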

AustinAbro321:

This is strange: I did re-runs and watched the UIDs of the registry pods. The UID in the error message matches neither the old registry pod nor the new one.

AustinAbro321:

Current theory: the deployment and service can be ready while the old endpoint slice for the seed registry still exists. This would be consistent with what the kstatus docs say about resources that create other resources.
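That theory can be inspected directly: EndpointSlices are created by the Service's endpoint controller and carry per-endpoint readiness, so a leftover slice for the seed registry would show up in a listing like the sketch below. It assumes the registry Service is named `zarf-docker-registry` in the `zarf` namespace; the rest is illustrative tooling, not part of the PR.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// EndpointSlices carry a well-known label naming their owning Service.
	slices, err := clientset.DiscoveryV1().EndpointSlices("zarf").List(context.Background(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=zarf-docker-registry",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			ready := ep.Conditions.Ready != nil && *ep.Conditions.Ready
			target := "<none>"
			if ep.TargetRef != nil {
				target = fmt.Sprintf("%s/%s uid=%s", ep.TargetRef.Kind, ep.TargetRef.Name, ep.TargetRef.UID)
			}
			// An endpoint that is not ready and targets a pod UID that no
			// longer exists would match the behavior described above.
			fmt.Printf("slice=%s target=%s ready=%t\n", s.Name, target, ready)
		}
	}
}
```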

AustinAbro321 (Sep 4, 2024):

Leaving this here: this is what the error message looks like. It originates from this line in the Kubernetes port-forward code. I think it ends up on the progress bar somehow, since we aren't returning the error here. The retry doesn't surface an error because it always works on the next attempt.
[screenshot of the error message]
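For illustration, here is a toy retry helper (hypothetical, not Zarf's actual implementation) showing why the failure never surfaces as a returned error: once a later attempt succeeds, nil is returned, and the only trace of the first attempt is whatever klog already printed to the terminal.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, returning nil on the first success.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil // a later success discards the earlier failure
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 10*time.Millisecond, func() error {
		calls++
		if calls == 1 {
			// Stand-in for the first push hitting the dying seed-registry pod.
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println(err) // <nil>: the caller never sees the first attempt's error
}
```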

AustinAbro321:

I changed the code to get the pods directly instead of going through the service. As far as I can tell from a script that ran init on my local PC 10 times, we no longer end up with the above error message. The port-forward can still fail once, but it works on the next try, and since the retry function returns nil we never see the error.

18 changes: 17 additions & 1 deletion src/pkg/cluster/tunnel.go
```diff
@@ -6,6 +6,7 @@ package cluster
 
 import (
 	"context"
+	"errors"
 	"fmt"
 	"io"
 	"net/http"
@@ -147,8 +148,23 @@ func (c *Cluster) ConnectToZarfRegistryEndpoint(ctx context.Context, registryInfo
 	var err error
 	var tunnel *Tunnel
 	if registryInfo.IsInternal() {
+		registrySvc, err := c.Clientset.CoreV1().Services(ZarfNamespaceName).Get(ctx, ZarfRegistryName, metav1.GetOptions{})
+		if err != nil {
+			return "", nil, err
+		}
+		selector, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{MatchLabels: registrySvc.Spec.Selector})
+		if err != nil {
+			return "", nil, err
+		}
+		podList, err := c.Clientset.CoreV1().Pods(ZarfNamespaceName).List(ctx, metav1.ListOptions{LabelSelector: selector.String()})
+		if err != nil {
+			return "", nil, err
+		}
+		if len(podList.Items) < 1 {
+			return "", nil, errors.New("no pods for internal registry")
+		}
 		// Establish a registry tunnel to send the images to the zarf registry
-		if tunnel, err = c.NewTunnel(ZarfNamespaceName, SvcResource, ZarfRegistryName, "", 0, ZarfRegistryPort); err != nil {
+		if tunnel, err = c.NewTunnel(ZarfNamespaceName, PodResource, podList.Items[0].Name, "", 0, ZarfRegistryPort); err != nil {
 			return "", tunnel, err
 		}
 	} else {
```