quadlet kube: correctly mark unit as failed #25481
Conversation
It is very bad practice to print to stdout in our backend code without any real context. The exact same error message is returned to the caller and printed in the CLI frontend where it should be. Therefore drop this print as it is redundant. Signed-off-by: Paul Holzinger <pholzing@redhat.com>
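As an illustration only (hypothetical function names, not the Podman call site), a minimal sketch of the pattern described above: the backend returns the error, and only the CLI frontend prints it.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// startUnit is a hypothetical backend function: it returns the error
// to the caller instead of printing it to stdout itself.
func startUnit(name string) error {
	if name == "" {
		return errors.New("unit name must not be empty")
	}
	return nil
}

func main() {
	// The CLI frontend is the single place where the error gets printed.
	if err := startUnit(""); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}
```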
The first condition checks an error where no error is returned, and the second checks err != nil even though err == nil was already matched above, so we know the error cannot be nil there. Also replace os.IsNotExist(err) with errors.Is(err, os.ErrNotExist), as that is what should be used in new code. This should not change behavior in any way. Signed-off-by: Paul Holzinger <pholzing@redhat.com>
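A minimal sketch of the preferred error check, with a hypothetical path and call site (the real Podman code differs); errors.Is also matches wrapped errors, while os.IsNotExist only inspects the top-level error value:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path used for illustration only.
	_, err := os.Stat("/run/podman/some-file")

	// Preferred for new code: errors.Is unwraps wrapped errors.
	if errors.Is(err, os.ErrNotExist) {
		fmt.Println("file does not exist")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	}
}
```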
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Luap99. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Ephemeral COPR build failed. @containers/packit-build please check.
Force-pushed from 68c31df to bacab64
ygalblum left a comment:
Small nits
When no containers could be started we need to make sure the unit status reflects this. This means we should not send the READY=1 message and should not keep the service container running when we were unable to start any container. There is the question of what should happen when only a subset was started. For systemd we can only be either running or failed, and since podman kube play also just keeps the partially started pods running, I opted to let systemd keep considering this a success. Fixes containers#20667 Fixes https://issues.redhat.com/browse/RHEL-80471 Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Force-pushed from bacab64 to 945aade
/lgtm
setRanContainers := func(r *entities.PlayKubeReport) {
	if !ranContainers {
		for _, p := range r.Pods {
			// If the list of container errors is less than the total number of pod containers then we know it didn't start.
Should the comment be that it did start?
Oh yes, I guess I'll just sneak it into my next PR as a small typo-fix commit.
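A rough sketch of the check discussed above, with illustrative stand-in types (the real entities.PlayKubeReport fields in Podman may differ): fewer container errors than containers in any pod means at least one container did start.

```go
package main

import "fmt"

// Illustrative stand-ins for entities.PlayKubeReport; not the real Podman types.
type pod struct {
	Containers      []string
	ContainerErrors []string
}

type playKubeReport struct {
	Pods []pod
}

// setRanContainers flips ranContainers to true as soon as any pod reports
// fewer container errors than containers, i.e. at least one container started.
func setRanContainers(r *playKubeReport, ranContainers *bool) {
	if !*ranContainers {
		for _, p := range r.Pods {
			if len(p.ContainerErrors) < len(p.Containers) {
				*ranContainers = true
				break
			}
		}
	}
}

func main() {
	report := &playKubeReport{Pods: []pod{{
		Containers:      []string{"ctr1", "ctr2"},
		ContainerErrors: []string{"ctr2 failed to start"},
	}}}
	ranContainers := false
	setRanContainers(report, &ranContainers)
	fmt.Println("ranContainers:", ranContainers) // true: ctr1 did start
}
```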
Merged commit 919247a into containers:main
It did start there, as pointed out by Ygal on containers#25481. Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When no containers could be started we need to make sure the unit status
reflects this. This means we should not send the READY=1 message and should
not keep the service container running when we were unable to start any
container.
There is the question of what should happen when only a subset was started.
For systemd we can only be either running or failed, and since podman kube
play also just keeps the partially started pods running, I opted to let
systemd keep considering this a success.
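A rough sketch of the intended behavior, assuming the go-systemd daemon package for sd_notify purely for illustration; the actual quadlet/service-container wiring in Podman is more involved and the variable names here are hypothetical:

```go
package main

import (
	"fmt"
	"os"

	"github.com/coreos/go-systemd/v22/daemon"
)

// notifyIfStarted sends READY=1 to systemd only when at least one container
// actually started; otherwise it returns an error so the caller can exit
// non-zero and the unit is marked as failed.
func notifyIfStarted(ranContainers bool) error {
	if !ranContainers {
		return fmt.Errorf("no containers started")
	}
	// SdNotify is a no-op when NOTIFY_SOCKET is not set in the environment.
	_, err := daemon.SdNotify(false, daemon.SdNotifyReady)
	return err
}

func main() {
	// In real code this flag would come from the kube play report.
	ranContainers := false
	if err := notifyIfStarted(ranContainers); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}
```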
Fixes #20667
Fixes https://issues.redhat.com/browse/RHEL-80471
And two more minor fixes; see the first two commits.
Does this PR introduce a user-facing change?