RuntimeProxy: Support hook server deployed by k8s pod #718
Conversation
Force-pushed from 4bf1e8d to 535992d
Codecov Report: Base: 68.85% // Head: 68.71% // Decreases project coverage by -0.14%.
Additional details and impacted files@@ Coverage Diff @@
## main #718 +/- ##
==========================================
- Coverage 68.85% 68.71% -0.14%
==========================================
Files 205 205
Lines 23319 23378 +59
==========================================
+ Hits 16057 16065 +8
- Misses 6152 6207 +55
+ Partials 1110 1106 -4
Force-pushed from faac717 to b93c736
If the hook server is deployed as a k8s pod, a cyclic dependency can arise: when the hook server is updated, the CRI request to create the new hook server pod goes from kubelet to runtime-proxy and then to the hook server itself; by that time the previous hook server pod may have already exited, so the call to the hook server times out. If RuntimeHookConfig.FailurePolicy is configured as a failing FailurePolicyType, such a hook server pod would always be rejected by RuntimeProxy and could never be re-created. To fix this issue, we introduce an annotation to tag a pod as the hook server. For such tagged pods, runtime-proxy transparently forwards the CRI request to the backend runtime engine, breaking the cyclic dependency. Signed-off-by: honpey <honpey@gmail.com>
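The passthrough decision described above can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the annotation key and the helper name are hypothetical, and the real code would consult the pod metadata carried in the CRI request.

```go
package main

import "fmt"

// Hypothetical annotation key for illustration only; the actual key
// introduced by this PR may differ.
const annotationHookServer = "runtimeproxy.koordinator.sh/skip-hook-server"

// shouldBypassHookServer reports whether runtime-proxy should forward the
// CRI request straight to the backend runtime engine, skipping the hook
// server, so that creating the hook server pod itself does not depend on
// a hook server that is no longer running.
func shouldBypassHookServer(podAnnotations map[string]string) bool {
	return podAnnotations[annotationHookServer] == "true"
}

func main() {
	hookServerPod := map[string]string{annotationHookServer: "true"}
	normalPod := map[string]string{}
	fmt.Println(shouldBypassHookServer(hookServerPod)) // true: pass through to the runtime engine
	fmt.Println(shouldBypassHookServer(normalPod))     // false: invoke the hook server as usual
}
```

Untagged pods keep the normal hook path, so only the self-hosted hook server pod escapes the cycle.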
Force-pushed from b93c736 to 2fde2d4
So how about exposing the hook server in Koordlet instead of running a new daemonset on all nodes?
In fact, this is already the case: the hooks are implemented in koordlet, and the koordlet process is the hook server.
klog.Errorf("fail to call hook server %v", err)
} else if response == nil {
	// when hook is not registered, the response will become nil
	klog.Warningf("runtime hook path %s does not register any PreHooks", string(runtimeHookPath))
maybe level klog.V(4).Info is enough for no hook registered
this log would be removed in next patch
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Ⅰ. Describe what this PR does
Ⅱ. Does this pull request fix one issue?
Ⅲ. Describe how to verify it
Ⅳ. Special notes for reviews
Ⅴ. Checklist
make test