docs: Endpoints are local to the node on which the cilium agent is running. #24017
Conversation
@qmonnet thanks for the ping!
In the policy creation tutorial we're scaling down the deathstar deployment to 1 so that queries to the deathstar svc always hit one pod and thus one specific node. The node count itself depends on how the cluster was created at the Setup Cilium step (e.g. four nodes with kind, one with minikube).
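(For illustration, a minimal sketch of that scale-down, assuming the demo's `deathstar` Deployment and the `class=deathstar` label from the manifests:)

```
# Scale the demo deathstar Deployment down to a single replica so every
# request to the deathstar service hits the same pod, and thus one node.
kubectl scale deployment deathstar --replicas=1
kubectl get pods -l class=deathstar -o wide   # shows which node the pod landed on
```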
> In a multi-node setup, I don't think we can assume that all pods will be deployed on a given single node. Then, no single `cilium endpoint list` invocation could give the full endpoints list, correct?

Yes

I noticed as well, this PR uses the right name.

Sounds safe, but the assumption could be confusing maybe? I think the change in this PR makes sense, even if we lose the copy/paste simplicity.
> In a multi-node setup, I don't think we can assume that all pods will be deployed on a given single node. Then, no single `cilium endpoint list` invocation could give the full endpoints list, correct?

Sounds correct to me, so yeah the examples in the doc likely assume a single pod, but I'd be in favour of the change proposed here, to avoid any risk of confusion. Any objection from your side?
No objection, but IMHO adding a note about not all endpoints showing on a multi-node installation should be considered, because the proposed patch doesn't fully clear up the potential confusion.
It makes sense. I added a clarification regarding multi-node installations, and left a `ds/cilium` example in for the single-noders.
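(To illustrate the multi-node note, a minimal sketch of listing endpoints from every agent, assuming the daemonset's standard `k8s-app=cilium` label:)

```
# Each cilium agent only knows about endpoints local to its own node,
# so iterate over all agent pods to get the full picture.
for pod in $(kubectl -n kube-system get pods -l k8s-app=cilium -o name); do
  echo "--- $pod ---"
  kubectl -n kube-system exec "$pod" -c cilium-agent -- cilium endpoint list
done
```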
Looks good, thank you.
Note that for some reason (rebase issue?) you have a commit from Joe in your PR. Can you please clean it up?
Commit a4aa8708e0124271e7a142175550dd12a10d4001 does not contain "Signed-off-by". Please follow instructions provided in https://docs.cilium.io/en/stable/contributing/development/contributing_guide/#developer-s-certificate-of-origin
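(A minimal sketch of one way to fix this, assuming the unsigned commit is the latest one on the branch:)

```
# Add the missing Signed-off-by trailer to the most recent commit,
# then update the PR branch.
git commit --amend --signoff
git push --force-with-lease
```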
Changes LGTM, thanks!
When you are working on cleaning up commit history, can you also squash your commits into one?
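(For reference, a minimal sketch of one way to do this, assuming the PR branch is based on `origin/main`:)

```
# Interactively rebase onto the base branch: mark the stray commit as
# "drop" and the extra commits as "squash"/"fixup", then force-push.
git fetch origin
git rebase -i origin/main
git push --force-with-lease
```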
The example assumed that there was only one node as invoking against ds/cilium just picks one pod in the daemonset.

Signed-off-by: Tony Norlin <tony.norlin@localdomain.se>
Sorry, I don't know what happened there to get that old January commit from Joe (I indeed did something bad with the rebase). Hopefully it looks better now that I've squashed the commits! :)
Looks all good, thank you!
Perfect, thank you!
Doc-only change, no need for a full CI run. All reviews are in and the resulting documentation looks good, so marking accordingly.
The example assumed that there was only one node as invoking against ds/cilium just picks one pod in the daemonset.
As shown:
```
✦ ❯ kubectl -n kube-system exec ds/cilium -- cilium endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
1210 Disabled Disabled 1 k8s:node-role.kubernetes.io/control-plane ready
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
2421 Disabled Disabled 4 reserved:health 10.0.4.118 ready
✦ ❯ kubectl -n kube-system exec cilium-zlndj -- cilium endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
970 Disabled Disabled 46385 k8s:app.kubernetes.io/name=tiefighter 10.0.3.181 ready
k8s:class=tiefighter
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostport-2431
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=hostport-2431
k8s:org=empire
1002 Disabled Disabled 4 reserved:health 10.0.3.236 ready
1493 Disabled Disabled 57677 k8s:app.kubernetes.io/name=xwing 10.0.3.77 ready
k8s:class=xwing
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostport-2431
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=hostport-2431
k8s:org=alliance
1624 Disabled Disabled 1326 k8s:app.kubernetes.io/name=tiefighter 10.0.3.209 ready
k8s:class=tiefighter
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=demo
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=demo
k8s:org=empire
2495 Disabled Disabled 16567 k8s:app.kubernetes.io/name=deathstar 10.0.3.40 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=hostport-2431
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=hostport-2431
k8s:org=empire
2522 Disabled Disabled 1 reserved:host
```
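(As a complement, a minimal sketch of targeting the agent on the node where the deathstar pod runs, instead of picking a pod at random via `ds/cilium`; it assumes the standard `k8s-app=cilium` daemonset label and the demo's `class=deathstar` pod label:)

```
# Find the node hosting the deathstar pod, then exec into the cilium
# agent pod scheduled on that same node.
NODE=$(kubectl get pod -l class=deathstar -o jsonpath='{.items[0].spec.nodeName}')
AGENT=$(kubectl -n kube-system get pods -l k8s-app=cilium \
  --field-selector spec.nodeName="$NODE" -o name)
kubectl -n kube-system exec "$AGENT" -c cilium-agent -- cilium endpoint list
```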