v1.7 backports 2020-04-02 #10833
Conversation
[ upstream commit d12853f ] After each test case, we retrieve Cilium logs to look for known-bad log messages. Daniel recently noticed that tests containing known-bad message logs are passing. The issue is in the kubectl logs command we use to retrieve the logs. Its behavior seems to have changed at some point. When used with a label selector---as we do---the option --tail has a default value of 10 instead of -1. We are therefore only seeing the 10 last lines of each Cilium pod, regardless of the --since option's value. Reported-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Paul Chaignon <paul@cilium.io>
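The fix amounts to passing `--tail=-1` explicitly whenever `kubectl logs` is invoked with a label selector. A minimal Go sketch of building such a command (the helper name and argument layout are illustrative, not Cilium's actual test code):

```go
package main

import "fmt"

// buildLogsCmd assembles a kubectl logs invocation. When a label selector
// is used, kubectl defaults --tail to 10 rather than -1, so the full logs
// are only retrieved if --tail=-1 is passed explicitly.
func buildLogsCmd(selector, since string) []string {
	return []string{
		"kubectl", "logs",
		"--selector=" + selector,
		"--since=" + since,
		"--tail=-1", // override the selector-mode default of 10
	}
}

func main() {
	fmt.Println(buildLogsCmd("k8s-app=cilium", "5m"))
	// → [kubectl logs --selector=k8s-app=cilium --since=5m --tail=-1]
}
```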
[ upstream commit f0049da ] CNP schema validation was incorrectly formatted for some fields, which could cause badly formatted YAML files to be accepted by kube-apiserver, bypassing the schema validation. This would later cause Cilium to print errors and could potentially prevent it from starting, as the invalid CNPs would keep Cilium from fully synchronizing with kube-apiserver, an operation that is essential when starting Cilium. This commit fixes all violations reported by Kubernetes for the CCNP and CNP validation, so that bad CNPs and/or CCNPs are denied by kube-apiserver. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit ff863a8 ] As Cilium might need to update its CRD validation schema, it is important for users to make sure all policies installed in their cluster are valid from the point of view of the new CRD validation schema before performing an upgrade. Skipping this validation might prevent Cilium from updating its NodeStatus in those invalid network policies and, in the worst-case scenario, might give the user a false sense of security if a policy is badly formatted and Cilium is not enforcing that policy due to a failed validation. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit 6a9ceba ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit bb9c843 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
never-tell-me-the-odds
My commits LGTM
My changes LGTM, thanks
Legitimate failure by the looks:
Re:
[ upstream commit a2d6217 ] This commit removes the ephemeral port range check for NodePort from the BPF datapath. Instead, in the agent we check whether the NodePort range is covered by ip_local_reserved_ports. If it's not, then we append the range to ip_local_reserved_ports. Users can opt out from the latter by setting --enable-auto-protect-node-port-range=false. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Paul Chaignon <paul@cilium.io>
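The agent-side check described above can be sketched as follows. `rangeCovered` is a simplified illustration of testing whether the NodePort range is covered by `ip_local_reserved_ports` (a comma-separated list of ports and port ranges, e.g. "8080,30000-32767"); it is not Cilium's actual implementation. If the check fails, the agent appends the NodePort range to the sysctl value.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// rangeCovered reports whether every port in [min, max] appears in the
// reserved string, which uses the ip_local_reserved_ports format:
// comma-separated ports and dash-separated ranges.
func rangeCovered(reserved string, min, max int) bool {
	covered := make([]bool, max-min+1)
	for _, part := range strings.Split(reserved, ",") {
		if part == "" {
			continue
		}
		lo, hi := 0, 0
		if i := strings.IndexByte(part, '-'); i >= 0 {
			lo, _ = strconv.Atoi(part[:i])
			hi, _ = strconv.Atoi(part[i+1:])
		} else {
			lo, _ = strconv.Atoi(part)
			hi = lo
		}
		// Mark every reserved port that falls inside [min, max].
		for p := lo; p <= hi; p++ {
			if p >= min && p <= max {
				covered[p-min] = true
			}
		}
	}
	for _, c := range covered {
		if !c {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(rangeCovered("30000-32767", 30000, 32767)) // true
	fmt.Println(rangeCovered("8080", 30000, 32767))        // false: append the range
}
```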
[ upstream commit a064bf5 ] The value configures --enable-auto-protect-node-port-range. The default is true. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit 9b8f7fc ] Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Paul Chaignon <paul@cilium.io>
Thanks @brb! I went with the former and moved
@pchaigno I don't think that GitHub registered the new version.
@joestringer Uh, you're right. Nonetheless:
🤔 EDIT: I changed the commit date of the last commit and that worked.
[ upstream commit 8162f6b ] Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Paul Chaignon <paul@cilium.io>
Force-pushed from 90a363f to 3c3592b (compare)
Cilium-kubernetes-upstream-test hit a timeout error. test-upstream-k8s
test-missed-k8s Hit #10231.
restart-ginkgo Hit #10636.
test-with-kernel
(test-with-kernel shows no pipeline steps; not sure if that's just because the target is unsupported on the v1.7 branch. https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Kernel/140/flowGraphTable/)
OK, 1.7 just doesn't support that target. I'll merge this now. |
- Changed k8s.CiliumClient().[...].Do() to not take an argument for k8s <1.18.
- CODEOWNERS change from ff863a8.
- make -C daemon apply-bindata.
- Documentation/gettingstarted/kubeproxy-free.rst (9b8f7fc).
- install/kubernetes/cilium/values.yaml and install/kubernetes/quick-install.yaml (a064bf5).

Once this PR is merged, you can update the PR labels via: