kube-proxy fails to cleanup iptables chains after restart #70615
Labels: kind/bug (Categorizes issue or PR as related to a bug.), sig/network (Categorizes an issue or PR as relevant to SIG Network.)

Comments

k8s-ci-robot added the kind/bug (Categorizes issue or PR as related to a bug.) and needs-sig (Indicates an issue or PR lacks a `sig/foo` label and requires one.) labels on Nov 3, 2018
/sig network

k8s-ci-robot added the sig/network (Categorizes an issue or PR as relevant to SIG Network.) label and removed the needs-sig (Indicates an issue or PR lacks a `sig/foo` label and requires one.) label on Nov 3, 2018
teemow added a commit to teemow/kubernetes that referenced this issue on Nov 3, 2018:

Flush iptables chains first and then remove them while cleaning up ipvs mode. Flushing the chains before removing them avoids trying to remove chains that are still referenced by rules in other chains. Fixes kubernetes#70615
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on Apr 15, 2019:
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates a NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small changes assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* record event on endpoint update failure
* Applies zone labels to newly created vsphere volumes
* Provision vsphere volume honoring zones
* Explicitly set GVK when sending objects to webhooks
* Remove reflector metrics as they currently cause a memory leak
* add health plugin in the DNS tests
* add more logging in azure disk attach/detach
* Kubernetes version v1.13.5-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.4.
* add Azure Container Registry anonymous repo support apply fix for msi and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because the ServeHostname has a sleep after terminating which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer the time it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expect. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptable chains first and then remove them while cleaning up ipvs mode. flushing iptable chains first and then remove the chains. this avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Bump debian-iptables to v11.0.1 Rebase docker image on debian-base:0.4.1
* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume
* Fix the network policy tests. This is a cherrypick of the following commit https://github.com/kubernetes/kubernetes/pull/74290/commits
* Update Cluster Autoscaler version to 1.13.2
* Ensure Azure load balancer cleaned up on 404 or 403
* Allow disable outbound snat when Azure standard load balancer is used
* Allow session affinity a period of time to setup for new services. This is to deal with the flaky session affinity test.
* Distinguish volume path with mount path
* Delay CSI client initialization
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on May 6, 2019:
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates a NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small changes assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* remove get azure accounts in the init process set timeout for get azure account operation use const for timeout value remove get azure accounts in the init process add lock for account init
* add timeout in GetVolumeLimits operation add timeout for getAllStorageAccounts
* add mixed protocol support for azure load balancer
* record event on endpoint update failure
* fix parse devicePath issue on Azure Disk
* Kubernetes version v1.12.7-beta.0 openapi-spec file updates
* add retry for detach azure disk add more logging info in detach disk add more logging for azure disk attach/detach
* Add/Update CHANGELOG-1.12.md for v1.12.6.
* Reduce cardinality of admission webhook metrics
* fix negative slice index error in keymutex
* Remove reflector metrics as they currently cause a memory leak
* Explicitly set GVK when sending objects to webhooks
* add Azure Container Registry anonymous repo support apply fix for msi and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because the ServeHostname has a sleep after terminating which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer the time it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expect. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptable chains first and then remove them while cleaning up ipvs mode. flushing iptable chains first and then remove the chains. this avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Augmenting API call retry in nodeinfomanager
* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1
* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume
* Update Cluster Autoscaler version to 1.12.3
* add module 'nf_conntrack' in ipvs prerequisite check
* Allow disable outbound snat when Azure standard load balancer is used
* Ensure Azure load balancer cleaned up on 404 or 403
* fix smb unmount issue on Windows fix log warning use IsCorruptedMnt in GetMountRefs on Windows use errorno in IsCorruptedMnt check fix comments: add more error code add more error no checking change year fix comments fix bazel error fix bazel fix bazel fix bazel revert bazel change
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Allow session affinity a period of time to setup for new services. This is to deal with the flaky session affinity test.
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
* fix race condition issue for smb mount on windows change var name
* allows configuring NPD release and flags on GCI and add cluster e2e test
* allows configuring NPD image version in node e2e test and fix the test
* bump repd min size in e2es
* Kubernetes version v1.12.8-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.12.md for v1.12.7.
* stop vsphere cloud provider from spamming logs with `failed to patch IP` Fixes: kubernetes#75236
* Do not delete existing VS and RS when starting
* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target
* Populate ClientCA in delegating auth setup kubernetes#67768 accidentally removed population of the the ClientCA in the delegating auth setup code. This restores it.
* Update gcp images with security patches [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. [fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metatada-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes.
* Fix AWS driver fails to provision specified fsType
* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template
* Bump debian-iptables to v11.0.2
* Avoid panic in cronjob sorting This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic in case the ith cronjob had its start time set to nil, but the jth cronjob did not. It would panic when calling Before on a nil StartTime.
* Add volume mode downgrade test: should not mount/map in <1.13
* disable HTTP2 ingress test
* ensuring that logic is checking for differences in listener
* Use Node-Problem-Detector v0.6.3 on GCI
* Delete only unscheduled pods if node doesn't exist anymore.
* proxy: Take into account exclude CIDRs while deleting legacy real servers
* Increase default maximumLoadBalancerRuleCount to 250
* kube-proxy: rename internal field for clarity
* kube-proxy: rename vars for clarity, fix err str
* kube-proxy: rename field for congruence
* kube-proxy: reject 0 endpoints on forward Previously we only REJECTed on OUTPUT which works for packets from the node but not for packets from pods on the node.
* kube-proxy: remove old cleanup rules
* Kube-proxy: REJECT LB IPs with no endpoints We REJECT every other case. Close this FIXME. To get this to work in all cases, we have to process service in filter.INPUT, since LB IPS might be manged as local addresses.
* Retool HTTP and UDP e2e utils This is a prefactoring for followup changes that need to use very similar but subtly different test. Now it is more generic, though it pushes a little logic up the stack. That makes sense to me.
* Fix small race in e2e Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Fix Azure SLB support for multiple backend pools Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12" This reverts commit 535e3ad, reversing changes made to 336d787.
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on May 22, 2019:
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates a NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small changes assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* record event on endpoint update failure
* Fix scanning of failed targets If a iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such target is not available / logged in and try to relogin. Eventually, if such error persists, it should continue mounting the volume if the other paths are healthy instead of failing whole WaitForAttach().
* Applies zone labels to newly created vsphere volumes
* Provision vsphere volume honoring zones
* Explicitly set GVK when sending objects to webhooks
* Remove reflector metrics as they currently cause a memory leak
* add health plugin in the DNS tests
* add more logging in azure disk attach/detach
* Kubernetes version v1.13.5-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.4.
* add Azure Container Registry anonymous repo support apply fix for msi and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because the ServeHostname has a sleep after terminating which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer the time it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expect. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptable chains first and then remove them while cleaning up ipvs mode. flushing iptable chains first and then remove the chains. this avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Bump debian-iptables to v11.0.1 Rebase docker image on debian-base:0.4.1
* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume
* Fix the network policy tests. This is a cherrypick of the following commit https://github.com/kubernetes/kubernetes/pull/74290/commits
* Update Cluster Autoscaler version to 1.13.2
* Ensure Azure load balancer cleaned up on 404 or 403
* Allow disable outbound snat when Azure standard load balancer is used
* Allow session affinity a period of time to setup for new services. This is to deal with the flaky session affinity test.
* Distinguish volume path with mount path
* Delay CSI client initialization
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
* fix smb unmount issue on Windows fix log warning use IsCorruptedMnt in GetMountRefs on Windows use errorno in IsCorruptedMnt check fix comments: add more error code add more error no checking change year fix comments
* fix race condition issue for smb mount on windows change var name
* Fix aad support in kubectl for sovereign cloud
* make describers of different versions work properly when autoscaling/v2beta2 is not supported
* allows configuring NPD release and flags on GCI and add cluster e2e test
* allows configuring NPD image version in node e2e test and fix the test
* bump repd min size in e2es
* Kubernetes version v1.13.6-beta.0 openapi-spec file updates
* stop vsphere cloud provider from spamming logs with `failed to patch IP` Fixes: kubernetes#75236
* Add/Update CHANGELOG-1.13.md for v1.13.5.
* Add flag to enable strict ARP
* Do not delete existing VS and RS when starting
* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target
* Update config tests
* Bump go-openapi/jsonpointer and go-openapi/jsonreference versions xref: kubernetes#75653 Signed-off-by: Jorge Alarcon Ochoa <alarcj137@gmail.com>
* Fix nil pointer dereference panic in attachDetachController add check `attachableVolumePlugin == nil` to operationGenerator.GenerateDetachVolumeFunc()
* if ephemeral-storage not exist in initialCapacity, don't upgrade ephemeral-storage in node status
* Update gcp images with security patches [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. [fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metatada-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes.
* Fix AWS driver fails to provision specified fsType
* Bump debian-iptables to v11.0.2.
* Avoid panic in cronjob sorting This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic in case the ith cronjob had its start time set to nil, but the jth cronjob did not. It would panic when calling Before on a nil StartTime.
* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template
* Check for required name parameter in dynamic client The Create, Delete, Get, Patch, Update and UpdateStatus methods in the dynamic client all expect the name parameter to be non-empty, but did not validate this requirement, which could lead to a panic. Add explicit checks to these methods.
* disable HTTP2 ingress test
* ensuring that logic is checking for differences in listener
* Use Node-Problem-Detector v0.6.3 on GCI
* proxy: Take into account exclude CIDRs while deleting legacy real servers
* Update addon-manager to use debian-base:v1.0.0
* Increase default maximumLoadBalancerRuleCount to 250
* Set CPU metrics for init containers under containerd metrics-server doesn't return metrics for pods with init containers under containerd because they have incomplete CPU metrics returned by the kubelet /stats/summary API. This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks dropped the `usageNanoCores` metric. This change adds the missing `usageNanoCores` metric for init containers. Fixes kubernetes#76292
* kube-proxy: rename internal field for clarity
* kube-proxy: rename vars for clarity, fix err str
* kube-proxy: rename field for congruence
* kube-proxy: reject 0 endpoints on forward Previously we only REJECTed on OUTPUT which works for packets from the node but not for packets from pods on the node.
* kube-proxy: remove old cleanup rules
* Kube-proxy: REJECT LB IPs with no endpoints We REJECT every other case. Close this FIXME. To get this to work in all cases, we have to process service in filter.INPUT, since LB IPS might be manged as local addresses.
* Retool HTTP and UDP e2e utils This is a prefactoring for followup changes that need to use very similar but subtly different test. Now it is more generic, though it pushes a little logic up the stack. That makes sense to me.
* Fix small race in e2e Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Bump coreos/go-semver The https://github.com/coreos/go-semver/ dependency has formally release v0.3.0 at commit e214231b295a8ea9479f11b70b35d5acf3556d9b. This is the commit point we've been using, but the hack/verify-godeps.sh script notices the discrepancy and causes ci-kubernetes-verify job to fail. Fixes: kubernetes#76526 Signed-off-by: Tim Pepper <tpepper@vmware.com>
* Fix Azure SLB support for multiple backend pools Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Restore metrics-server using of IP addresses This preference list matches is used to pick prefered field from k8s node object. It was introduced in metrics-server 0.3 and changed default behaviour to use DNS instead of IP addresses. It was merged into k8s 1.12 and caused breaking change by introducing dependency on DNS configuration.
* refactor detach azure disk retry operation
* move disk lock process to azure cloud provider fix comments fix import keymux check error add unit test for attach/detach disk funcs fix build error fix build error
* e2e-node-tests: fix path to system specs e2e-node tests may use custom system specs for validating nodes to conform the specs. The functionality is switched on when the tests are run with this command: make SYSTEM_SPEC_NAME=gke test-e2e-node Currently the command fails with the error: F1228 16:12:41.568836 34514 e2e_node_suite_test.go:106] Failed to load system spec: open /home/rojkov/go/src/k8s.io/kubernetes/k8s.io/kubernetes/cmd/kubeadm/app/util/system/specs/gke.yaml: no such file or directory Move the spec file under `test/e2e_node/system/specs` and introduce a single public constant referring the file to use instead of multiple private constants.
* Fix concurrent map access in Portworx create volume call Fixes kubernetes#76340 Signed-off-by: Harsh Desai <harsh@portworx.com>
* add shareName param in azure file storage class skip create azure file if it exists
* Update Cluster Autoscaler to 1.13.4
* Create the "internal" firewall rule for kubemark master. This is equivalent to the "internal" firewall rule that is created for the regular masters. The main reason for doing it is to allow prometheus scraping metrics from various kubemark master components, e.g. kubelet. Ref. kubernetes/perf-tests#503
* fix disk list corruption issue
* Fix verify godeps failure for 1.13 github.com/evanphx/json-patch added a new tag at the same sha this morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0 This confused godeps. This PR updates our file to match godeps expectation. Fixes issue 77238
* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.
* Test kubectl cp escape
* Properly handle links in tar
* Update the dynamic volume limit in GCE PD Currently GCE PD support 128 maximum disks attached to a node for all machines types except shared-core. This PR updates the limit number to date. Change-Id: Id9dfdbd24763b6b4138935842c246b1803838b78
* Use consistent imageRef during container startup
* Replace vmss update API with instance-level update API
* Cleanup codes that not required any more
* Add unit tests
* Upgrade compute API to version 2019-03-01
* Update vendors
* Fix issues because of rebase
* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2
* Fix race condition between actual and desired state in kublet volume manager This PR fixes the issue kubernetes#75345. This fix modified the checking volume in actual state when validating whether volume can be removed from desired state or not. Only if volume status is already mounted in actual state, it can be removed from desired state. For the case of mounting fails always, it can still work because the check also validate whether pod still exist in pod manager. In case of mount fails, pod should be able to removed from pod manager so that volume can also be removed from desired state.
* Error when etcd3 watch finds delete event with nil prevKV
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on May 31, 2019:
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates a NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small changes assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com> * remove get azure accounts in the init process set timeout for get azure account operation use const for timeout value remove get azure accounts in the init process add lock for account init * add timeout in GetVolumeLimits operation add timeout for getAllStorageAccounts * add mixed protocol support for azure load balancer * record event on endpoint update failure * fix parse devicePath issue on Azure Disk * Fix scanning of failed targets If a iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such target is not available / logged in and try to relogin. Eventually, if such error persists, it should continue mounting the volume if the other paths are healthy instead of failing whole WaitForAttach(). * Kubernetes version v1.12.7-beta.0 openapi-spec file updates * add retry for detach azure disk add more logging info in detach disk add more logging for azure disk attach/detach * Add/Update CHANGELOG-1.12.md for v1.12.6. 
* Reduce cardinality of admission webhook metrics * fix negative slice index error in keymutex * Remove reflector metrics as they currently cause a memory leak * Explicitly set GVK when sending objects to webhooks * add Azure Container Registry anonymous repo support apply fix for msi and fix test failure * DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because the ServeHostname has a sleep after terminating which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer the time it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com> * Revert kubelet to default to ttl cache secret/configmap behavior * cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expect. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com> * flush iptable chains first and then remove them while cleaning up ipvs mode. flushing iptable chains first and then remove the chains. this avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615 * Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity. * Fix panic in kubectl cp command * Augmenting API call retry in nodeinfomanager * Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1 * Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata. 
* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume * Update Cluster Autoscaler version to 1.12.3 * add module 'nf_conntrack' in ipvs prerequisite check * Allow disable outbound snat when Azure standard load balancer is used * Ensure Azure load balancer cleaned up on 404 or 403 * fix smb unmount issue on Windows fix log warning use IsCorruptedMnt in GetMountRefs on Windows use errorno in IsCorruptedMnt check fix comments: add more error code add more error no checking change year fix comments fix bazel error fix bazel fix bazel fix bazel revert bazel change * kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info * Allow session affinity a period of time to setup for new services. This is to deal with the flaky session affinity test. * Restore username and password kubectl flags * build/gci: bump CNI version to 0.7.5 * fix race condition issue for smb mount on windows change var name * allows configuring NPD release and flags on GCI and add cluster e2e test * allows configuring NPD image version in node e2e test and fix the test * bump repd min size in e2es * Kubernetes version v1.12.8-beta.0 openapi-spec file updates * Add/Update CHANGELOG-1.12.md for v1.12.7. * stop vsphere cloud provider from spamming logs with `failed to patch IP` Fixes: kubernetes#75236 * Do not delete existing VS and RS when starting * Fix updating 'currentMetrics' field for HPA with 'AverageValue' target * Populate ClientCA in delegating auth setup kubernetes#67768 accidentally removed population of the the ClientCA in the delegating auth setup code. This restores it. * Update gcp images with security patches [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. 
[fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metatada-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes. * Fix AWS driver fails to provision specified fsType * Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template * Bump debian-iptables to v11.0.2 * Avoid panic in cronjob sorting This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic in case the ith cronjob had its start time set to nil, but the jth cronjob did not. It would panic when calling Before on a nil StartTime. * Add volume mode downgrade test: should not mount/map in <1.13 * disable HTTP2 ingress test * ensuring that logic is checking for differences in listener * Use Node-Problem-Detector v0.6.3 on GCI * Delete only unscheduled pods if node doesn't exist anymore. * proxy: Take into account exclude CIDRs while deleting legacy real servers * Increase default maximumLoadBalancerRuleCount to 250 * kube-proxy: rename internal field for clarity * kube-proxy: rename vars for clarity, fix err str * kube-proxy: rename field for congruence * kube-proxy: reject 0 endpoints on forward Previously we only REJECTed on OUTPUT which works for packets from the node but not for packets from pods on the node. * kube-proxy: remove old cleanup rules * Kube-proxy: REJECT LB IPs with no endpoints We REJECT every other case. Close this FIXME. To get this to work in all cases, we have to process service in filter.INPUT, since LB IPS might be manged as local addresses. * Retool HTTP and UDP e2e utils This is a prefactoring for followup changes that need to use very similar but subtly different test. Now it is more generic, though it pushes a little logic up the stack. That makes sense to me. 
* Fix small race in e2e Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Fix Azure SLB support for multiple backend pools Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Set CPU metrics for init containers under containerd Copies PR kubernetes#76503 for release-1.12. metrics-server doesn't return metrics for pods with init containers under containerd because they have incomplete CPU metrics returned by the kubelet /stats/summary API. This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks dropped the usageNanoCores metric. This change adds the missing usageNanoCores metric for init containers in Kubernetes v1.12. Fixes kubernetes#76292
* Restore metrics-server use of IP addresses This preference list is used to pick the preferred field from the k8s node object. It was introduced in metrics-server 0.3 and changed the default behaviour to use DNS instead of IP addresses. It was merged into k8s 1.12 and caused a breaking change by introducing a dependency on DNS configuration.
* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12" This reverts commit 535e3ad, reversing changes made to 336d787.
* Kubernetes version v1.12.9-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.12.md for v1.12.8.
* Upgrade compute API to version 2019-03-01
* Replace vmss update API with instance-level update API
* Clean up code that is not required any more
* Add unit tests
* Update vendors
* Update Cluster Autoscaler to 1.12.5
* add shareName param in azure file storage class skip create azure file if it exists remove comments
* Create the "internal" firewall rule for kubemark master. This is equivalent to the "internal" firewall rule that is created for the regular masters. The main reason for doing it is to allow Prometheus to scrape metrics from various kubemark master components, e.g. kubelet. Ref. kubernetes/perf-tests#503
* refactor detach azure disk retry operation
* move disk lock process to azure cloud provider fix comments fix import keymux check error add unit test for attach/detach disk funcs fix bazel issue rebase
* fix disk list corruption issue
* Fix verify godeps failure for 1.12 github.com/evanphx/json-patch added a new tag at the same sha this morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0 This confused godeps. This PR updates our file to match godeps expectation. Fixes issue 77238
* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.
* Test kubectl cp escape
* Properly handle links in tar
* use k8s.gcr.io/pause instead of kubernetes/pause
* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2
* Error when etcd3 watch finds delete event with nil prevKV
* Make CreatePrivilegedPSPBinding reentrant Make CreatePrivilegedPSPBinding reentrant so tests using it (e.g. DNS) can be executed more than once against a cluster. Without this change, such tests will fail because the PSP already exists, short-circuiting test setup.
* check if Memory is not nil for container stats
* In GuaranteedUpdate, retry on any error if we are working with stale data
* BoundServiceAccountTokenVolume: fix InClusterConfig
* fix CVE-2019-11244: `kubectl --http-cache=<world-accessible dir>` creates world-writeable cached schema files
* Terminate watchers when watch cache is destroyed
* honor overridden tokenfile, add InClusterConfig override tests
* fix incorrect prometheus metrics
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on Jun 20, 2019
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once, leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates an NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small change assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* record event on endpoint update failure
* Fix scanning of failed targets If an iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such a target is not available / logged in and try to relogin. Eventually, if such an error persists, it should continue mounting the volume if the other paths are healthy instead of failing the whole WaitForAttach().
* Applies zone labels to newly created vsphere volumes
* Provision vsphere volume honoring zones
* Explicitly set GVK when sending objects to webhooks
* Remove reflector metrics as they currently cause a memory leak
* add health plugin in the DNS tests
* add more logging in azure disk attach/detach
* Kubernetes version v1.13.5-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.4.
* add Azure Container Registry anonymous repo support apply fix for msi and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because ServeHostname sleeps after terminating, which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expected. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptables chains first and then remove them while cleaning up ipvs mode. Flushing the chains first and then removing them avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Bump debian-iptables to v11.0.1 Rebase docker image on debian-base:0.4.1
* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating them like an unmounted volume
* Fix the network policy tests. This is a cherrypick of the following commit https://github.com/kubernetes/kubernetes/pull/74290/commits
* Update Cluster Autoscaler version to 1.13.2
* Ensure Azure load balancer cleaned up on 404 or 403
* Allow disable outbound snat when Azure standard load balancer is used
* Allow session affinity a period of time to set up for new services. This is to deal with the flaky session affinity test.
* Distinguish volume path with mount path
* Delay CSI client initialization
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
* fix smb unmount issue on Windows fix log warning use IsCorruptedMnt in GetMountRefs on Windows use errno in IsCorruptedMnt check fix comments: add more error code add more errno checking change year fix comments
* fix race condition issue for smb mount on windows change var name
* Fix aad support in kubectl for sovereign cloud
* make describers of different versions work properly when autoscaling/v2beta2 is not supported
* allows configuring NPD release and flags on GCI and add cluster e2e test
* allows configuring NPD image version in node e2e test and fix the test
* bump repd min size in e2es
* Kubernetes version v1.13.6-beta.0 openapi-spec file updates
* stop vsphere cloud provider from spamming logs with `failed to patch IP` Fixes: kubernetes#75236
* Add/Update CHANGELOG-1.13.md for v1.13.5.
* Add flag to enable strict ARP
* Do not delete existing VS and RS when starting
* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target
* Update config tests
* Bump go-openapi/jsonpointer and go-openapi/jsonreference versions xref: kubernetes#75653 Signed-off-by: Jorge Alarcon Ochoa <alarcj137@gmail.com>
* Fix nil pointer dereference panic in attachDetachController add check `attachableVolumePlugin == nil` to operationGenerator.GenerateDetachVolumeFunc()
* if ephemeral-storage does not exist in initialCapacity, don't upgrade ephemeral-storage in node status
* Update gcp images with security patches [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. [fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metadata-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes.
* Fix AWS driver failing to provision specified fsType
* Bump debian-iptables to v11.0.2.
* Avoid panic in cronjob sorting This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic in case the ith cronjob had its start time set to nil, but the jth cronjob did not. It would panic when calling Before on a nil StartTime.
* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template
* Check for required name parameter in dynamic client The Create, Delete, Get, Patch, Update and UpdateStatus methods in the dynamic client all expect the name parameter to be non-empty, but did not validate this requirement, which could lead to a panic. Add explicit checks to these methods.
* disable HTTP2 ingress test
* ensuring that logic is checking for differences in listener
* Delete only unscheduled pods if node doesn't exist anymore.
* Use Node-Problem-Detector v0.6.3 on GCI
* proxy: Take into account exclude CIDRs while deleting legacy real servers
* Update addon-manager to use debian-base:v1.0.0
* Increase default maximumLoadBalancerRuleCount to 250
* Set CPU metrics for init containers under containerd metrics-server doesn't return metrics for pods with init containers under containerd because they have incomplete CPU metrics returned by the kubelet /stats/summary API. This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks dropped the `usageNanoCores` metric. This change adds the missing `usageNanoCores` metric for init containers.
Fixes kubernetes#76292
* kube-proxy: rename internal field for clarity
* kube-proxy: rename vars for clarity, fix err str
* kube-proxy: rename field for congruence
* kube-proxy: reject 0 endpoints on forward Previously we only REJECTed on OUTPUT, which works for packets from the node but not for packets from pods on the node.
* kube-proxy: remove old cleanup rules
* Kube-proxy: REJECT LB IPs with no endpoints We REJECT every other case. Close this FIXME. To get this to work in all cases, we have to process services in filter.INPUT, since LB IPs might be managed as local addresses.
* Retool HTTP and UDP e2e utils This is a prefactoring for followup changes that need to use very similar but subtly different tests. Now it is more generic, though it pushes a little logic up the stack. That makes sense to me.
* Fix small race in e2e Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Bump coreos/go-semver The https://github.com/coreos/go-semver/ dependency has formally released v0.3.0 at commit e214231b295a8ea9479f11b70b35d5acf3556d9b. This is the commit point we've been using, but the hack/verify-godeps.sh script notices the discrepancy and causes the ci-kubernetes-verify job to fail. Fixes: kubernetes#76526 Signed-off-by: Tim Pepper <tpepper@vmware.com>
* Fix Azure SLB support for multiple backend pools Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Kubelet: add usageNanoCores from CRI stats provider
* Fix computing of cpu nano core usage CRI runtimes do not supply cpu nano core usage as it is not part of CRI stats. However, there are upstream components that still rely on such stats to function. The previous fix was faulty because multiple callers could compete and update the stats, causing inconsistent/incoherent metrics. This change, instead, creates a separate call for updating the usage, and relies on the eviction manager, which runs periodically, to trigger the updates. The caveat is that if the eviction manager is completely turned off, no one would compute the usage.
* Restore metrics-server use of IP addresses This preference list is used to pick the preferred field from the k8s node object. It was introduced in metrics-server 0.3 and changed the default behaviour to use DNS instead of IP addresses. It was merged into k8s 1.12 and caused a breaking change by introducing a dependency on DNS configuration.
* refactor detach azure disk retry operation
* move disk lock process to azure cloud provider fix comments fix import keymux check error add unit test for attach/detach disk funcs fix build error fix build error
* e2e-node-tests: fix path to system specs e2e-node tests may use custom system specs for validating nodes to conform to the specs. The functionality is switched on when the tests are run with this command: make SYSTEM_SPEC_NAME=gke test-e2e-node Currently the command fails with the error: F1228 16:12:41.568836 34514 e2e_node_suite_test.go:106] Failed to load system spec: open /home/rojkov/go/src/k8s.io/kubernetes/k8s.io/kubernetes/cmd/kubeadm/app/util/system/specs/gke.yaml: no such file or directory Move the spec file under `test/e2e_node/system/specs` and introduce a single public constant referring to the file to use instead of multiple private constants.
* Fix concurrent map access in Portworx create volume call Fixes kubernetes#76340 Signed-off-by: Harsh Desai <harsh@portworx.com>
* add shareName param in azure file storage class skip create azure file if it exists
* Update Cluster Autoscaler to 1.13.4
* Create the "internal" firewall rule for kubemark master. This is equivalent to the "internal" firewall rule that is created for the regular masters. The main reason for doing it is to allow Prometheus to scrape metrics from various kubemark master components, e.g. kubelet. Ref.
kubernetes/perf-tests#503
* fix disk list corruption issue
* Fix verify godeps failure for 1.13 github.com/evanphx/json-patch added a new tag at the same sha this morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0 This confused godeps. This PR updates our file to match godeps expectation. Fixes issue 77238
* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.
* Test kubectl cp escape
* Properly handle links in tar
* Update the dynamic volume limit in GCE PD Currently GCE PD supports a maximum of 128 disks attached to a node for all machine types except shared-core. This PR updates the limit number to date. Change-Id: Id9dfdbd24763b6b4138935842c246b1803838b78
* Use consistent imageRef during container startup
* Replace vmss update API with instance-level update API
* Clean up code that is not required any more
* Add unit tests
* Upgrade compute API to version 2019-03-01
* Update vendors
* Fix issues because of rebase
* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2
* Fix race condition between actual and desired state in kubelet volume manager This PR fixes issue kubernetes#75345. The fix modifies the check of the volume in the actual state when validating whether the volume can be removed from the desired state. Only if the volume status is already mounted in the actual state can it be removed from the desired state. For the case where mounting always fails, this still works because the check also validates whether the pod still exists in the pod manager. When a mount fails, the pod should be removable from the pod manager so that the volume can also be removed from the desired state.
* Error when etcd3 watch finds delete event with nil prevKV
* Kubernetes version v1.13.7-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.6.
* check if Memory is not nil for container stats
* Update k8s-dns-node-cache image version This revised image resolves kubernetes dns#292 by updating the image from `k8s-dns-node-cache:1.15.2` to `k8s-dns-node-cache:1.15.2`
* In GuaranteedUpdate, retry on any error if we are working with stale data
* BoundServiceAccountTokenVolume: fix InClusterConfig
* fix CVE-2019-11244: `kubectl --http-cache=<world-accessible dir>` creates world-writeable cached schema files
* Upgrade Azure network API version to 2018-07-01
* Terminate watchers when watch cache is destroyed
* Update godeps
* honor overridden tokenfile, add InClusterConfig override tests
* Remove terminated pod from summary api. Signed-off-by: Lantao Liu <lantaol@google.com>
* fix incorrect prometheus metrics little code refactor
* Fix eviction dry-run
* Revert "Use consistent imageRef during container startup" This reverts commit 26e3c86.
* fix azure retry issue when returning 2XX with error fix comments
* Disable graceful termination for udp
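The ipvs cleanup fix referenced above ("flush iptable chains first and then remove them") relies on the fact that `iptables -X` fails for a chain that is still referenced by a jump rule in another chain, while flushing (`-F`) removes those referencing rules first. A minimal sketch of the ordering — the chain names are illustrative and the commands are emitted as strings rather than executed:

```go
package main

import "fmt"

// cleanupCommands returns the iptables invocations needed to remove the
// given chains from a table: first flush every chain (-F), which drops
// any jump rules referencing sibling chains, then delete them all (-X).
// Interleaving delete with flush (or deleting first) can fail because a
// chain cannot be deleted while another chain still jumps to it.
func cleanupCommands(table string, chains []string) []string {
	var cmds []string
	for _, c := range chains {
		cmds = append(cmds, fmt.Sprintf("iptables -t %s -F %s", table, c))
	}
	for _, c := range chains {
		cmds = append(cmds, fmt.Sprintf("iptables -t %s -X %s", table, c))
	}
	return cmds
}

func main() {
	// Illustrative chain names; the real set kube-proxy cleans up
	// lives in its ipvs proxier code.
	for _, cmd := range cleanupCommands("nat", []string{"KUBE-SERVICES", "KUBE-MARK-MASQ"}) {
		fmt.Println(cmd)
	}
}
```

All flushes complete before the first delete, so by the time `-X` runs no rule in any of the chains being removed can still reference another one of them.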
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on Jul 11, 2019
* Fix bug with volume getting marked as not in-use with pending op Add test for verifying volume detach
* Fix flake with e2e test that checks detach while mount in progress A volume can show up as in-use even before it gets attached to the node.
* Fix kubernetes#73479 AWS NLB target groups missing tags `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once, leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time" This can happen when using AWS NLB with multiple listeners pointing to different node ports. When k8s creates an NLB it creates a target group per listener along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small change assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* remove get azure accounts in the init process set timeout for get azure account operation use const for timeout value remove get azure accounts in the init process add lock for account init
* add timeout in GetVolumeLimits operation add timeout for getAllStorageAccounts
* add mixed protocol support for azure load balancer
* record event on endpoint update failure
* fix parse devicePath issue on Azure Disk
* Fix scanning of failed targets If an iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such a target is not available / logged in and try to relogin. Eventually, if such an error persists, it should continue mounting the volume if the other paths are healthy instead of failing the whole WaitForAttach().
* Kubernetes version v1.12.7-beta.0 openapi-spec file updates
* add retry for detach azure disk add more logging info in detach disk add more logging for azure disk attach/detach
* Add/Update CHANGELOG-1.12.md for v1.12.6.
* Reduce cardinality of admission webhook metrics
* fix negative slice index error in keymutex
* Remove reflector metrics as they currently cause a memory leak
* Explicitly set GVK when sending objects to webhooks
* add Azure Container Registry anonymous repo support apply fix for msi and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because ServeHostname sleeps after terminating, which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer it will take to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats Always report 0 cpu/memory usage for exited containers to make metrics-server work as expected. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptables chains first and then remove them while cleaning up ipvs mode. Flushing the chains first and then removing them avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Augmenting API call retry in nodeinfomanager
* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1
* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating them like an unmounted volume
* Update Cluster Autoscaler version to 1.12.3
* add module 'nf_conntrack' in ipvs prerequisite check
* Allow disable outbound snat when Azure standard load balancer is used
* Ensure Azure load balancer cleaned up on 404 or 403
* fix smb unmount issue on Windows fix log warning use IsCorruptedMnt in GetMountRefs on Windows use errno in IsCorruptedMnt check fix comments: add more error code add more errno checking change year fix comments fix bazel error fix bazel fix bazel fix bazel revert bazel change
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Allow session affinity a period of time to set up for new services. This is to deal with the flaky session affinity test.
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
* fix race condition issue for smb mount on windows change var name
* allows configuring NPD release and flags on GCI and add cluster e2e test
* allows configuring NPD image version in node e2e test and fix the test
* bump repd min size in e2es
* Kubernetes version v1.12.8-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.12.md for v1.12.7.
* stop vsphere cloud provider from spamming logs with `failed to patch IP` Fixes: kubernetes#75236
* Do not delete existing VS and RS when starting
* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target
* Populate ClientCA in delegating auth setup kubernetes#67768 accidentally removed population of the ClientCA in the delegating auth setup code. This restores it.
* Update gcp images with security patches [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. [fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metadata-proxy addon] Bump prometheus-to-sd v0.5.0 to pick up security fixes.
* Fix AWS driver failing to provision specified fsType
* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template
* Bump debian-iptables to v11.0.2
* Avoid panic in cronjob sorting This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic in case the ith cronjob had its start time set to nil, but the jth cronjob did not. It would panic when calling Before on a nil StartTime.
* Add volume mode downgrade test: should not mount/map in <1.13
* disable HTTP2 ingress test
* ensuring that logic is checking for differences in listener
* Use Node-Problem-Detector v0.6.3 on GCI
* Delete only unscheduled pods if node doesn't exist anymore.
* proxy: Take into account exclude CIDRs while deleting legacy real servers
* Increase default maximumLoadBalancerRuleCount to 250
* kube-proxy: rename internal field for clarity
* kube-proxy: rename vars for clarity, fix err str
* kube-proxy: rename field for congruence
* kube-proxy: reject 0 endpoints on forward Previously we only REJECTed on OUTPUT, which works for packets from the node but not for packets from pods on the node.
* kube-proxy: remove old cleanup rules
* Kube-proxy: REJECT LB IPs with no endpoints We REJECT every other case. Close this FIXME. To get this to work in all cases, we have to process services in filter.INPUT, since LB IPs might be managed as local addresses.
* Retool HTTP and UDP e2e utils This is a prefactoring for followup changes that need to use very similar but subtly different tests. Now it is more generic, though it pushes a little logic up the stack. That makes sense to me.
* Fix small race in e2e Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Fix Azure SLB support for multiple backend pools Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Set CPU metrics for init containers under containerd Copies PR kubernetes#76503 for release-1.12. metrics-server doesn't return metrics for pods with init containers under containerd because they have incomplete CPU metrics returned by the kubelet /stats/summary API. This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks dropped the usageNanoCores metric. This change adds the missing usageNanoCores metric for init containers in Kubernetes v1.12. Fixes kubernetes#76292
* Restore metrics-server use of IP addresses This preference list is used to pick the preferred field from the k8s node object. It was introduced in metrics-server 0.3 and changed the default behaviour to use DNS instead of IP addresses. It was merged into k8s 1.12 and caused a breaking change by introducing a dependency on DNS configuration.
* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12" This reverts commit 535e3ad, reversing changes made to 336d787.
* Kubernetes version v1.12.9-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.12.md for v1.12.8.
* Upgrade compute API to version 2019-03-01
* Replace vmss update API with instance-level update API
* Clean up code that is not required any more
* Add unit tests
* Update vendors
* Update Cluster Autoscaler to 1.12.5
* add shareName param in azure file storage class skip create azure file if it exists remove comments
* Create the "internal" firewall rule for kubemark master. This is equivalent to the "internal" firewall rule that is created for the regular masters. The main reason for doing it is to allow Prometheus to scrape metrics from various kubemark master components, e.g. kubelet. Ref. kubernetes/perf-tests#503
* refactor detach azure disk retry operation
* move disk lock process to azure cloud provider fix comments fix import keymux check error add unit test for attach/detach disk funcs fix bazel issue rebase
* fix disk list corruption issue
* Fix verify godeps failure for 1.12 github.com/evanphx/json-patch added a new tag at the same sha this morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0 This confused godeps. This PR updates our file to match godeps expectation. Fixes issue 77238
* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.
* Test kubectl cp escape
* Properly handle links in tar
* use k8s.gcr.io/pause instead of kubernetes/pause
* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2
* Error when etcd3 watch finds delete event with nil prevKV
* Make CreatePrivilegedPSPBinding reentrant Make CreatePrivilegedPSPBinding reentrant so tests using it (e.g. DNS) can be executed more than once against a cluster. Without this change, such tests will fail because the PSP already exists, short-circuiting test setup.
* check if Memory is not nil for container stats
* Bump ip-masq-agent version to v2.3.0
* In GuaranteedUpdate, retry on any error if we are working with stale data
* BoundServiceAccountTokenVolume: fix InClusterConfig
* fix CVE-2019-11244: `kubectl --http-cache=<world-accessible dir>` creates world-writeable cached schema files
* Terminate watchers when watch cache is destroyed
* honor overridden tokenfile, add InClusterConfig override tests
* fix incorrect prometheus metrics
* Kubernetes version v1.12.10-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.12.md for v1.12.9.
* fix azure retry issue when returning 2XX with error fix comments
* Disable graceful termination for udp
* fix: update vm if detach a non-existing disk fix gofmt issue fix build error
* Fix incorrect procMount defaulting
* ipvs: fix string check for IPVS protocol during graceful termination Signed-off-by: Andrew Sy Kim <kiman@vmware.com>
* kubeadm: apply taints on non-control-plane node join This backports a change made in 1.13 which fixes the process of applying taints when joining worker nodes.
* fix flexvol stuck issue due to corrupted mnt point fix comments about PathExists fix comments revert change in PathExists func
* Avoid the default server mux
* Default resourceGroup should be used when value of annotation azure-load-balancer-resource-group is an empty string
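The AWS NLB fix at the top of this list works around `elbv2.AddTags` rejecting multi-resource requests ("Only one resource can be tagged at a time") by issuing one tagging call per target-group ARN. A sketch of that loop with a stand-in tagging function — the function type, ARNs, and tag key below are illustrative, not the real AWS SDK client:

```go
package main

import "fmt"

// addTagsFunc is a stand-in for the SDK tagging call; per the fix, the
// service only tolerates one resource ARN per request.
type addTagsFunc func(resourceARN string, tags map[string]string) error

// tagEachResource applies the same tag set to every resource with a
// separate call, instead of batching all ARNs into a single request.
func tagEachResource(arns []string, tags map[string]string, addTags addTagsFunc) error {
	for _, arn := range arns {
		if err := addTags(arn, tags); err != nil {
			return fmt.Errorf("tagging %s: %v", arn, err)
		}
	}
	return nil
}

func main() {
	var calls []string
	// record stands in for the real API call and just logs the ARN.
	record := func(arn string, tags map[string]string) error {
		calls = append(calls, arn)
		return nil
	}
	// Hypothetical target-group ARNs, one per NLB listener.
	arns := []string{
		"arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/a",
		"arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/b",
	}
	if err := tagEachResource(arns, map[string]string{"kubernetes.io/cluster/demo": "owned"}, record); err != nil {
		panic(err)
	}
	fmt.Println(len(calls), "AddTags calls")
}
```

The trade-off is one API round-trip per target group instead of one batched call, in exchange for requests the service actually accepts.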
rjaini added a commit to msazurestackworkloads/kubernetes that referenced this issue on Jul 11, 2019
* flush iptable chains first and then remove them while cleaning up ipvs mode. flushing iptable chains first and then remove the chains. this avoids trying to remove chains that are still referenced by rules in other chains. fixes kubernetes#70615
What happened:
If kube-proxy restarts, it reports errors while removing its iptables chains. This is due to other chains still referencing KUBE-MARK-MASQ. It would not happen if all of the chains were flushed first and only then removed. Some examples of chains referencing KUBE-MARK-MASQ:
See: https://github.com/alexjx/kubernetes/blob/4ca62e4f39c59ebe66a59685a8f020315dfd3016/pkg/proxy/ipvs/proxier.go#L520
We are running in iptables mode; the error comes from the ipvs cleanup.
What you expected to happen:
That all chains get cleaned up properly.
How to reproduce it (as minimally and precisely as possible):
Start the proxy in iptables mode, kill it, then start it again. I am not sure whether specific services cause this problem; I can reproduce it in an almost empty cluster.
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration:
AWS
OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1855.5.0
VERSION_ID=1855.5.0
BUILD_ID=2018-10-22-2305
PRETTY_NAME="Container Linux by CoreOS 1855.5.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
Kernel (e.g. uname -a):
Linux ip-10-1-2-53.eu-central-1.compute.internal 4.14.74-coreos #1 SMP Mon Oct 22 22:12:42 UTC 2018 x86_64 Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz GenuineIntel GNU/Linux
/kind bug