feat: Add K8s 1.13.8 payload for Azure Stack. (#27)
* Fix kubernetes#73479 AWS NLB target groups missing tags. `elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once, leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time". This can happen when using an AWS NLB with multiple listeners pointing to different node ports. When k8s creates an NLB it creates a target group per listener, along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately, if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small change assigns tags one resource at a time instead of batching them as before (see the sketch after this list). Signed-off-by: Brice Figureau <brice@daysofwonder.com>
* record event on endpoint update failure
* Fix scanning of failed targets. If an iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such a target is not available / logged in and try to relogin. Eventually, if such an error persists, it should continue mounting the volume if the other paths are healthy instead of failing the whole WaitForAttach().
* Applies zone labels to newly created vsphere volumes
* Provision vsphere volume honoring zones
* Explicitly set GVK when sending objects to webhooks
* Remove reflector metrics as they currently cause a memory leak
* add health plugin in the DNS tests
* add more logging in azure disk attach/detach
* Kubernetes version v1.13.5-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.4.
* add Azure Container Registry anonymous repo support; apply fix for MSI and fix test failure
* DaemonSet e2e: Update image and rolling upgrade test timeout. Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because ServeHostname sleeps after terminating, which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer it takes to complete a rolling upgrade. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
* Revert kubelet to default to ttl cache secret/configmap behavior
* cri_stats_provider: overload nil as 0 for exited containers stats. Always report 0 cpu/memory usage for exited containers to make metrics-server work as expected. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
* flush iptables chains first and then remove them while cleaning up ipvs mode. Flushing the chains before removing them avoids trying to remove chains that are still referenced by rules in other chains. Fixes kubernetes#70615
* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate gRPC calls. This allows us to handle the edge case of a pod getting assigned to a node even before it populates its extended resource capacity.
* Fix panic in kubectl cp command
* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1
* Adding a check to make sure the UseInstanceMetadata flag is true before getting data from metadata.
* GetMountRefs fixed to handle corrupted mounts by treating them like unmounted volumes
* Fix the network policy tests.
  This is a cherry-pick of the following commit: https://github.com/kubernetes/kubernetes/pull/74290/commits
* Update Cluster Autoscaler version to 1.13.2
* Ensure the Azure load balancer is cleaned up on 404 or 403
* Allow disabling outbound SNAT when the Azure standard load balancer is used
* Allow session affinity a period of time to set up for new services. This is to deal with the flaky session affinity test.
* Distinguish volume path from mount path
* Delay CSI client initialization
* kubelet: updated logic of verifying a static critical pod - check if a pod is static by its static pod info - meanwhile, check if a pod is critical by its corresponding mirror pod info
* Restore username and password kubectl flags
* build/gci: bump CNI version to 0.7.5
* fix smb unmount issue on Windows: fix log warning; use IsCorruptedMnt in GetMountRefs on Windows; use errno in the IsCorruptedMnt check; add more error codes and errno checks; fix comments
* fix race condition issue for smb mount on windows; change var name
* Fix AAD support in kubectl for sovereign clouds
* make describers of different versions work properly when autoscaling/v2beta2 is not supported
* allow configuring the NPD release and flags on GCI, and add a cluster e2e test
* allow configuring the NPD image version in the node e2e test, and fix the test
* bump repd min size in e2e tests
* Kubernetes version v1.13.6-beta.0 openapi-spec file updates
* stop vsphere cloud provider from spamming logs with `failed to patch IP`. Fixes: kubernetes#75236
* Add/Update CHANGELOG-1.13.md for v1.13.5.
* Add flag to enable strict ARP
* Do not delete existing VS and RS when starting
* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target
* Update config tests
* Bump go-openapi/jsonpointer and go-openapi/jsonreference versions. xref: kubernetes#75653 Signed-off-by: Jorge Alarcon Ochoa <alarcj137@gmail.com>
* Fix nil pointer dereference panic in attachDetachController: add an `attachableVolumePlugin == nil` check to operationGenerator.GenerateDetachVolumeFunc()
* if ephemeral-storage does not exist in initialCapacity, don't upgrade ephemeral-storage in the node status
* Update gcp images with security patches: [stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes. [fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes. [fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes. [metadata-proxy addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
* Fix AWS driver failing to provision the specified fsType
* Bump debian-iptables to v11.0.2.
* Avoid panic in cronjob sorting. This change handles the case where the ith cronjob may have its start time set to nil. Previously, the Less method could cause a panic when the ith cronjob had its start time set to nil but the jth cronjob did not: it would panic when calling Before on a nil StartTime (see the sketch after this list).
* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template
* Check for required name parameter in dynamic client. The Create, Delete, Get, Patch, Update and UpdateStatus methods in the dynamic client all expect the name parameter to be non-empty, but did not validate this requirement, which could lead to a panic. Add explicit checks to these methods.
* disable HTTP2 ingress test
* ensure the logic checks for differences in the listener
* Delete only unscheduled pods if the node doesn't exist anymore.
* Use Node-Problem-Detector v0.6.3 on GCI
* proxy: Take into account exclude CIDRs while deleting legacy real servers
* Update addon-manager to use debian-base:v1.0.0
* Increase default maximumLoadBalancerRuleCount to 250
* Set CPU metrics for init containers under containerd. metrics-server doesn't return metrics for pods with init containers under containerd because they have incomplete CPU metrics returned by the kubelet /stats/summary API. This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks dropped the `usageNanoCores` metric. This change adds the missing `usageNanoCores` metric for init containers. Fixes kubernetes#76292
* kube-proxy: rename internal field for clarity
* kube-proxy: rename vars for clarity, fix err str
* kube-proxy: rename field for congruence
* kube-proxy: reject 0 endpoints on forward. Previously we only REJECTed on OUTPUT, which works for packets from the node but not for packets from pods on the node.
* kube-proxy: remove old cleanup rules
* Kube-proxy: REJECT LB IPs with no endpoints. We REJECT every other case; close this FIXME. To get this to work in all cases, we have to process services in filter.INPUT, since LB IPs might be managed as local addresses.
* Retool HTTP and UDP e2e utils. This is a prefactoring for follow-up changes that need very similar but subtly different tests. Now it is more generic, though it pushes a little logic up the stack.
* Fix small race in e2e. Occasionally we get spurious errors about "no route to host" when we race with kube-proxy. This should reduce that. It's mostly just log noise.
* Bump coreos/go-semver. The https://github.com/coreos/go-semver/ dependency has formally released v0.3.0 at commit e214231b295a8ea9479f11b70b35d5acf3556d9b. This is the commit point we've been using, but the hack/verify-godeps.sh script notices the discrepancy and causes the ci-kubernetes-verify job to fail. Fixes: kubernetes#76526 Signed-off-by: Tim Pepper <tpepper@vmware.com>
* Fix Azure SLB support for multiple backend pools. Azure VM and vmssVM support multiple backend pools for the same SLB, but not for different LBs.
* Kubelet: add usageNanoCores from CRI stats provider
* Fix computing of cpu nano core usage. CRI runtimes do not supply cpu nano core usage as it is not part of CRI stats, but there are upstream components that still rely on such stats to function. The previous fix was faulty because multiple callers could compete to update the stats, causing inconsistent/incoherent metrics. This change instead creates a separate call for updating the usage and relies on the eviction manager, which runs periodically, to trigger the updates. The caveat is that if the eviction manager is completely turned off, no one computes the usage (see the sketch after this list).
* Restore metrics-server use of IP addresses. This preference list is used to pick the preferred field from the k8s node object. It was introduced in metrics-server 0.3 and changed the default behaviour to use DNS instead of IP addresses. It was merged into k8s 1.12 and caused a breaking change by introducing a dependency on DNS configuration.
* refactor detach azure disk retry operation
* move disk lock process to azure cloud provider: fix comments; fix import; fix keymux check error; add unit tests for attach/detach disk funcs; fix build errors
* e2e-node-tests: fix path to system specs. e2e-node tests may use custom system specs for validating that nodes conform to the specs.
  The functionality is switched on when the tests are run with this command: make SYSTEM_SPEC_NAME=gke test-e2e-node. Currently the command fails with the error: F1228 16:12:41.568836 34514 e2e_node_suite_test.go:106] Failed to load system spec: open /home/rojkov/go/src/k8s.io/kubernetes/k8s.io/kubernetes/cmd/kubeadm/app/util/system/specs/gke.yaml: no such file or directory. Move the spec file under `test/e2e_node/system/specs` and introduce a single public constant referring to the file to use, instead of multiple private constants.
* Fix concurrent map access in Portworx create volume call (see the sketch after this list). Fixes kubernetes#76340 Signed-off-by: Harsh Desai <harsh@portworx.com>
* add shareName param in azure file storage class; skip creating the azure file if it already exists
* Update Cluster Autoscaler to 1.13.4
* Create the "internal" firewall rule for kubemark master. This is equivalent to the "internal" firewall rule that is created for the regular masters. The main reason for doing it is to allow Prometheus to scrape metrics from various kubemark master components, e.g. kubelet. Ref. kubernetes/perf-tests#503
* fix disk list corruption issue
* Fix verify godeps failure for 1.13. github.com/evanphx/json-patch added a new tag at the same sha this morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0. This confused godeps. This PR updates our file to match godep's expectation. Fixes issue 77238
* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.
* Test kubectl cp escape
* Properly handle links in tar
* Update the dynamic volume limit in GCE PD. Currently GCE PD supports a maximum of 128 disks attached to a node for all machine types except shared-core. This PR brings the limit up to date. Change-Id: Id9dfdbd24763b6b4138935842c246b1803838b78
* Use consistent imageRef during container startup
* Replace vmss update API with instance-level update API
* Clean up code that is not required any more
* Add unit tests
* Upgrade compute API to version 2019-03-01
* Update vendors
* Fix issues caused by the rebase
* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2
* Fix race condition between actual and desired state in kubelet volume manager. This PR fixes kubernetes#75345. The fix modifies the volume check in the actual state when validating whether a volume can be removed from the desired state: only if the volume's status is already mounted in the actual state can it be removed from the desired state. For the case where mounting always fails, this still works because the check also validates whether the pod still exists in the pod manager; when mounting fails, the pod can be removed from the pod manager so that the volume can also be removed from the desired state.
* Short-circuit quota admission rejection on zero-delta updates
* Error when an etcd3 watch finds a delete event with nil prevKV
* Accept admission request if resource is being deleted
* Kubernetes version v1.13.7-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.6.
* Bump addon-manager to v8.9.1 - Rebase image on debian-base:v1.0.0
* check if Memory is not nil for container stats (see the sketch after this list)
* Update k8s-dns-node-cache image version. This revised image resolves kubernetes/dns#292 by updating the `k8s-dns-node-cache` image to `1.15.2`.
* Bump ip-masq-agent version to v2.3.0
* In GuaranteedUpdate, retry on any error if we are working with stale data
* BoundServiceAccountTokenVolume: fix InClusterConfig
* fix CVE-2019-11244: `kubectl --http-cache=<world-accessible dir>` creates world-writeable cached schema files
* Upgrade Azure network API version to 2018-07-01
* Terminate watchers when watch cache is destroyed
* Update godeps
* honor overridden tokenfile, add InClusterConfig override tests
* Remove terminated pod from summary api. Signed-off-by: Lantao Liu <lantaol@google.com>
* fix incorrect prometheus metrics; small code refactor
* Fix eviction dry-run
* Revert "Use consistent imageRef during container startup". This reverts commit 26e3c86.
* fix azure retry issue when a 2XX response is returned with an error; fix comments
* Disable graceful termination for UDP
* Kubernetes version v1.13.8-beta.0 openapi-spec file updates
* Add/Update CHANGELOG-1.13.md for v1.13.7.
* fix: update the VM when detaching a non-existing disk; fix gofmt issue
* Fix incorrect procMount defaulting
* ipvs: fix string check for IPVS protocol during graceful termination. Signed-off-by: Andrew Sy Kim <kiman@vmware.com>
* fix flexvol stuck issue due to a corrupted mnt point; fix comments about PathExists; revert change in PathExists func
* Avoid the default server mux
* kubelet: retry pod sandbox creation when containers were never created. If kubelet never gets past sandbox creation (i.e., never attempted to create containers for a pod), it should retry the sandbox creation on failure, regardless of the restart policy of the pod.
* The default resourceGroup should be used when the value of the azure-load-balancer-resource-group annotation is an empty string
* Replace bitbucket with github. This commit has the following changes: - Replace `bitbucket.org/ww/goautoneg` with `github.com/munnerz/goautoneg`. - Replace `bitbucket.org/bertimus9/systemstat` with `github.com/nikhita/systemstat`. - Bump kube-openapi so that its dependency on `bitbucket.org/ww/goautoneg` moves to `github.com/munnerz/goautoneg`. - Regenerate `swagger.json` from the above change. - Update `BUILD` files. Bitbucket is replaced with GitHub because Atlassian finally pulled the plug on their 1.0 API and now forces everyone to use 2.0: https://developer.atlassian.com/cloud/bitbucket/deprecation-notice-v1-apis/ This leads to an error like:
  ```
  godep: error downloading dep (bitbucket.org/ww/goautoneg): https://api.bitbucket.org/1.0/repositories/ww/goautoneg: 410 Gone
  ```
  This was fixed in upstream go in golang/tools@13ba8ad. To fix this in k/k we could either: 1) Bump our vendored version https://github.com/kubernetes/kubernetes/blob/release-1.13/vendor/golang.org/x/tools/go/vcs/vcs.go#L676. However, this bump brings in _lots_ of changes. 2) Entirely remove our dependency on bitbucket. The second option is better because: 1) godep itself vendors in an older version: https://github.com/tools/godep/blob/master/vendor/golang.org/x/tools/go/vcs/vcs.go#L667. This means that anyone who installs godep directly, without forking it, will not be able to use it with Kubernetes if we stick to bitbucket.
  2) Bumping `golang/x/tools` requires running `godep restore`, which doesn't work because that uses the 1.0 API, leading to a catch-22-like situation.
* Allow unit test to pass on machines without IPv6
* fix: kubelet cannot delete an orphaned pod directory when the kubelet's root directory symbolically links to another device's directory
* Fix AWS DHCP option set domain names causing garbled InternalDNS or Hostname addresses on Node
* Fix closing of dirs in doSafeMakeDir. This fixes the issue where "childFD" from syscall.Openat is assigned to a local variable declared inside the for loop instead of the one in the function scope. As a result, when the function-scope "childFD" is closed, it is still "-1" instead of the correct value (see the sketch after this list).
* There are various reasons that the HPA will decide not to change the current scale. Two important ones are when missing metrics might change the direction of scaling, and when the recommended scale is within tolerance of the current scale. The way that ReplicaCalculator signals its desire to not change the current scale is by returning the current scale. However, the current scale is from scale.Status.Replicas and can be larger than scale.Spec.Replicas (e.g. during a Deployment rollout with a configured surge). This causes a positive feedback loop because scale.Status.Replicas is written back into scale.Spec.Replicas, further increasing the current scale. This PR fixes the feedback loop by plumbing the replica count from spec through horizontal.go and replica_calculator.go so the calculator can punt with the right value.
* edit google dns hostname
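For the NLB tagging fix above, the core of the change is calling `elbv2.AddTags` once per target group ARN rather than batching all ARNs into one request. A minimal sketch, assuming the aws-sdk-go v1 `elbv2` client; the helper name and its arguments are illustrative, not the actual cloud-provider code:

```go
package awstags

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// tagTargetGroupsIndividually assigns the same set of tags to each target group,
// issuing one AddTags call per ARN, because passing several ResourceArns in a
// single request fails with "Only one resource can be tagged at a time".
func tagTargetGroupsIndividually(client *elbv2.ELBV2, targetGroupARNs []string, tags []*elbv2.Tag) error {
	for _, arn := range targetGroupARNs {
		_, err := client.AddTags(&elbv2.AddTagsInput{
			ResourceArns: []*string{aws.String(arn)}, // exactly one resource per call
			Tags:         tags,
		})
		if err != nil {
			return err
		}
	}
	return nil
}
```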
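The cronjob sorting fix above guards the nil-StartTime cases before comparing times. A rough sketch of a nil-safe `Less` in the spirit of that change, assuming Jobs of type `batch/v1` sorted by `Status.StartTime`; the `byJobStartTime` type and its tie-breaking rules here are illustrative:

```go
package cronjobsort

import (
	batchv1 "k8s.io/api/batch/v1"
)

// byJobStartTime orders Jobs by Status.StartTime without ever calling Before or
// Equal on a nil *metav1.Time. Jobs with a nil start time sort first, and ties
// fall back to the Job name. Use with sort.Sort(byJobStartTime(jobs)).
type byJobStartTime []batchv1.Job

func (o byJobStartTime) Len() int      { return len(o) }
func (o byJobStartTime) Swap(i, j int) { o[i], o[j] = o[j], o[i] }

func (o byJobStartTime) Less(i, j int) bool {
	ti, tj := o[i].Status.StartTime, o[j].Status.StartTime
	if ti == nil && tj == nil {
		return o[i].Name < o[j].Name
	}
	if ti == nil {
		return true // nil start time sorts before any concrete time
	}
	if tj == nil {
		return false
	}
	if ti.Equal(tj) {
		return o[i].Name < o[j].Name
	}
	return ti.Before(tj)
}
```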
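The usageNanoCores metric mentioned above has to be derived from two cumulative CRI samples, since CRI only reports cumulative usageCoreNanoSeconds. A minimal sketch of that derivation, assuming a periodic caller (such as the eviction manager) supplies consecutive samples; the function name is illustrative:

```go
package cristats

import "time"

// nanoCoresFromCumulative derives an instantaneous usageNanoCores value from two
// cumulative usageCoreNanoSeconds samples: CPU nanoseconds consumed per second
// of wall-clock time between the samples.
func nanoCoresFromCumulative(prevUsageCoreNanoSeconds, curUsageCoreNanoSeconds uint64, prevTime, curTime time.Time) uint64 {
	elapsed := curTime.Sub(prevTime).Nanoseconds()
	if elapsed <= 0 || curUsageCoreNanoSeconds < prevUsageCoreNanoSeconds {
		// No usable window (same sample, clock skew, or counter reset): report 0.
		return 0
	}
	deltaCPU := float64(curUsageCoreNanoSeconds - prevUsageCoreNanoSeconds)
	return uint64(deltaCPU / float64(elapsed) * float64(time.Second.Nanoseconds()))
}
```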
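The Portworx concurrent map access fix comes down to serializing access to a shared map from concurrent create-volume calls. A generic sketch of the pattern with `sync.Mutex`; the cache type and field names are illustrative, not the plugin's actual code:

```go
package portworxcache

import "sync"

// clientCache guards a map that multiple create-volume calls read and write
// concurrently; unsynchronized access to such a map is what triggers Go's
// "concurrent map read and map write" panic.
type clientCache struct {
	mu      sync.Mutex
	clients map[string]interface{} // e.g. per-endpoint API clients
}

func newClientCache() *clientCache {
	return &clientCache{clients: make(map[string]interface{})}
}

// getOrCreate returns a cached client, or builds and stores one, all under the lock.
func (c *clientCache) getOrCreate(key string, build func() interface{}) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if cl, ok := c.clients[key]; ok {
		return cl
	}
	cl := build()
	c.clients[key] = cl
	return cl
}
```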
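The "Memory is not nil" check is a defensive guard before dereferencing optional summary-API stats. A minimal sketch using local stand-in types whose field names mirror the summary API; the helper is illustrative:

```go
package containerstats

// Minimal stand-ins for the summary API shapes (ContainerStats and MemoryStats);
// the real types use optional pointer fields in exactly this way.
type MemoryStats struct {
	WorkingSetBytes *uint64
}

type ContainerStats struct {
	Name   string
	Memory *MemoryStats
}

// workingSetBytes returns the container's working set, guarding against the
// nil Memory / nil WorkingSetBytes case that previously caused a nil dereference.
func workingSetBytes(cs *ContainerStats) (uint64, bool) {
	if cs == nil || cs.Memory == nil || cs.Memory.WorkingSetBytes == nil {
		return 0, false
	}
	return *cs.Memory.WorkingSetBytes, true
}
```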
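The doSafeMakeDir fix is a classic Go shadowing bug: `childFD, err := syscall.Openat(...)` inside the loop declared a new `childFD`, so the deferred cleanup in the outer scope only ever saw -1. A simplified, Linux-only sketch of the corrected pattern, assuming a plain walk over path segments; it is illustrative, not the actual mount utility code:

```go
package safedir

import "syscall"

// openLastSegment walks each path segment with Openat, returning the fd of the
// final segment. The bug being illustrated: declaring a new childFD with ":="
// inside the loop shadowed the function-scope childFD, so the deferred cleanup
// below only ever saw -1 and leaked the real descriptor.
func openLastSegment(baseFD int, segments []string) (int, error) {
	parentFD := baseFD
	childFD := -1
	defer func() {
		if childFD >= 0 {
			syscall.Close(childFD)
		}
		if parentFD >= 0 && parentFD != baseFD {
			syscall.Close(parentFD)
		}
	}()

	for _, seg := range segments {
		var err error
		// Correct: plain "=" assigns to the function-scope childFD.
		// The buggy version used ":=", creating a new childFD local to the loop body.
		childFD, err = syscall.Openat(parentFD, seg, syscall.O_RDONLY|syscall.O_NOFOLLOW, 0)
		if err != nil {
			return -1, err
		}
		if parentFD != baseFD {
			syscall.Close(parentFD) // done with the intermediate directory
		}
		parentFD = childFD
		childFD = -1 // ownership moved to parentFD; avoid a double close in the defer
	}

	result := parentFD
	parentFD = -1 // hand ownership to the caller; skip the deferred close
	return result, nil
}
```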