Getting 1.12.8 Upstream Code (#20)
* Fix kubernetes#73479 AWS NLB target groups missing tags

`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once, leading to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates an NLB, it creates a target group per listener and
installs security group ingress rules allowing the traffic to reach
the k8s nodes.

Unfortunately, if those target groups are not tagged, k8s will not
manage them, thinking it is not their owner.

This small change assigns tags one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>
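
As an illustration of the per-resource tagging described above, a minimal sketch assuming the aws-sdk-go elbv2 client; addTargetGroupTags is a hypothetical helper, not the actual cloud-provider code:

package awssketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// addTargetGroupTags tags each target group ARN with its own AddTags call,
// because passing several ResourceArns at once fails with
// "ValidationError: Only one resource can be tagged at a time".
func addTargetGroupTags(client *elbv2.ELBV2, arns []string, tags []*elbv2.Tag) error {
	for _, arn := range arns {
		_, err := client.AddTags(&elbv2.AddTagsInput{
			ResourceArns: []*string{aws.String(arn)}, // exactly one resource per call
			Tags:         tags,
		})
		if err != nil {
			return fmt.Errorf("error adding tags to %q: %v", arn, err)
		}
	}
	return nil
}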

* remove get azure accounts in the init process; set timeout for the get azure account operation

use const for timeout value

remove get azure accounts in the init process

add lock for account init

* add timeout in GetVolumeLimits operation

add timeout for getAllStorageAccounts

* add mixed protocol support for azure load balancer

* record event on endpoint update failure

* fix parse devicePath issue on Azure Disk

* Kubernetes version v1.12.7-beta.0 openapi-spec file updates

* add retry for detach azure disk

add more logging info in detach disk

add more logging for azure disk attach/detach

* Add/Update CHANGELOG-1.12.md for v1.12.6.

* Reduce cardinality of admission webhook metrics

* fix negative slice index error in keymutex
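
A minimal sketch of the usual fix for this class of bug, under the assumption that the key mutex picks a lock by hashing the key; hashedKeyMutex and lockFor are illustrative names, not the real keymutex API:

package keymutexsketch

import (
	"hash/fnv"
	"sync"
)

type hashedKeyMutex struct {
	mutexes []sync.Mutex
}

// lockFor picks a mutex by hashing the key to an unsigned value, so the
// modulo can never produce a negative slice index.
func (km *hashedKeyMutex) lockFor(id string) *sync.Mutex {
	h := fnv.New32a()
	h.Write([]byte(id))
	return &km.mutexes[h.Sum32()%uint32(len(km.mutexes))]
}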

* Remove reflector metrics as they currently cause a memory leak

* Explicitly set GVK when sending objects to webhooks
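
A minimal sketch of the idea, assuming only the apimachinery runtime.Object interface; withExplicitGVK is a hypothetical helper rather than the actual admission plumbing:

package webhooksketch

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// withExplicitGVK stamps the object's TypeMeta with its GroupVersionKind
// before it is serialized for a webhook, since in-memory typed objects
// frequently carry an empty TypeMeta.
func withExplicitGVK(obj runtime.Object, gvk schema.GroupVersionKind) runtime.Object {
	obj.GetObjectKind().SetGroupVersionKind(gvk)
	return obj
}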

* add Azure Container Registry anonymous repo support

apply fix for msi and fix test failure

* DaemonSet e2e: Update image and rolling upgrade test timeout

Use Nginx as the DaemonSet image instead of the ServeHostname image.
This was changed because ServeHostname sleeps after terminating, which
makes it incompatible with the DaemonSet Rolling Upgrade e2e test.

In addition, make the DaemonSet Rolling Upgrade e2e test timeout a
function of the number of nodes in the cluster. This is required
because the more nodes there are, the longer a rolling upgrade takes
to complete.

Signed-off-by: Alexander Brand <alexbrand09@gmail.com>

* Revert kubelet to default to ttl cache secret/configmap behavior

* cri_stats_provider: overload nil as 0 for exited containers stats

Always report 0 cpu/memory usage for exited containers so that
metrics-server works as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>

* flush iptables chains first and then remove them

While cleaning up IPVS mode, flush the iptables chains first and then
remove them. This avoids trying to remove chains that are still
referenced by rules in other chains.

fixes kubernetes#70615
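
A minimal sketch of that ordering, assuming the repo's utiliptables interface; flushThenDelete is illustrative, not the actual proxier cleanup code:

package ipvssketch

import (
	utiliptables "k8s.io/kubernetes/pkg/util/iptables"
)

// flushThenDelete first flushes every chain so that no rule still jumps to
// another chain, then deletes the now-unreferenced chains.
func flushThenDelete(ipt utiliptables.Interface, table utiliptables.Table, chains []utiliptables.Chain) {
	for _, ch := range chains {
		_ = ipt.FlushChain(table, ch)
	}
	for _, ch := range chains {
		_ = ipt.DeleteChain(table, ch)
	}
}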

* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue the Allocate gRPC calls. This allows us to handle the edge case where a pod is assigned to a node before the node has populated its extended resource capacity.
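
A minimal sketch of the cache check with hypothetical types (the real device manager keeps richer per-pod state):

package devicesketch

// allocationCache maps pod UID -> container name -> resource name -> device IDs.
type allocationCache map[string]map[string]map[string][]string

// needsReallocation reports whether Allocate must be re-issued before the
// container starts, i.e. no device allocation was cached for this resource.
// This covers the edge case where a pod was bound to the node before the
// node had populated its extended resource capacity.
func needsReallocation(cache allocationCache, podUID, container, resource string) bool {
	devices, ok := cache[podUID][container][resource]
	return !ok || len(devices) == 0
}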

* Fix panic in kubectl cp command

* Augmenting API call retry in nodeinfomanager

* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1

* Adding a check to make sure the UseInstanceMetadata flag is true before getting data from instance metadata.

* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume
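
A minimal sketch of that behavior, assuming the repo's mount.IsCorruptedMnt helper; getMountRefsSafe and the listRefs callback are illustrative, not the actual mounter change:

package mountsketch

import (
	"os"

	"k8s.io/kubernetes/pkg/util/mount"
)

// getMountRefsSafe treats a corrupted mount point (e.g. a stale SMB/NFS
// handle) like an unmounted volume and returns no references instead of
// failing the whole operation.
func getMountRefsSafe(mountPath string, listRefs func(string) ([]string, error)) ([]string, error) {
	if _, err := os.Stat(mountPath); err != nil {
		if mount.IsCorruptedMnt(err) {
			return []string{}, nil
		}
		return nil, err
	}
	return listRefs(mountPath)
}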

* Update Cluster Autoscaler version to 1.12.3

* add module 'nf_conntrack' in ipvs prerequisite check

* Allow disable outbound snat when Azure standard load balancer is used

* Ensure Azure load balancer cleaned up on 404 or 403

* fix smb unmount issue on Windows

fix log warning

use IsCorruptedMnt in GetMountRefs on Windows

use errno in IsCorruptedMnt check

fix comments: add more error codes

add more errno checking

change year

fix comments

fix bazel error

fix bazel

fix bazel

fix bazel

revert bazel change

* kubelet: updated logic of verifying a static critical pod

- check if a pod is static by its static pod info
- meanwhile, check if a pod is critical by its corresponding mirror pod info
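
A minimal sketch of the combined check with hypothetical callbacks (the real kubelet consults its pod manager and mirror-pod bookkeeping):

package kubeletsketch

import v1 "k8s.io/api/core/v1"

// isStaticCriticalPod: staticness is decided from the pod's own static-pod
// info, while criticality is decided from the corresponding mirror pod, which
// is what the API server (and its annotations/priority) actually knows about.
func isStaticCriticalPod(pod *v1.Pod,
	isStatic func(*v1.Pod) bool,
	mirrorOf func(*v1.Pod) (*v1.Pod, bool),
	isCritical func(*v1.Pod) bool) bool {
	if !isStatic(pod) {
		return false
	}
	mirror, ok := mirrorOf(pod)
	if !ok {
		return false
	}
	return isCritical(mirror)
}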

* Allow session affinity a period of time to setup for new services.

This is to deal with the flaky session affinity test.

* Restore username and password kubectl flags

* build/gci: bump CNI version to 0.7.5

* fix race condition issue for smb mount on windows

change var name

* allows configuring NPD release and flags on GCI and add cluster e2e test

* allows configuring NPD image version in node e2e test and fix the test

* bump repd min size in e2es

* Kubernetes version v1.12.8-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.12.md for v1.12.7.

* stop vsphere cloud provider from spamming logs with `failed to patch IP`. Fixes: kubernetes#75236

* Do not delete existing VS and RS when starting

* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target

* Populate ClientCA in delegating auth setup

kubernetes#67768 accidentally removed population of the ClientCA in
the delegating auth setup code. This restores it.

* Update gcp images with security patches

[stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes.
[fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes.
[fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[metadata-proxy addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.

* Fix AWS driver fails to provision specified fsType

* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template

* Bump debian-iptables to v11.0.2

* Avoid panic in cronjob sorting

This change handles the case where the ith cronjob may have its start
time set to nil.

Previously, the Less method could cause a panic in case the ith
cronjob had its start time set to nil, but the jth cronjob did not. It
would panic when calling Before on a nil StartTime.
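
A minimal sketch of a nil-safe comparison, assuming batch/v1 Jobs; byJobStartTimeSafe is an illustrative name and not necessarily the exact upstream sorter:

package cronjobsketch

import batchv1 "k8s.io/api/batch/v1"

type byJobStartTimeSafe []batchv1.Job

func (o byJobStartTimeSafe) Len() int      { return len(o) }
func (o byJobStartTimeSafe) Swap(i, j int) { o[i], o[j] = o[j], o[i] }

// Less never calls Before on a nil StartTime: jobs without a start time are
// ordered after jobs that have one, and ties fall back to the job name.
func (o byJobStartTimeSafe) Less(i, j int) bool {
	ti, tj := o[i].Status.StartTime, o[j].Status.StartTime
	if ti == nil && tj == nil {
		return o[i].Name < o[j].Name
	}
	if ti == nil {
		return false
	}
	if tj == nil {
		return true
	}
	return ti.Time.Before(tj.Time)
}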

* Add volume mode downgrade test: should not mount/map in <1.13

* disable HTTP2 ingress test

* ensure that the logic checks for differences in listeners

* Use Node-Problem-Detector v0.6.3 on GCI

* Delete only unscheduled pods if node doesn't exist anymore.

* proxy: Take into account exclude CIDRs while deleting legacy real servers

* Increase default maximumLoadBalancerRuleCount to 250

* kube-proxy: rename internal field for clarity

* kube-proxy: rename vars for clarity, fix err str

* kube-proxy: rename field for congruence

* kube-proxy: reject 0 endpoints on forward

Previously we only REJECTed on OUTPUT, which works for packets from
the node but not for packets from pods on the node.

* kube-proxy: remove old cleanup rules

* Kube-proxy: REJECT LB IPs with no endpoints

We REJECT every other case. Close this FIXME.

To get this to work in all cases, we have to process services in
filter.INPUT, since LB IPs might be managed as local addresses.
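
A minimal sketch of the intended coverage, calling the repo's utiliptables interface directly for clarity; the real proxier emits these rules through its KUBE-* chains via iptables-restore, so rejectNoEndpointLB and its rule arguments are illustrative only:

package proxysketch

import (
	utiliptables "k8s.io/kubernetes/pkg/util/iptables"
)

// rejectNoEndpointLB rejects traffic to a load-balancer IP that has no
// endpoints on INPUT and FORWARD as well as OUTPUT, so packets from pods on
// the node (and packets to locally-managed LB IPs) are covered too.
func rejectNoEndpointLB(ipt utiliptables.Interface, lbIP, port string) error {
	args := []string{
		"-m", "comment", "--comment", "service has no endpoints",
		"-d", lbIP, "-p", "tcp", "--dport", port,
		"-j", "REJECT",
	}
	for _, chain := range []utiliptables.Chain{
		utiliptables.ChainInput,
		utiliptables.ChainForward,
		utiliptables.ChainOutput,
	} {
		if _, err := ipt.EnsureRule(utiliptables.Append, utiliptables.TableFilter, chain, args...); err != nil {
			return err
		}
	}
	return nil
}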

* Retool HTTP and UDP e2e utils

This is a prefactoring for follow-up changes that need to use very
similar but subtly different tests. Now it is more generic, though it
pushes a little logic up the stack. That makes sense to me.

* Fix small race in e2e

Occasionally we get spurious errors about "no route to host" when we
race with kube-proxy.  This should reduce that.  It's mostly just log
noise.

* Fix Azure SLB support for multiple backend pools

Azure VM and vmssVM support multiple backend pools for the same SLB, but
not for different LBs.

* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12"

This reverts commit 535e3ad, reversing
changes made to 336d787.
rjaini committed May 6, 2019
1 parent 6a44fc8 commit 7424899
Showing 65 changed files with 1,262 additions and 328 deletions.
176 changes: 133 additions & 43 deletions CHANGELOG-1.12.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion api/openapi-spec/swagger.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion build/common.sh
@@ -88,7 +88,7 @@ readonly KUBE_CONTAINER_RSYNC_PORT=8730
#
# $1 - server architecture
kube::build::get_docker_wrapped_binaries() {
debian_iptables_version=v11.0.1
debian_iptables_version=v11.0.2
### If you change any of these lists, please also update DOCKERIZED_BINARIES
### in build/BUILD. And kube::golang::server_image_targets
case $1 in
2 changes: 1 addition & 1 deletion build/debian-base/Makefile
@@ -18,7 +18,7 @@ REGISTRY ?= staging-k8s.gcr.io
IMAGE ?= $(REGISTRY)/debian-base
BUILD_IMAGE ?= debian-build

TAG ?= 0.4.1
TAG ?= v1.0.0

TAR_FILE ?= rootfs.tar
ARCH?=amd64
4 changes: 2 additions & 2 deletions build/debian-iptables/Makefile
@@ -16,12 +16,12 @@

REGISTRY?="staging-k8s.gcr.io"
IMAGE=$(REGISTRY)/debian-iptables
TAG?=v11.0.1
TAG?=v11.0.2
ARCH?=amd64
ALL_ARCH = amd64 arm arm64 ppc64le s390x
TEMP_DIR:=$(shell mktemp -d)

BASEIMAGE?=k8s.gcr.io/debian-base-$(ARCH):0.4.1
BASEIMAGE?=k8s.gcr.io/debian-base-$(ARCH):v1.0.0

# This option is for running docker manifest command
export DOCKER_CLI_EXPERIMENTAL := enabled
4 changes: 2 additions & 2 deletions build/root/WORKSPACE
@@ -70,10 +70,10 @@ http_file(

docker_pull(
name = "debian-iptables-amd64",
digest = "sha256:9c41b4c326304b94eb96fdd2e181aa6e9995cc4642fcdfb570cedd73a419ba39",
digest = "sha256:adc40e9ec817c15d35b26d1d6aa4d0f8096fba4c99e26a026159bb0bc98c6a89",
registry = "k8s.gcr.io",
repository = "debian-iptables-amd64",
tag = "v11.0.1", # ignored, but kept here for documentation
tag = "v11.0.2", # ignored, but kept here for documentation
)

docker_pull(
@@ -63,7 +63,7 @@ spec:
- --sink=stackdriver:?cluster_name={{ cluster_name }}&use_old_resources={{ use_old_resources }}&use_new_resources={{ use_new_resources }}&min_interval_sec=100&batch_export_timeout_sec=110&cluster_location={{ cluster_location }}
# BEGIN_PROMETHEUS_TO_SD
- name: prom-to-sd
image: k8s.gcr.io/prometheus-to-sd:v0.3.1
image: k8s.gcr.io/prometheus-to-sd:v0.5.0
command:
- /monitor
- --source=heapster:http://localhost:8082?whitelisted=stackdriver_requests_count,stackdriver_timeseries_count
10 changes: 5 additions & 5 deletions cluster/addons/fluentd-gcp/event-exporter.yaml
@@ -29,11 +29,11 @@ subjects:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: event-exporter-v0.2.3
name: event-exporter-v0.2.4
namespace: kube-system
labels:
k8s-app: event-exporter
version: v0.2.3
version: v0.2.4
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
@@ -42,18 +42,18 @@ spec:
metadata:
labels:
k8s-app: event-exporter
version: v0.2.3
version: v0.2.4
spec:
serviceAccountName: event-exporter-sa
containers:
- name: event-exporter
image: k8s.gcr.io/event-exporter:v0.2.3
image: k8s.gcr.io/event-exporter:v0.2.4
command:
- /event-exporter
- -sink-opts=-stackdriver-resource-model={{ exporter_sd_resource_model }}
# BEGIN_PROMETHEUS_TO_SD
- name: prometheus-to-sd-exporter
image: k8s.gcr.io/prometheus-to-sd:v0.3.1
image: k8s.gcr.io/prometheus-to-sd:v0.5.0
command:
- /monitor
- --stackdriver-prefix={{ prometheus_to_sd_prefix }}/addons
2 changes: 1 addition & 1 deletion cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml
@@ -80,7 +80,7 @@ spec:
fi;
# BEGIN_PROMETHEUS_TO_SD
- name: prometheus-to-sd-exporter
image: k8s.gcr.io/prometheus-to-sd:v0.3.1
image: k8s.gcr.io/prometheus-to-sd:v0.5.0
command:
- /monitor
- --stackdriver-prefix={{ prometheus_to_sd_prefix }}/addons
4 changes: 2 additions & 2 deletions cluster/addons/fluentd-gcp/scaler-deployment.yaml
@@ -5,7 +5,7 @@ metadata:
namespace: kube-system
labels:
k8s-app: fluentd-gcp-scaler
version: v0.5.0
version: v0.5.1
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
@@ -19,7 +19,7 @@ spec:
serviceAccountName: fluentd-gcp-scaler
containers:
- name: fluentd-gcp-scaler
image: k8s.gcr.io/fluentd-gcp-scaler:0.5
image: k8s.gcr.io/fluentd-gcp-scaler:0.5.1
command:
- /scaler.sh
- --ds-name=fluentd-gcp-{{ fluentd_gcp_yaml_version }}
2 changes: 1 addition & 1 deletion cluster/addons/metadata-proxy/gce/metadata-proxy.yaml
@@ -57,7 +57,7 @@ spec:
cpu: "30m"
# BEGIN_PROMETHEUS_TO_SD
- name: prometheus-to-sd-exporter
image: k8s.gcr.io/prometheus-to-sd:v0.3.1
image: k8s.gcr.io/prometheus-to-sd:v0.5.0
# Request and limit resources to get guaranteed QoS.
resources:
requests:
2 changes: 2 additions & 0 deletions cluster/gce/config-default.sh
@@ -285,6 +285,8 @@ else
fi
NODE_PROBLEM_DETECTOR_VERSION="${NODE_PROBLEM_DETECTOR_VERSION:-}"
NODE_PROBLEM_DETECTOR_TAR_HASH="${NODE_PROBLEM_DETECTOR_TAR_HASH:-}"
NODE_PROBLEM_DETECTOR_RELEASE_PATH="${NODE_PROBLEM_DETECTOR_RELEASE_PATH:-}"
NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS="${NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS:-}"

# Optional: Create autoscaler for cluster's nodes.
ENABLE_CLUSTER_AUTOSCALER="${KUBE_ENABLE_CLUSTER_AUTOSCALER:-false}"
2 changes: 2 additions & 0 deletions cluster/gce/config-test.sh
@@ -292,6 +292,8 @@ else
fi
NODE_PROBLEM_DETECTOR_VERSION="${NODE_PROBLEM_DETECTOR_VERSION:-}"
NODE_PROBLEM_DETECTOR_TAR_HASH="${NODE_PROBLEM_DETECTOR_TAR_HASH:-}"
NODE_PROBLEM_DETECTOR_RELEASE_PATH="${NODE_PROBLEM_DETECTOR_RELEASE_PATH:-}"
NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS="${NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS:-}"

# Optional: Create autoscaler for cluster's nodes.
ENABLE_CLUSTER_AUTOSCALER="${KUBE_ENABLE_CLUSTER_AUTOSCALER:-false}"
30 changes: 17 additions & 13 deletions cluster/gce/gci/configure-helper.sh
@@ -1248,21 +1248,25 @@ EOF
function start-node-problem-detector {
echo "Start node problem detector"
local -r npd_bin="${KUBE_HOME}/bin/node-problem-detector"
local -r km_config="${KUBE_HOME}/node-problem-detector/config/kernel-monitor.json"
# TODO(random-liu): Handle this for alternative container runtime.
local -r dm_config="${KUBE_HOME}/node-problem-detector/config/docker-monitor.json"
local -r custom_km_config="${KUBE_HOME}/node-problem-detector/config/kernel-monitor-counter.json,${KUBE_HOME}/node-problem-detector/config/systemd-monitor-counter.json,${KUBE_HOME}/node-problem-detector/config/docker-monitor-counter.json"
echo "Using node problem detector binary at ${npd_bin}"
local flags="${NPD_TEST_LOG_LEVEL:-"--v=2"} ${NPD_TEST_ARGS:-}"
flags+=" --logtostderr"
flags+=" --system-log-monitors=${km_config},${dm_config}"
flags+=" --custom-plugin-monitors=${custom_km_config}"
flags+=" --apiserver-override=https://${KUBERNETES_MASTER_NAME}?inClusterConfig=false&auth=/var/lib/node-problem-detector/kubeconfig"
local -r npd_port=${NODE_PROBLEM_DETECTOR_PORT:-20256}
flags+=" --port=${npd_port}"
if [[ -n "${EXTRA_NPD_ARGS:-}" ]]; then
flags+=" ${EXTRA_NPD_ARGS}"

local flags="${NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS:-}"
if [[ -z "${flags}" ]]; then
local -r km_config="${KUBE_HOME}/node-problem-detector/config/kernel-monitor.json"
# TODO(random-liu): Handle this for alternative container runtime.
local -r dm_config="${KUBE_HOME}/node-problem-detector/config/docker-monitor.json"
local -r custom_km_config="${KUBE_HOME}/node-problem-detector/config/kernel-monitor-counter.json,${KUBE_HOME}/node-problem-detector/config/systemd-monitor-counter.json,${KUBE_HOME}/node-problem-detector/config/docker-monitor-counter.json"
flags="${NPD_TEST_LOG_LEVEL:-"--v=2"} ${NPD_TEST_ARGS:-}"
flags+=" --logtostderr"
flags+=" --system-log-monitors=${km_config},${dm_config}"
flags+=" --custom-plugin-monitors=${custom_km_config}"
local -r npd_port=${NODE_PROBLEM_DETECTOR_PORT:-20256}
flags+=" --port=${npd_port}"
if [[ -n "${EXTRA_NPD_ARGS:-}" ]]; then
flags+=" ${EXTRA_NPD_ARGS}"
fi
fi
flags+=" --apiserver-override=https://${KUBERNETES_MASTER_NAME}?inClusterConfig=false&auth=/var/lib/node-problem-detector/kubeconfig"

# Write the systemd service file for node problem detector.
cat <<EOF >/etc/systemd/system/node-problem-detector.service
10 changes: 5 additions & 5 deletions cluster/gce/gci/configure.sh
@@ -26,8 +26,8 @@ set -o pipefail
### Hardcoded constants
DEFAULT_CNI_VERSION="v0.7.5"
DEFAULT_CNI_SHA1="52e9d2de8a5f927307d9397308735658ee44ab8d"
DEFAULT_NPD_VERSION="v0.6.0"
DEFAULT_NPD_SHA1="a28e960a21bb74bc0ae09c267b6a340f30e5b3a6"
DEFAULT_NPD_VERSION="v0.6.3"
DEFAULT_NPD_SHA1="3a6ac56be6c121f1b94450bfd1a81ad28d532369"
DEFAULT_CRICTL_VERSION="v1.12.0"
DEFAULT_CRICTL_SHA1="82ef8b44849f9da0589c87e9865d4716573eec7f"
DEFAULT_MOUNTER_TAR_SHA="8003b798cf33c7f91320cd6ee5cec4fa22244571"
@@ -202,12 +202,12 @@ function install-node-problem-detector {
local -r npd_tar="node-problem-detector-${npd_version}.tar.gz"

if is-preloaded "${npd_tar}" "${npd_sha1}"; then
echo "node-problem-detector is preloaded."
echo "${npd_tar} is preloaded."
return
fi

echo "Downloading node problem detector."
local -r npd_release_path="https://storage.googleapis.com/kubernetes-release"
echo "Downloading ${npd_tar}."
local -r npd_release_path="${NODE_PROBLEM_DETECTOR_RELEASE_PATH:-https://storage.googleapis.com/kubernetes-release}"
download-or-bust "${npd_sha1}" "${npd_release_path}/node-problem-detector/${npd_tar}"
local -r npd_dir="${KUBE_HOME}/node-problem-detector"
mkdir -p "${npd_dir}"
2 changes: 2 additions & 0 deletions cluster/gce/util.sh
@@ -843,6 +843,8 @@ ENABLE_CLUSTER_UI: $(yaml-quote ${ENABLE_CLUSTER_UI:-false})
ENABLE_NODE_PROBLEM_DETECTOR: $(yaml-quote ${ENABLE_NODE_PROBLEM_DETECTOR:-none})
NODE_PROBLEM_DETECTOR_VERSION: $(yaml-quote ${NODE_PROBLEM_DETECTOR_VERSION:-})
NODE_PROBLEM_DETECTOR_TAR_HASH: $(yaml-quote ${NODE_PROBLEM_DETECTOR_TAR_HASH:-})
NODE_PROBLEM_DETECTOR_RELEASE_PATH: $(yaml-quote ${NODE_PROBLEM_DETECTOR_RELEASE_PATH:-})
NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS: $(yaml-quote ${NODE_PROBLEM_DETECTOR_CUSTOM_FLAGS:-})
ENABLE_NODE_LOGGING: $(yaml-quote ${ENABLE_NODE_LOGGING:-false})
LOGGING_DESTINATION: $(yaml-quote ${LOGGING_DESTINATION:-})
ELASTICSEARCH_LOGGING_REPLICAS: $(yaml-quote ${ELASTICSEARCH_LOGGING_REPLICAS:-})
7 changes: 4 additions & 3 deletions hack/make-rules/test-e2e-node.sh
@@ -34,6 +34,7 @@ image_service_endpoint=${IMAGE_SERVICE_ENDPOINT:-""}
run_until_failure=${RUN_UNTIL_FAILURE:-"false"}
test_args=${TEST_ARGS:-""}
system_spec_name=${SYSTEM_SPEC_NAME:-}
extra_envs=${EXTRA_ENVS:-}

# Parse the flags to pass to ginkgo
ginkgoflags=""
@@ -148,7 +149,7 @@ if [ $remote = true ] ; then
--image-project="$image_project" --instance-name-prefix="$instance_prefix" \
--delete-instances="$delete_instances" --test_args="$test_args" --instance-metadata="$metadata" \
--image-config-file="$image_config_file" --system-spec-name="$system_spec_name" \
--test-suite="$test_suite" \
--extra-envs="$extra_envs" --test-suite="$test_suite" \
2>&1 | tee -i "${artifacts}/build-log.txt"
exit $?

@@ -169,8 +170,8 @@ else
# Test using the host the script was run on
# Provided for backwards compatibility
go run test/e2e_node/runner/local/run_local.go \
--system-spec-name="$system_spec_name" --ginkgo-flags="$ginkgoflags" \
--test-flags="--container-runtime=${runtime} \
--system-spec-name="$system_spec_name" --extra-envs="$extra_envs" \
--ginkgo-flags="$ginkgoflags" --test-flags="--container-runtime=${runtime} \
--alsologtostderr --v 4 --report-dir=${artifacts} --node-name $(hostname) \
$test_args" --build-dependencies=true 2>&1 | tee -i "${artifacts}/build-log.txt"
exit $?
6 changes: 3 additions & 3 deletions pkg/cloudprovider/providers/aws/aws_loadbalancer.go
@@ -1049,10 +1049,10 @@ func (c *Cloud) ensureLoadBalancer(namespacedName types.NamespacedName, loadBala

found := -1
for i, expected := range listeners {
if elbProtocolsAreEqual(actual.Protocol, expected.Protocol) {
if !elbProtocolsAreEqual(actual.Protocol, expected.Protocol) {
continue
}
if elbProtocolsAreEqual(actual.InstanceProtocol, expected.InstanceProtocol) {
if !elbProtocolsAreEqual(actual.InstanceProtocol, expected.InstanceProtocol) {
continue
}
if aws.Int64Value(actual.InstancePort) != aws.Int64Value(expected.InstancePort) {
@@ -1061,7 +1061,7 @@ func (c *Cloud) ensureLoadBalancer(namespacedName types.NamespacedName, loadBala
if aws.Int64Value(actual.LoadBalancerPort) != aws.Int64Value(expected.LoadBalancerPort) {
continue
}
if awsArnEquals(actual.SSLCertificateId, expected.SSLCertificateId) {
if !awsArnEquals(actual.SSLCertificateId, expected.SSLCertificateId) {
continue
}
found = i
17 changes: 9 additions & 8 deletions pkg/cloudprovider/providers/azure/azure.go
@@ -48,14 +48,15 @@ import (

const (
// CloudProviderName is the value used for the --cloud-provider flag
CloudProviderName = "azure"
rateLimitQPSDefault = 1.0
rateLimitBucketDefault = 5
backoffRetriesDefault = 6
backoffExponentDefault = 1.5
backoffDurationDefault = 5 // in seconds
backoffJitterDefault = 1.0
maximumLoadBalancerRuleCount = 148 // According to Azure LB rule default limit
CloudProviderName = "azure"
rateLimitQPSDefault = 1.0
rateLimitBucketDefault = 5
backoffRetriesDefault = 6
backoffExponentDefault = 1.5
backoffDurationDefault = 5 // in seconds
backoffJitterDefault = 1.0
// According to https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#load-balancer.
maximumLoadBalancerRuleCount = 250

vmTypeVMSS = "vmss"
vmTypeStandard = "standard"
19 changes: 11 additions & 8 deletions pkg/cloudprovider/providers/azure/azure_standard.go
@@ -673,17 +673,20 @@ func (as *availabilitySet) ensureHostInPool(service *v1.Service, nodeName types.
// sets, the same network interface couldn't be added to more than one load balancer of
// the same type. Omit those nodes (e.g. masters) so Azure ARM won't complain
// about this.
newBackendPoolsIDs := make([]string, 0, len(newBackendPools))
for _, pool := range newBackendPools {
backendPool := *pool.ID
matches := backendPoolIDRE.FindStringSubmatch(backendPool)
if len(matches) == 2 {
lbName := matches[1]
if strings.HasSuffix(lbName, InternalLoadBalancerNameSuffix) == isInternal {
glog.V(4).Infof("Node %q has already been added to LB %q, omit adding it to a new one", nodeName, lbName)
return nil
}
if pool.ID != nil {
newBackendPoolsIDs = append(newBackendPoolsIDs, *pool.ID)
}
}
isSameLB, oldLBName, err := isBackendPoolOnSameLB(backendPoolID, newBackendPoolsIDs)
if err != nil {
return err
}
if !isSameLB {
glog.V(4).Infof("Node %q has already been added to LB %q, omit adding it to a new one", nodeName, oldLBName)
return nil
}
}

newBackendPools = append(newBackendPools,
19 changes: 11 additions & 8 deletions pkg/cloudprovider/providers/azure/azure_vmss.go
@@ -729,17 +729,20 @@ func (ss *scaleSet) ensureHostsInVMSetPool(service *v1.Service, backendPoolID st
// the same network interface couldn't be added to more than one load balancer of
// the same type. Omit those nodes (e.g. masters) so Azure ARM won't complain
// about this.
newBackendPoolsIDs := make([]string, 0, len(newBackendPools))
for _, pool := range newBackendPools {
backendPool := *pool.ID
matches := backendPoolIDRE.FindStringSubmatch(backendPool)
if len(matches) == 2 {
lbName := matches[1]
if strings.HasSuffix(lbName, InternalLoadBalancerNameSuffix) == isInternal {
glog.V(4).Infof("vmss %q has already been added to LB %q, omit adding it to a new one", vmSetName, lbName)
return nil
}
if pool.ID != nil {
newBackendPoolsIDs = append(newBackendPoolsIDs, *pool.ID)
}
}
isSameLB, oldLBName, err := isBackendPoolOnSameLB(backendPoolID, newBackendPoolsIDs)
if err != nil {
return err
}
if !isSameLB {
glog.V(4).Infof("VMSS %q has already been added to LB %q, omit adding it to a new one", vmSetName, oldLBName)
return nil
}
}

newBackendPools = append(newBackendPools,
26 changes: 26 additions & 0 deletions pkg/cloudprovider/providers/azure/azure_wrap.go
@@ -324,3 +324,29 @@ func (az *Cloud) IsNodeUnmanaged(nodeName string) (bool, error) {
func (az *Cloud) IsNodeUnmanagedByProviderID(providerID string) bool {
return !azureNodeProviderIDRE.Match([]byte(providerID))
}

// isBackendPoolOnSameLB checks whether newBackendPoolID is on the same load balancer as existingBackendPools.
// Since both public and internal LBs are supported, lbName and lbName-internal are treated as same.
// If not same, the lbName for existingBackendPools would also be returned.
func isBackendPoolOnSameLB(newBackendPoolID string, existingBackendPools []string) (bool, string, error) {
matches := backendPoolIDRE.FindStringSubmatch(newBackendPoolID)
if len(matches) != 2 {
return false, "", fmt.Errorf("new backendPoolID %q is in wrong format", newBackendPoolID)
}

newLBName := matches[1]
newLBNameTrimmed := strings.TrimRight(newLBName, InternalLoadBalancerNameSuffix)
for _, backendPool := range existingBackendPools {
matches := backendPoolIDRE.FindStringSubmatch(backendPool)
if len(matches) != 2 {
return false, "", fmt.Errorf("existing backendPoolID %q is in wrong format", backendPool)
}

lbName := matches[1]
if !strings.EqualFold(strings.TrimRight(lbName, InternalLoadBalancerNameSuffix), newLBNameTrimmed) {
return false, lbName, nil
}
}

return true, "", nil
}
