Commit aa31f4f
feat: Add K8s 1.12.10 payload for Azure Stack. (#26)
* Fix bug with volume getting marked as not in-use with pending op

Add test for verifying volume detach

* Fix flake with e2e test that checks detach while mount in progress

A volume can show up as in-use even before it gets attached
to the node.

* Fix kubernetes#73479 AWS NLB target groups missing tags

`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once, which leads to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates an NLB, it creates a target group per listener and
installs security group ingress rules allowing the traffic to reach
the k8s nodes.

Unfortunately, if those target groups are not tagged, k8s will not
manage them, assuming it is not their owner.

This small change assigns tags one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>
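
A minimal sketch of that one-resource-at-a-time loop, using the aws-sdk-go elbv2 client; the function shape, variable names, and error wrapping are illustrative, not the actual patch:

package elbtags

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/elbv2"
)

// addTagsOneByOne tags each target group individually, since tagging NLB
// target groups fails when more than one resource is named per call.
func addTagsOneByOne(client *elbv2.ELBV2, targetGroupARNs []string, tags []*elbv2.Tag) error {
    for _, arn := range targetGroupARNs {
        _, err := client.AddTags(&elbv2.AddTagsInput{
            ResourceArns: []*string{aws.String(arn)}, // exactly one resource per call
            Tags:         tags,
        })
        if err != nil {
            return fmt.Errorf("error adding tags to %q: %v", arn, err)
        }
    }
    return nil
}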

* remove get azure accounts in the init process; set timeout for the get azure account operation

use const for timeout value

remove get azure accounts in the init process

add lock for account init

* add timeout in GetVolumeLimits operation

add timeout for getAllStorageAccounts
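
A minimal sketch of bounding such a call with a timeout; the constant, its value, and the accountLister interface are assumptions for illustration, not the actual cloud-provider code:

package azurestorage

import (
    "context"
    "time"
)

// storageAccountTimeout bounds the Azure storage-account listing; the
// constant name and value are illustrative.
const storageAccountTimeout = 30 * time.Second

type accountLister interface {
    ListAccounts(ctx context.Context) ([]string, error)
}

// getAllStorageAccounts gives up instead of blocking GetVolumeLimits (or
// plugin init) indefinitely when the Azure API is slow.
func getAllStorageAccounts(client accountLister) ([]string, error) {
    ctx, cancel := context.WithTimeout(context.Background(), storageAccountTimeout)
    defer cancel()
    return client.ListAccounts(ctx)
}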

* add mixed protocol support for azure load balancer

* record event on endpoint update failure

* fix parse devicePath issue on Azure Disk

* Fix scanning of failed targets

If an iSCSI target is down while a volume is attached, reading from
/sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address
fails with an error. Kubelet should assume that such a target is not
available / logged in and try to log in again. Eventually, if the error
persists, it should continue mounting the volume if the other
paths are healthy instead of failing the whole WaitForAttach().
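
A hedged sketch of that tolerant per-path scan; the sysfs layout matches the path quoted above, but the helper and its surroundings are illustrative rather than the actual kubelet code:

package iscsiscan

import (
    "io/ioutil"
    "path/filepath"
    "strings"
)

// healthyPortals reads the iscsi_connection "address" file for each session
// and skips sessions whose target is down, instead of failing the whole
// WaitForAttach() on the first read error.
func healthyPortals(connectionDirs []string) []string {
    var portals []string
    for _, dir := range connectionDirs {
        addr, err := ioutil.ReadFile(filepath.Join(dir, "address"))
        if err != nil {
            // Target not available / not logged in; a relogin will be
            // attempted elsewhere, and the remaining paths still count.
            continue
        }
        portals = append(portals, strings.TrimSpace(string(addr)))
    }
    return portals
}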

* Kubernetes version v1.12.7-beta.0 openapi-spec file updates

* add retry for detach azure disk

add more logging info in detach disk

add more logging for azure disk attach/detach

* Add/Update CHANGELOG-1.12.md for v1.12.6.

* Reduce cardinality of admission webhook metrics

* fix negative slice index error in keymutex

* Remove reflector metrics as they currently cause a memory leak

* Explicitly set GVK when sending objects to webhooks

* add Azure Container Registry anonymous repo support

apply fix for msi and fix test failure

* DaemonSet e2e: Update image and rolling upgrade test timeout

Use Nginx as the DaemonSet image instead of the ServeHostname image.
This was changed because ServeHostname sleeps after terminating,
which makes it incompatible with the DaemonSet Rolling Upgrade e2e test.

In addition, make the DaemonSet Rolling Upgrade e2e test timeout a
function of the number of nodes that make up the cluster. This is
required because the more nodes there are, the longer it will take
to complete a rolling upgrade.

Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
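
A minimal sketch of a node-count-scaled timeout; the base and per-node values are illustrative, not the values used by the test:

package e2etimeout

import "time"

// rollingUpgradeTimeout scales the DaemonSet rolling-upgrade timeout with
// cluster size: more nodes means more pods to replace sequentially.
func rollingUpgradeTimeout(numNodes int) time.Duration {
    const perNode = 30 * time.Second // illustrative per-node budget
    return 5*time.Minute + time.Duration(numNodes)*perNode
}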

* Revert kubelet to default to ttl cache secret/configmap behavior

* cri_stats_provider: overload nil as 0 for exited containers stats

Always report 0 cpu/memory usage for exited containers so that
metrics-server works as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
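
Reduced to a sketch, the rule is just "treat a missing sample as zero"; the helper below is illustrative, while the real change fills in the kubelet stats structs for exited containers:

package cristats

// zeroIfNil overloads nil as 0: exited containers get an explicit zero
// sample instead of a missing field, so metrics-server can aggregate them.
func zeroIfNil(v *uint64) *uint64 {
    if v == nil {
        zero := uint64(0)
        return &zero
    }
    return v
}

// e.g. cpu.UsageNanoCores = zeroIfNil(cpu.UsageNanoCores)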

* flush iptable chains first and then remove them

While cleaning up ipvs mode, flush the iptables chains first and then
remove them. This avoids trying to remove chains that are still
referenced by rules in other chains.

fixes kubernetes#70615
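
A minimal sketch of the flush-then-delete ordering, assuming the k8s.io/kubernetes/pkg/util/iptables interface; the chain list and table are illustrative:

package ipvscleanup

import (
    "github.com/golang/glog"

    utiliptables "k8s.io/kubernetes/pkg/util/iptables"
)

// cleanupChains flushes every chain before deleting any of them, so that no
// rule in one chain still references another chain at deletion time.
func cleanupChains(ipt utiliptables.Interface, table utiliptables.Table, chains []utiliptables.Chain) {
    for _, chain := range chains {
        if err := ipt.FlushChain(table, chain); err != nil {
            glog.Errorf("error flushing chain %q: %v", chain, err)
        }
    }
    for _, chain := range chains {
        if err := ipt.DeleteChain(table, chain); err != nil {
            glog.Errorf("error deleting chain %q: %v", chain, err)
        }
    }
}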

* Check whether we have cached runtime state before starting a container that requests any device-plugin resource; if not, re-issue the Allocate gRPC calls. This handles the edge case where a pod is assigned to a node before the node populates its extended resource capacity.

* Fix panic in kubectl cp command

* Augmenting API call retry in nodeinfomanager

* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1

* Add a check to make sure the UseInstanceMetadata flag is true before getting data from the metadata service.

* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume

* Update Cluster Autoscaler version to 1.12.3

* add module 'nf_conntrack' in ipvs prerequisite check

* Allow disable outbound snat when Azure standard load balancer is used

* Ensure Azure load balancer cleaned up on 404 or 403

* fix smb unmount issue on Windows

fix log warning

use IsCorruptedMnt in GetMountRefs on Windows

use errorno in IsCorruptedMnt check

fix comments: add more error code

add more error no checking

change year

fix comments

fix bazel error

fix bazel

fix bazel

fix bazel

revert bazel change

* kubelet: updated logic of verifying a static critical pod

- check if a pod is static by its static pod info
- meanwhile, check if a pod is critical by its corresponding mirror pod info
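
A hedged sketch of that two-part check; IsStaticPod and IsCriticalPod exist in pkg/kubelet/types, while the injected mirror-pod lookup stands in for the kubelet's pod manager:

package kubeletcheck

import (
    v1 "k8s.io/api/core/v1"

    kubetypes "k8s.io/kubernetes/pkg/kubelet/types"
)

// isStaticCriticalPod checks staticness from the pod's own static-pod info
// and criticality from its corresponding mirror pod.
func isStaticCriticalPod(pod *v1.Pod, getMirrorPod func(*v1.Pod) (*v1.Pod, bool)) bool {
    if !kubetypes.IsStaticPod(pod) {
        return false
    }
    mirror, ok := getMirrorPod(pod)
    return ok && kubetypes.IsCriticalPod(mirror)
}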

* Allow session affinity a period of time to set up for new services.

This is to deal with the flaky session affinity test.

* Restore username and password kubectl flags

* build/gci: bump CNI version to 0.7.5

* fix race condition issue for smb mount on windows

change var name

* allows configuring NPD release and flags on GCI and add cluster e2e test

* allows configuring NPD image version in node e2e test and fix the test

* bump repd min size in e2es

* Kubernetes version v1.12.8-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.12.md for v1.12.7.

* stop vsphere cloud provider from spamming logs with `failed to patch IP`. Fixes: kubernetes#75236

* Do not delete existing VS and RS when starting

* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target

* Populate ClientCA in delegating auth setup

kubernetes#67768 accidentally removed population of the ClientCA
in the delegating auth setup code.  This restores it.

* Update gcp images with security patches

[stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes.
[fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes.
[fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[metadata-proxy addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.

* Fix AWS driver failing to provision specified fsType

* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template

* Bump debian-iptables to v11.0.2

* Avoid panic in cronjob sorting

This change handles the case where the ith cronjob may have its start
time set to nil.

Previously, the Less method could cause a panic in case the ith
cronjob had its start time set to nil, but the jth cronjob did not. It
would panic when calling Before on a nil StartTime.
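
A nil-safe ordering in sketch form; the function name and the choice to sort nil start times first are illustrative, not the controller's actual code:

package cronjobsort

import (
    batchv1 "k8s.io/api/batch/v1"
)

// lessByStartTime orders jobs by start time without panicking when either
// StartTime is nil; a nil start time sorts first here.
func lessByStartTime(a, b *batchv1.Job) bool {
    if a.Status.StartTime == nil {
        return b.Status.StartTime != nil
    }
    if b.Status.StartTime == nil {
        return false
    }
    return a.Status.StartTime.Before(b.Status.StartTime)
}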

* Add volume mode downgrade test: should not mount/map in <1.13

* disable HTTP2 ingress test

* ensure that the logic checks for differences in listeners

* Use Node-Problem-Detector v0.6.3 on GCI

* Delete only unscheduled pods if node doesn't exist anymore.

* proxy: Take into account exclude CIDRs while deleting legacy real servers

* Increase default maximumLoadBalancerRuleCount to 250

* kube-proxy: rename internal field for clarity

* kube-proxy: rename vars for clarity, fix err str

* kube-proxy: rename field for congruence

* kube-proxy: reject 0 endpoints on forward

Previously we only REJECTed on OUTPUT which works for packets from the
node but not for packets from pods on the node.

* kube-proxy: remove old cleanup rules

* Kube-proxy: REJECT LB IPs with no endpoints

We REJECT every other case.  Close this FIXME.

To get this to work in all cases, we have to process services in
filter.INPUT, since LB IPs might be managed as local addresses.
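
A minimal sketch of installing such a rule through the k8s.io/kubernetes/pkg/util/iptables interface; the rule arguments and comment are illustrative, not the proxier's actual rule:

package proxyrules

import (
    utiliptables "k8s.io/kubernetes/pkg/util/iptables"
)

// rejectNoEndpoints installs a REJECT for a load-balancer IP whose service
// has no endpoints. Processing happens in filter INPUT so that LB IPs
// managed as local addresses are also covered.
func rejectNoEndpoints(ipt utiliptables.Interface, lbIP, protocol, port string) error {
    _, err := ipt.EnsureRule(utiliptables.Append, utiliptables.TableFilter, utiliptables.ChainInput,
        "-d", lbIP, "-p", protocol, "--dport", port,
        "-m", "comment", "--comment", "no endpoints", "-j", "REJECT")
    return err
}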

* Retool HTTP and UDP e2e utils

This is a prefactoring for follow-up changes that need very similar
but subtly different tests.  Now it is more generic, though it pushes
a little logic up the stack.  That makes sense to me.

* Fix small race in e2e

Occasionally we get spurious errors about "no route to host" when we
race with kube-proxy.  This should reduce that.  It's mostly just log
noise.

* Fix Azure SLB support for multiple backend pools

Azure VM and vmssVM support multiple backend pools for the same SLB, but
not for different LBs.

* Set CPU metrics for init containers under containerd

Copies PR kubernetes#76503 for release-1.12.

metrics-server doesn't return metrics for pods with init containers
under containerd because they have incomplete CPU metrics returned by
the kubelet /stats/summary API.

This problem has been fixed in 1.14 (kubernetes#74336), but the cherry-picks
dropped the usageNanoCores metric.

This change adds the missing usageNanoCores metric for init containers
in Kubernetes v1.12.

Fixes kubernetes#76292
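
The reconstruction boils down to dividing the delta of the cumulative usageCoreNanoSeconds counter by the elapsed wall-clock time; a sketch with illustrative names:

package initstats

import "time"

// nanoCores derives usageNanoCores from two cumulative usageCoreNanoSeconds
// samples: CPU-nanoseconds consumed per wall-clock second equals nanocores.
func nanoCores(prevUsage, curUsage uint64, prevTime, curTime time.Time) uint64 {
    window := curTime.Sub(prevTime).Nanoseconds()
    if window <= 0 || curUsage < prevUsage {
        return 0 // illustrative fallback for missing or reset counters
    }
    return uint64(float64(curUsage-prevUsage) / float64(window) * float64(time.Second.Nanoseconds()))
}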

* Restore metrics-server using of IP addresses

This preference list is used to pick the preferred address field from
the k8s node object. It was introduced in metrics-server 0.3 and
changed the default behaviour to use DNS instead of IP addresses. It
was merged into k8s 1.12 and caused a breaking change by introducing
a dependency on DNS configuration.

* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12"

This reverts commit 535e3ad, reversing
changes made to 336d787.

* Kubernetes version v1.12.9-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.12.md for v1.12.8.

* Upgrade compute API to version 2019-03-01

* Replace vmss update API with instance-level update API

* Clean up code that is not required any more

* Add unit tests

* Update vendors

* Update Cluster Autoscaler to 1.12.5

* add shareName param in azure file storage class

skip creating azure file share if it already exists

remove comments

* Create the "internal" firewall rule for kubemark master.

This is equivalent to the "internal" firewall rule that is created for
the regular masters.
The main reason for doing it is to allow prometheus to scrape metrics
from various kubemark master components, e.g. kubelet.

Ref. kubernetes/perf-tests#503

* refactor detach azure disk retry operation

* move disk lock process to azure cloud provider

fix comments

fix import keymux check error

add unit test for attach/detach disk funcs

fix bazel issue

rebase

* fix disk list corruption issue

* Fix verify godeps failure for 1.12

github.com/evanphx/json-patch added a new tag at the same sha this
morning: https://github.com/evanphx/json-patch/releases/tag/v4.2.0

This confused godeps. This PR updates our file to match godeps'
expectation.

Fixes issue 77238

* Upgrade Stackdriver Logging Agent addon image from 1.6.0 to 1.6.8.

* Test kubectl cp escape

* Properly handle links in tar

* use k8s.gcr.io/pause instead of kubernetes/pause

* Pick up security patches for fluentd-gcp-scaler by upgrading to version 0.5.2

* Error when etcd3 watch finds delete event with nil prevKV

* Make CreatePrivilegedPSPBinding reentrant

Make CreatePrivilegedPSPBinding reentrant so tests using it (e.g. DNS) can be
executed more than once against a cluster. Without this change, such tests will
fail because the PSP already exists, short-circuiting test setup.
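
A minimal sketch of the reentrant pattern, treating "already exists" as success; the helper name is illustrative:

package psputil

import (
    policyv1beta1 "k8s.io/api/policy/v1beta1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/client-go/kubernetes"
)

// createPSPIgnoreExists makes creation reentrant: a PSP left over from a
// previous run is treated as success instead of short-circuiting test setup.
func createPSPIgnoreExists(c kubernetes.Interface, psp *policyv1beta1.PodSecurityPolicy) error {
    _, err := c.PolicyV1beta1().PodSecurityPolicies().Create(psp)
    if err != nil && !apierrors.IsAlreadyExists(err) {
        return err
    }
    return nil
}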

* check if Memory is not nil for container stats

* Bump ip-masq-agent version to v2.3.0

* In GuaranteedUpdate, retry on any error if we are working with stale data

* BoundServiceAccountTokenVolume: fix InClusterConfig

* fix CVE-2019-11244: `kubectl --http-cache=<world-accessible dir>` creates world-writeable cached schema files

* Terminate watchers when watch cache is destroyed

* honor overridden tokenfile, add InClusterConfig override tests

* fix incorrect prometheus metrics

* Kubernetes version v1.12.10-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.12.md for v1.12.9.

* fix azure retry issue when a 2XX response is returned with an error

fix comments

* Disable graceful termination for udp

* fix: update VM when detaching a non-existing disk

fix gofmt issue

fix build error

* Fix incorrect procMount defaulting

* ipvs: fix string check for IPVS protocol during graceful termination

Signed-off-by: Andrew Sy Kim <kiman@vmware.com>

* kubeadm: apply taints on non-control-plane node join

This backports a change made in 1.13 which fixes the process of
applying taints when joining worker nodes.

* fix flexvol stuck issue due to corrupted mnt point

fix comments about PathExists

fix comments

revert change in PathExists func

* Avoid the default server mux

* Default resourceGroup should be used when the value of the annotation azure-load-balancer-resource-group is an empty string
rjaini committed Jul 11, 2019
1 parent 2cf3e11 commit aa31f4f
Showing 27 changed files with 852 additions and 104 deletions.
407 changes: 358 additions & 49 deletions CHANGELOG-1.12.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion api/openapi-spec/swagger.json

Some generated files are not rendered by default.

8 changes: 4 additions & 4 deletions cluster/addons/ip-masq-agent/ip-masq-agent.yaml
@@ -8,14 +8,14 @@ metadata:
     kubernetes.io/cluster-service: "true"
     addonmanager.kubernetes.io/mode: Reconcile
 ---
-# https://github.com/kubernetes-incubator/ip-masq-agent/blob/v2.0.0/README.md
+# https://github.com/kubernetes-incubator/ip-masq-agent/blob/v2.3.0/README.md
 apiVersion: extensions/v1beta1
 kind: DaemonSet
 metadata:
   name: ip-masq-agent
   namespace: kube-system
   labels:
-    addonmanager.kubernetes.io/mode: Reconcile
+    addonmanager.kubernetes.io/mode: Reconcile
 spec:
   template:
     metadata:
@@ -29,7 +29,7 @@ spec:
       hostNetwork: true
       containers:
       - name: ip-masq-agent
-        image: k8s.gcr.io/ip-masq-agent-amd64:v2.1.1
+        image: k8s.gcr.io/ip-masq-agent-amd64:v2.3.0
         args:
         - --masq-chain=IP-MASQ
         resources:
@@ -42,7 +42,7 @@ spec:
       - name: config
         mountPath: /etc/config
       nodeSelector:
-        beta.kubernetes.io/masq-agent-ds-ready: "true"
+        beta.kubernetes.io/masq-agent-ds-ready: "true"
       volumes:
       - name: config
         configMap:
3 changes: 2 additions & 1 deletion cmd/kubeadm/app/cmd/join.go
@@ -492,7 +492,8 @@ func (j *Join) BootstrapKubelet(tlsBootstrapCfg *clientcmdapi.Config) error {
     // Write env file with flags for the kubelet to use. We do not need to write the --register-with-taints for the master,
     // as we handle that ourselves in the markmaster phase
     // TODO: Maybe we want to do that some time in the future, in order to remove some logic from the markmaster phase?
-    if err := kubeletphase.WriteKubeletDynamicEnvFile(&j.cfg.NodeRegistration, j.cfg.FeatureGates, false, kubeadmconstants.KubeletRunDirectory); err != nil {
+    registerTaintsUsingFlags := j.cfg.ControlPlane == false
+    if err := kubeletphase.WriteKubeletDynamicEnvFile(&j.cfg.NodeRegistration, j.cfg.FeatureGates, registerTaintsUsingFlags, kubeadmconstants.KubeletRunDirectory); err != nil {
         return err
     }
5 changes: 3 additions & 2 deletions cmd/kubelet/app/server.go
@@ -746,9 +746,10 @@ func run(s *options.KubeletServer, kubeDeps *kubelet.Dependencies, stopCh <-chan
     }

     if s.HealthzPort > 0 {
-        healthz.DefaultHealthz()
+        mux := http.NewServeMux()
+        healthz.InstallHandler(mux)
         go wait.Until(func() {
-            err := http.ListenAndServe(net.JoinHostPort(s.HealthzBindAddress, strconv.Itoa(int(s.HealthzPort))), nil)
+            err := http.ListenAndServe(net.JoinHostPort(s.HealthzBindAddress, strconv.Itoa(int(s.HealthzPort))), mux)
             if err != nil {
                 glog.Errorf("Starting health server failed: %v", err)
             }
14 changes: 12 additions & 2 deletions pkg/api/pod/util.go
@@ -293,12 +293,22 @@ func DropDisabledProcMountField(podSpec *api.PodSpec) {
     defProcMount := api.DefaultProcMount
     for i := range podSpec.Containers {
         if podSpec.Containers[i].SecurityContext != nil {
-            podSpec.Containers[i].SecurityContext.ProcMount = &defProcMount
+            if podSpec.Containers[i].SecurityContext.ProcMount != nil {
+                // The ProcMount field was improperly forced to non-nil in 1.12.
+                // If the feature is disabled, and the ProcMount field is present in the incoming object, force to the default value.
+                // Note: we cannot force the field to nil when the feature is disabled because it causes a diff against previously persisted data.
+                podSpec.Containers[i].SecurityContext.ProcMount = &defProcMount
+            }
         }
     }
     for i := range podSpec.InitContainers {
         if podSpec.InitContainers[i].SecurityContext != nil {
-            podSpec.InitContainers[i].SecurityContext.ProcMount = &defProcMount
+            if podSpec.InitContainers[i].SecurityContext.ProcMount != nil {
+                // The ProcMount field was improperly forced to non-nil in 1.12.
+                // If the feature is disabled, and the ProcMount field is present in the incoming object, force to the default value.
+                // Note: we cannot force the field to nil when the feature is disabled because it causes a diff against previously persisted data.
+                podSpec.InitContainers[i].SecurityContext.ProcMount = &defProcMount
+            }
         }
     }
 }
3 changes: 2 additions & 1 deletion pkg/cloudprovider/providers/azure/azure_backoff.go
@@ -336,8 +336,9 @@ func shouldRetryHTTPRequest(resp *http.Response, err error) bool {
     return false
 }

+// processHTTPRetryResponse : return true means stop retry, false means continue retry
 func (az *Cloud) processHTTPRetryResponse(service *v1.Service, reason string, resp *http.Response, err error) (bool, error) {
-    if resp != nil && isSuccessHTTPResponse(*resp) {
+    if err == nil && resp != nil && isSuccessHTTPResponse(*resp) {
         // HTTP 2xx suggests a successful response
         return true, nil
     }
5 changes: 5 additions & 0 deletions pkg/cloudprovider/providers/azure/azure_backoff_test.go
@@ -120,6 +120,11 @@ func TestProcessRetryResponse(t *testing.T) {
             code: http.StatusOK,
             stop: true,
         },
+        {
+            code: http.StatusOK,
+            err:  fmt.Errorf("some error"),
+            stop: false,
+        },
         {
             code: 399,
             stop: true,
pkg/cloudprovider/providers/azure/azure_controller_standard.go

@@ -17,7 +17,6 @@ limitations under the License.
 package azure

 import (
-    "fmt"
     "net/http"
     "strings"

@@ -132,7 +131,8 @@ func (as *availabilitySet) DetachDisk(diskName, diskURI string, nodeName types.N
     }

     if !bFoundDisk {
-        return nil, fmt.Errorf("detach azure disk failure, disk %s not found, diskURI: %s", diskName, diskURI)
+        // only log here, next action is to update VM status with original meta data
+        glog.Errorf("detach azure disk: disk %s not found, diskURI: %s", diskName, diskURI)
     }

     newVM := compute.VirtualMachine{
4 changes: 2 additions & 2 deletions pkg/cloudprovider/providers/azure/azure_controller_vmss.go
@@ -17,7 +17,6 @@ limitations under the License.
 package azure

 import (
-    "fmt"
     "net/http"
     "strings"

@@ -136,7 +135,8 @@ func (ss *scaleSet) DetachDisk(diskName, diskURI string, nodeName types.NodeName
     }

     if !bFoundDisk {
-        return nil, fmt.Errorf("detach azure disk failure, disk %s not found, diskURI: %s", diskName, diskURI)
+        // only log here, next action is to update VM status with original meta data
+        glog.Errorf("detach azure disk: disk %s not found, diskURI: %s", diskName, diskURI)
     }

     newVM := compute.VirtualMachineScaleSetVM{
5 changes: 4 additions & 1 deletion pkg/cloudprovider/providers/azure/azure_loadbalancer.go
@@ -1542,7 +1542,10 @@ func findSecurityRule(rules []network.SecurityRule, rule network.SecurityRule) b

 func (az *Cloud) getPublicIPAddressResourceGroup(service *v1.Service) string {
     if resourceGroup, found := service.Annotations[ServiceAnnotationLoadBalancerResourceGroup]; found {
-        return resourceGroup
+        resourceGroupName := strings.TrimSpace(resourceGroup)
+        if len(resourceGroupName) > 0 {
+            return resourceGroupName
+        }
     }

     return az.ResourceGroup
32 changes: 32 additions & 0 deletions pkg/cloudprovider/providers/azure/azure_loadbalancer_test.go
@@ -303,3 +303,35 @@ func TestEnsureLoadBalancerDeleted(t *testing.T) {
         assert.Equal(t, len(result), 0, "TestCase[%d]: %s", i, c.desc)
     }
 }
+
+func TestGetPublicIPAddressResourceGroup(t *testing.T) {
+    az := getTestCloud()
+
+    for i, c := range []struct {
+        desc        string
+        annotations map[string]string
+        expected    string
+    }{
+        {
+            desc:     "no annotation",
+            expected: "rg",
+        },
+        {
+            desc:        "annotation with empty string resource group",
+            annotations: map[string]string{ServiceAnnotationLoadBalancerResourceGroup: ""},
+            expected:    "rg",
+        },
+        {
+            desc:        "annotation with non-empty resource group",
+            annotations: map[string]string{ServiceAnnotationLoadBalancerResourceGroup: "rg2"},
+            expected:    "rg2",
+        },
+    } {
+        t.Run(c.desc, func(t *testing.T) {
+            s := &v1.Service{}
+            s.Annotations = c.annotations
+            real := az.getPublicIPAddressResourceGroup(s)
+            assert.Equal(t, c.expected, real, "TestCase[%d]: %s", i, c.desc)
+        })
+    }
+}
3 changes: 0 additions & 3 deletions pkg/kubectl/cmd/cp_test.go
@@ -129,8 +129,6 @@ func TestGetPrefix(t *testing.T) {
     }
 }

-<<<<<<< HEAD
-=======
 func TestStripPathShortcuts(t *testing.T) {
     tests := []struct {
         name string
@@ -253,7 +251,6 @@ func TestIsDestRelative(t *testing.T) {
     }
 }

->>>>>>> v1.12.9
 func checkErr(t *testing.T, err error) {
     if err != nil {
         t.Errorf("unexpected error: %v", err)
19 changes: 19 additions & 0 deletions pkg/kubelet/volumemanager/cache/actual_state_of_world.go
@@ -155,6 +155,11 @@ type ActualStateOfWorld interface {
     // mounted for the specified pod as requiring file system resize (if the plugin for the
     // volume indicates it requires file system resize).
     MarkFSResizeRequired(volumeName v1.UniqueVolumeName, podName volumetypes.UniquePodName)
+
+    // GetAttachedVolumes returns a list of volumes that are known to be attached
+    // to the node. This list can be used to determine volumes that are either in-use
+    // or have a mount/unmount operation pending.
+    GetAttachedVolumes() []AttachedVolume
 }

 // MountedVolume represents a volume that has successfully been mounted to a pod.
@@ -711,6 +716,20 @@ func (asw *actualStateOfWorld) GetGloballyMountedVolumes() []AttachedVolume {
     return globallyMountedVolumes
 }

+func (asw *actualStateOfWorld) GetAttachedVolumes() []AttachedVolume {
+    asw.RLock()
+    defer asw.RUnlock()
+    allAttachedVolumes := make(
+        []AttachedVolume, 0 /* len */, len(asw.attachedVolumes) /* cap */)
+    for _, volumeObj := range asw.attachedVolumes {
+        allAttachedVolumes = append(
+            allAttachedVolumes,
+            asw.newAttachedVolume(&volumeObj))
+    }
+
+    return allAttachedVolumes
+}
+
 func (asw *actualStateOfWorld) GetUnmountedVolumes() []AttachedVolume {
8 changes: 4 additions & 4 deletions pkg/kubelet/volumemanager/volume_manager.go
@@ -295,9 +295,9 @@ func (vm *volumeManager) GetVolumesInUse() []v1.UniqueVolumeName {
     // that volumes are marked in use as soon as the decision is made that the
     // volume *should* be attached to this node until it is safely unmounted.
     desiredVolumes := vm.desiredStateOfWorld.GetVolumesToMount()
-    mountedVolumes := vm.actualStateOfWorld.GetGloballyMountedVolumes()
-    volumesToReportInUse := make([]v1.UniqueVolumeName, 0, len(desiredVolumes)+len(mountedVolumes))
-    desiredVolumesMap := make(map[v1.UniqueVolumeName]bool, len(desiredVolumes)+len(mountedVolumes))
+    allAttachedVolumes := vm.actualStateOfWorld.GetAttachedVolumes()
+    volumesToReportInUse := make([]v1.UniqueVolumeName, 0, len(desiredVolumes)+len(allAttachedVolumes))
+    desiredVolumesMap := make(map[v1.UniqueVolumeName]bool, len(desiredVolumes)+len(allAttachedVolumes))

     for _, volume := range desiredVolumes {
         if volume.PluginIsAttachable {
@@ -308,7 +308,7 @@ func (vm *volumeManager) GetVolumesInUse() []v1.UniqueVolumeName {
         }
     }

-    for _, volume := range mountedVolumes {
+    for _, volume := range allAttachedVolumes {
         if volume.PluginIsAttachable {
             if _, exists := desiredVolumesMap[volume.VolumeName]; !exists {
                 volumesToReportInUse = append(volumesToReportInUse, volume.VolumeName)
9 changes: 5 additions & 4 deletions pkg/proxy/ipvs/graceful_termination.go
@@ -17,6 +17,7 @@ limitations under the License.
 package ipvs

 import (
+    "strings"
     "sync"
     "time"

@@ -164,10 +165,10 @@ func (m *GracefulTerminationManager) deleteRsFunc(rsToDelete *listItem) (bool, e
     }
     for _, rs := range rss {
         if rsToDelete.RealServer.Equal(rs) {
-            // Delete RS with no connections
-            // For UDP, ActiveConn is always 0
-            // For TCP, InactiveConn are connections not in ESTABLISHED state
-            if rs.ActiveConn+rs.InactiveConn != 0 {
+            // For UDP traffic, no graceful termination, we immediately delete the RS
+            // (existing connections will be deleted on the next packet because sysctlExpireNoDestConn=1)
+            // For other protocols, don't delete until all connections have expired
+            if strings.ToUpper(rsToDelete.VirtualServer.Protocol) != "UDP" && rs.ActiveConn+rs.InactiveConn != 0 {
                 glog.Infof("Not deleting, RS %v: %v ActiveConn, %v InactiveConn", rsToDelete.String(), rs.ActiveConn, rs.InactiveConn)
                 return false, nil
             }
16 changes: 8 additions & 8 deletions pkg/volume/flexvolume/unmounter.go
@@ -43,15 +43,15 @@ func (f *flexVolumeUnmounter) TearDown() error {
 }

 func (f *flexVolumeUnmounter) TearDownAt(dir string) error {
-
     pathExists, pathErr := util.PathExists(dir)
-    if !pathExists {
-        glog.Warningf("Warning: Unmount skipped because path does not exist: %v", dir)
-        return nil
-    }
-
-    if pathErr != nil && !util.IsCorruptedMnt(pathErr) {
-        return fmt.Errorf("Error checking path: %v", pathErr)
+    if pathErr != nil {
+        // only log warning here since plugins should anyways have to deal with errors
+        glog.Warningf("Error checking path: %v", pathErr)
+    } else {
+        if !pathExists {
+            glog.Warningf("Warning: Unmount skipped because path does not exist: %v", dir)
+            return nil
+        }
     }

     call := f.plugin.NewDriverCall(unmountCmd)
2 changes: 1 addition & 1 deletion staging/src/k8s.io/apiserver/pkg/server/healthz/doc.go
@@ -17,5 +17,5 @@ limitations under the License.
 // Package healthz implements basic http server health checking.
 // Usage:
 //   import "k8s.io/apiserver/pkg/server/healthz"
-//   healthz.DefaultHealthz()
+//   healthz.InstallHandler(mux)
 package healthz // import "k8s.io/apiserver/pkg/server/healthz"
9 changes: 0 additions & 9 deletions staging/src/k8s.io/apiserver/pkg/server/healthz/healthz.go
@@ -37,15 +37,6 @@ type HealthzChecker interface {
     Check(req *http.Request) error
 }

-var defaultHealthz = sync.Once{}
-
-// DefaultHealthz installs the default healthz check to the http.DefaultServeMux.
-func DefaultHealthz(checks ...HealthzChecker) {
-    defaultHealthz.Do(func() {
-        InstallHandler(http.DefaultServeMux, checks...)
-    })
-}
-
 // PingHealthz returns true automatically when checked
 var PingHealthz HealthzChecker = ping{}
5 changes: 3 additions & 2 deletions test/e2e/framework/deployment_util.go
@@ -107,8 +107,9 @@ func NewDeployment(deploymentName string, replicas int32, podLabels map[string]s
             TerminationGracePeriodSeconds: &zero,
             Containers: []v1.Container{
                 {
-                    Name:  imageName,
-                    Image: image,
+                    Name:            imageName,
+                    Image:           image,
+                    SecurityContext: &v1.SecurityContext{},
                 },
             },
         },
1 change: 1 addition & 0 deletions test/e2e/framework/jobs_util.go
@@ -83,6 +83,7 @@ func NewTestJob(behavior, name string, rPol v1.RestartPolicy, parallelism, compl
                         Name: "data",
                     },
                 },
+                SecurityContext: &v1.SecurityContext{},
             },
         },
     },
5 changes: 3 additions & 2 deletions test/e2e/framework/rs_util.go
@@ -148,8 +148,9 @@ func NewReplicaSet(name, namespace string, replicas int32, podLabels map[string]
         Spec: v1.PodSpec{
             Containers: []v1.Container{
                 {
-                    Name:  imageName,
-                    Image: image,
+                    Name:            imageName,
+                    Image:           image,
+                    SecurityContext: &v1.SecurityContext{},
                 },
             },
         },
7 changes: 4 additions & 3 deletions test/e2e/framework/statefulset_utils.go
@@ -809,9 +809,10 @@ func NewStatefulSet(name, ns, governingSvcName string, replicas int32, statefulP
             Spec: v1.PodSpec{
                 Containers: []v1.Container{
                     {
-                        Name:         "nginx",
-                        Image:        imageutils.GetE2EImage(imageutils.Nginx),
-                        VolumeMounts: mounts,
+                        Name:            "nginx",
+                        Image:           imageutils.GetE2EImage(imageutils.Nginx),
+                        VolumeMounts:    mounts,
+                        SecurityContext: &v1.SecurityContext{},
                     },
                 },
                 Volumes: vols,
1 change: 1 addition & 0 deletions test/e2e/storage/BUILD
@@ -5,6 +5,7 @@ go_library(
     srcs = [
         "csi_objects.go",
         "csi_volumes.go",
+        "detach_mounted.go",
         "empty_dir_wrapper.go",
         "ephemeral_volume.go",
         "flexvolume.go",