
Conversation


@kokhang kokhang commented Sep 22, 2016

hpe-release-1.3 branch backport of kubernetes#33123
See issue kubernetes#23568

The --node-ip arg for kubelet is ignored when kubelet is configured
with a cloud provider.
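
For context, the upstream fix being backported (kubernetes#33123) makes the kubelet filter the cloud provider's reported addresses down to the one matching --node-ip. A minimal sketch of that selection logic, with the address type re-declared locally so it stands alone (not the literal upstream code):

```go
package sketch

import (
	"fmt"
	"net"
)

// NodeAddress mirrors the Kubernetes API type, re-declared here so the
// sketch stands alone.
type NodeAddress struct {
	Type    string // "InternalIP", "ExternalIP", ...
	Address string
}

// selectNodeIP keeps only the cloud-reported address that matches the
// --node-ip flag; with no flag set, the addresses pass through unchanged.
func selectNodeIP(addrs []NodeAddress, nodeIP net.IP) ([]NodeAddress, error) {
	if nodeIP == nil {
		return addrs, nil
	}
	for _, a := range addrs {
		if ip := net.ParseIP(a.Address); ip != nil && ip.Equal(nodeIP) {
			return []NodeAddress{a}, nil
		}
	}
	return nil, fmt.Errorf("no cloud-reported address matches node-ip %v", nodeIP)
}
```

Erroring out when nothing matches surfaces a misconfigured --node-ip instead of silently registering the wrong address.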

Vipul Sabhaya and others added 16 commits September 1, 2016 14:48
* Currently the vSphere cloud provider treats the Datacenter as the failure
  zone. This doesn't necessarily work, since in the current implementation
  Kubernetes nodes cannot span Datacenters, so every node lands in the
  same failure zone.
* This change introduces Clusters as the failure zone, while treating
  Datacenters as Regions (see the sketch after this list)
* Also updated tests for Zones
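
A rough Go sketch of the resulting mapping; the Zone struct mirrors cloudprovider.Zone, and clusterOf/datacenterOf are hypothetical stand-ins for the vSphere API lookups the real provider performs:

```go
package sketch

// Zone mirrors cloudprovider.Zone, re-declared for the sketch.
type Zone struct {
	FailureDomain string
	Region        string
}

// clusterOf and datacenterOf are hypothetical stand-ins for the
// vSphere API lookups the real provider performs.
func clusterOf(vmPath string) (string, error)    { return "compute-cluster-1", nil }
func datacenterOf(vmPath string) (string, error) { return "dc-east", nil }

// getZone shows the new mapping: Cluster -> FailureDomain,
// Datacenter -> Region.
func getZone(vmPath string) (Zone, error) {
	cluster, err := clusterOf(vmPath)
	if err != nil {
		return Zone{}, err
	}
	dc, err := datacenterOf(vmPath)
	if err != nil {
		return Zone{}, err
	}
	return Zone{FailureDomain: cluster, Region: dc}, nil
}
```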
Config-drive is an alternate no-network method for publishing local
instance metadata on OpenStack.  This change implements support for
fetching data from config-drive, and tries it before querying the
network metadata service (since config-drive will fail quickly if not
available).

Note config-drive involves mounting the filesystem with label
"config-2", so anyone using config-drive and running kubelet in a
container will need to ensure /dev/disk/by-label/config-2 is available
inside the container (read-only).
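
That lookup order comes down to something like the sketch below. The mount-point argument and helper name are illustrative; the openstack/latest/meta_data.json path and the 169.254.169.254 metadata service are the standard OpenStack ones:

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"path/filepath"
)

// Metadata holds the fields we care about from meta_data.json.
type Metadata struct {
	UUID             string `json:"uuid"`
	Name             string `json:"name"`
	AvailabilityZone string `json:"availability_zone"`
}

const metadataPath = "openstack/latest/meta_data.json"

// getMetadata tries the mounted config drive first, then falls back to
// the network metadata service. mountPoint is wherever the filesystem
// labelled "config-2" has been mounted (illustrative).
func getMetadata(mountPoint string) (*Metadata, error) {
	if f, err := os.Open(filepath.Join(mountPoint, metadataPath)); err == nil {
		defer f.Close()
		md := &Metadata{}
		return md, json.NewDecoder(f).Decode(md)
	}
	// Config drive absent or unreadable: query the metadata service.
	resp, err := http.Get("http://169.254.169.254/" + metadataPath)
	if err != nil {
		return nil, fmt.Errorf("no config drive and metadata service unreachable: %v", err)
	}
	defer resp.Body.Close()
	md := &Metadata{}
	return md, json.NewDecoder(resp.Body).Decode(md)
}
```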
Previously the OpenStack provider just returned the hostname in
CurrentNodeName.  With this change, we return the local OpenStack
instance name, as the API intended.
Set FailureDomain in GetZone result to value of availability_zone in
local instance metadata.
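
Both this and the instance-name change above then reduce to one-liners over the metadata fetched in the previous sketch (Metadata type reused from there; function names illustrative):

```go
// currentNodeName now returns the OpenStack instance name instead of
// the local hostname (Metadata as in the previous sketch).
func currentNodeName(md *Metadata) string { return md.Name }

// getZoneFailureDomain feeds GetZone's FailureDomain from the
// instance's availability_zone.
func getZoneFailureDomain(md *Metadata) string { return md.AvailabilityZone }
```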
- Fix unmount for vsanDatastore
- Add support for vsan datastore
To deal with Security Groups in kubernetes we need the gophercloud
code for groups and rules.

This adds the required vendored code
This allows security groups to be created and attached to the neutron
port that the loadbalancer is using on the subnet.

The security group ID that is assigned to the nodes needs to be
provided, to allow traffic from the loadbalancer to the nodePort
to be reflected in the rules.

This adds two config items to the LoadBalancer options (sketched below):

ManageSecurityGroups (bool)
NodeSecurityGroupID  (string)
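
As a Go sketch, the new options slot into the provider's gcfg-parsed config struct roughly like this; the gcfg key spellings are my assumption inferred from the field names, not confirmed from the change:

```go
// LoadBalancerOpts sketches the two new [LoadBalancer] settings.
type LoadBalancerOpts struct {
	// Create a security group and attach it to the load balancer's
	// Neutron port.
	ManageSecurityGroups bool `gcfg:"manage-security-groups"` // assumed key
	// Pre-existing security group on the nodes that receives the
	// nodePort ingress rules.
	NodeSecurityGroupID string `gcfg:"node-security-group"` // assumed key
}
```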
GetDevicePath was previously coded to support only Nova+KVM style device
paths; update it so we also support Nova+ESXi, and leave the code such that
new pattern additions are easy.
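
The easy-to-extend shape is a list of candidate /dev/disk/by-id name patterns, roughly as below. The virtio- and wwn- prefixes are assumptions about typical KVM and ESXi device naming, not quoted from the change:

```go
package sketch

import (
	"fmt"
	"os"
	"strings"
)

// getDevicePath scans /dev/disk/by-id for a symlink matching one of the
// known hypervisor naming patterns for a Cinder volume ID (a 36-char
// UUID).
func getDevicePath(volumeID string) (string, error) {
	candidates := []string{
		"virtio-" + volumeID[:20],                        // Nova+KVM truncates the ID
		"wwn-0x" + strings.ReplaceAll(volumeID, "-", ""), // Nova+ESXi exposes a WWN
		// New hypervisor patterns: append another candidate here.
	}
	entries, err := os.ReadDir("/dev/disk/by-id/")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		for _, c := range candidates {
			if strings.Contains(e.Name(), c) {
				return "/dev/disk/by-id/" + e.Name(), nil
			}
		}
	}
	return "", fmt.Errorf("no device found for volume %s", volumeID)
}
```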
This has been unused since 542f2dc, and depends on deviceName, which
can no longer be relied upon (see issue kubernetes#33128).

This needs to be removed now, as part of kubernetes#33128, as the code can't be
updated to attempt device detection and then fall back to the Cinder-provided
deviceName: detection "fails" when the device is gone, and if Cinder has
reported a deviceName that another volume is in reality using, then this
will block forever (or until that other, unrelated volume has been detached).
See issue kubernetes#33128

We can't rely on the device name provided by Cinder, and thus must perform
detection based on the drive serial number (a.k.a. its Cinder ID) on the
kubelet itself.

This patch re-works the Cinder volume attacher to ignore the supplied
deviceName, and instead defer to the pre-existing GetDevicePath method to
discover the device path based on its serial number and the /dev/disk/by-id
mapping.

This new behavior is controlled by a config option, since always falling back
to the Cinder value when we can't discover a device would risk the case where
a device never shows up, we fall back to Cinder's guess, and the wrong
disk is detected as attached.
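
Pulled together, the attacher's wait loop ends up shaped roughly like this, reusing getDevicePath from the earlier sketch (add "time" and "fmt" to its imports). The trustDevicePath flag name is illustrative for the config option mentioned above:

```go
// waitForAttach ignores Cinder's reported deviceName and polls local
// detection instead. trustDevicePath restores the old trust-Cinder
// behavior when explicitly opted into.
func waitForAttach(volumeID, cinderDeviceName string, trustDevicePath bool) (string, error) {
	if trustDevicePath {
		return cinderDeviceName, nil // legacy behavior, opt-in only
	}
	for i := 0; i < 60; i++ {
		if path, err := getDevicePath(volumeID); err == nil {
			return path, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("volume %s never appeared under /dev/disk/by-id", volumeID)
}
```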
Don't rely on device name provided by Cinder
Security Groups for OpenStack Load Balancers
@kokhang kokhang changed the base branch from master to release-1.3 September 22, 2016 19:26
@kokhang kokhang force-pushed the bug/23568-backport-1.3 branch from 57494f6 to 8a26306 on September 22, 2016 19:30
@kokhang kokhang closed this Sep 22, 2016
@kokhang kokhang deleted the bug/23568-backport-1.3 branch September 22, 2016 19:34
