Add support for nested pods with Vlan trunk port
Enable support for pods running in Nova VMs.

I will be pushing a patch with devstack plugin changes.

Reference: https://review.openstack.org/#/c/411116/1/doc/source/devref/howto_binding_drivers.rst
Change-Id: Ib2aed7a0d1fa705f17a62d0fa4e272f19212e39e
Partially-Implements: blueprint binding-drivers-porting
vikaschoudhary16 committed Jan 18, 2017
1 parent f4aab74 commit dc65eb1
Showing 19 changed files with 944 additions and 2 deletions.
39 changes: 39 additions & 0 deletions README.rst
@@ -55,6 +55,45 @@ vif binding executables. For example, if you installed it on Debian or Ubuntu::
bindir = /usr/local/libexec/kuryr


How to try out nested-pods locally:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. To install OpenStack services, run devstack with ``devstack/local.conf.pod-in-vm.undercloud.sample``.
   Ensure that the ``trunk`` service plugin is enabled in ``/etc/neutron/neutron.conf``::

[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin

2. Launch a VM with a `Neutron trunk port <https://wiki.openstack.org/wiki/Neutron/TrunkPort>`_
   (a scripted sketch of this step appears at the end of this section).
3. Inside the VM, install and set up Kubernetes along with Kuryr using devstack:
   - Since the undercloud Neutron will be used by the pods, Neutron services
     should be disabled in localrc.
   - Clone kuryr-kubernetes into ``/opt/stack/``.
   - In ``devstack/plugin.sh``, comment out `configure_neutron_defaults <https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/plugin.sh#L453>`_.
     This function fetches the UUIDs of the default Neutron resources (project,
     pod_subnet, etc.) with a local neutron client and writes them into
     ``/etc/kuryr/kuryr.conf``. That does not work here because Neutron is
     running remotely on the undercloud, so the call must be commented out and
     these values configured manually in ``/etc/kuryr/kuryr.conf`` (see step 4).
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
4. Once devstack is done and all services are up inside the VM:
   - Configure ``/etc/kuryr/kuryr.conf`` with the UUIDs of the Neutron
     resources from the undercloud::

[neutron_defaults]
ovs_bridge = br-int
pod_security_groups = <UNDERCLOUD_DEFAULT_SG_UUID>
pod_subnet = <UNDERCLOUD_SUBNET_FOR_PODS_UUID>
project = <UNDERCLOUD_DEFAULT_PROJECT_UUID>
worker_nodes_subnet = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>

   - Configure ``pod_vif_driver`` as ``nested-vlan``::

[kubernetes]
pod_vif_driver = nested-vlan

   - Restart the kuryr-k8s-controller service from within the devstack screen.

Now launch pods using kubectl; the undercloud Neutron will provide the networking.
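
A minimal sketch of step 2's trunk setup, using python-neutronclient rather
than the CLI (a client recent enough to include the trunk extension is
assumed); the credentials, network UUID and resource names below are
illustrative placeholders, not values defined by this repository::

    from keystoneauth1 import identity, session
    from neutronclient.v2_0 import client

    # Authenticate against the undercloud Keystone (example credentials).
    auth = identity.Password(auth_url='http://UNDERCLOUD_IP/identity/v3',
                             username='admin', password='pass',
                             project_name='admin',
                             user_domain_name='Default',
                             project_domain_name='Default')
    neutron = client.Client(session=session.Session(auth=auth))

    # The port the VM boots with becomes the trunk's parent port; its
    # fixed IP is the hostIP that Kubernetes later reports for the node.
    parent = neutron.create_port({'port': {'network_id': 'NET_UUID',
                                           'name': 'k8s-node-port'}})['port']

    # Create the trunk on that parent port; Kuryr adds a subport per pod.
    neutron.create_trunk({'trunk': {'port_id': parent['id'],
                                    'name': 'k8s-node-trunk'}})

Boot the VM with this parent port and continue from step 3.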

Features
--------

28 changes: 28 additions & 0 deletions devstack/local.conf.pod-in-vm.overcloud.sample
@@ -0,0 +1,28 @@
[[local|localrc]]

RECLONE="no"

enable_plugin kuryr-kubernetes \
https://git.openstack.org/openstack/kuryr-kubernetes

OFFLINE="no"
LOGFILE=devstack.log
LOG_COLOR=False
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
IDENTITY_API_VERSION=3
ENABLED_SERVICES=""

enable_service key
enable_service mysql

enable_service docker
enable_service etcd
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes
32 changes: 32 additions & 0 deletions devstack/local.conf.pod-in-vm.undercloud.sample
@@ -0,0 +1,32 @@
[[local|localrc]]

# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"

# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# If you want the screen tabs logged in a specific location, you can use:
# SCREEN_LOGDIR="${HOME}/devstack_logs"

# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
TUNNEL_TYPE=vxlan
# Enable Keystone v3
IDENTITY_API_VERSION=3

# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[securitygroup]
firewall_driver = openvswitch
52 changes: 52 additions & 0 deletions kuryr_kubernetes/cni/binding/nested.py
@@ -0,0 +1,52 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from kuryr_kubernetes.cni.binding import base as b_base
from kuryr_kubernetes import config


class VlanDriver(object):
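    """Binds pod VIFs by creating a VLAN subinterface of the VM's trunk link.

    The VLAN device is created in the host network namespace and then moved
    into the container's network namespace.
    """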
def connect(self, vif, ifname, netns):
h_ipdb = b_base.get_ipdb()
c_ipdb = b_base.get_ipdb(netns)

        # NOTE(vikasc): Ideally 'ifname' should be used here, but a
        # temporary name is used instead while creating the device in the
        # host network namespace. This is because CNI expects 'eth0' as the
        # interface name, and if the host already has an interface named
        # 'eth0', device creation would fail with an 'already exists' error.
temp_name = vif.vif_name

# TODO(vikasc): evaluate whether we should have stevedore
# driver for getting the link device.
vm_iface_name = config.CONF.binding.link_iface
vlan_id = vif.vlan_id

with h_ipdb.create(ifname=temp_name,
link=h_ipdb.interfaces[vm_iface_name],
kind='vlan', vlan_id=vlan_id) as iface:
iface.net_ns_fd = netns

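        # Rename the device to the CNI-requested name inside the container
        # namespace and configure it with the Neutron port's MTU and MAC.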
with c_ipdb.interfaces[temp_name] as iface:
iface.ifname = ifname
iface.mtu = vif.network.mtu
iface.address = str(vif.address)
iface.up()

def disconnect(self, vif, ifname, netns):
# NOTE(vikasc): device will get deleted with container namespace, so
# nothing to be done here.
pass
5 changes: 5 additions & 0 deletions kuryr_kubernetes/cni/main.py
@@ -25,6 +25,7 @@
from kuryr_kubernetes.cni import handlers as h_cni
from kuryr_kubernetes import config
from kuryr_kubernetes import constants as k_const
from kuryr_kubernetes import objects
from kuryr_kubernetes import watcher as k_watcher

LOG = logging.getLogger(__name__)
@@ -75,6 +76,10 @@ def run():
# REVISIT(ivc): current CNI implementation provided by this package is
# experimental and its primary purpose is to enable development of other
# components (e.g. functional tests, service/LBaaSv2 support)

# TODO(vikasc): Should be done using dynamically loadable OVO types plugin.
objects.register_locally_defined_vifs()

runner = cni_api.CNIRunner(K8sCNIPlugin())

def _timeout(signum, frame):
5 changes: 4 additions & 1 deletion kuryr_kubernetes/config.py
@@ -56,9 +56,12 @@
help=_("Default Neutron security groups' IDs for Kubernetes pods")),
cfg.StrOpt('ovs_bridge',
help=_("Default OpenVSwitch integration bridge"),
sample_default="br-int")
sample_default="br-int"),
cfg.StrOpt('worker_nodes_subnet',
               help=_("Neutron subnet ID for k8s worker node VMs.")),
]


CONF = cfg.CONF
CONF.register_opts(kuryr_k8s_opts)
CONF.register_opts(k8s_opts, group='kubernetes')
2 changes: 2 additions & 0 deletions kuryr_kubernetes/constants.py
@@ -26,5 +26,7 @@
K8S_ANNOTATION_PREFIX = 'openstack.org/kuryr'
K8S_ANNOTATION_VIF = K8S_ANNOTATION_PREFIX + '-vif'

K8S_OS_VIF_NOOP_PLUGIN = "noop"

CNI_EXCEPTION_CODE = 100
CNI_TIMEOUT_CODE = 200
190 changes: 190 additions & 0 deletions kuryr_kubernetes/controller/drivers/nested_vlan_vif.py
@@ -0,0 +1,190 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from time import sleep

from kuryr.lib._i18n import _LE
from kuryr.lib import constants as kl_const
from kuryr.lib import segmentation_type_drivers as seg_driver
from neutronclient.common import exceptions as n_exc
from oslo_config import cfg as oslo_cfg
from oslo_log import log as logging

from kuryr_kubernetes import clients
from kuryr_kubernetes import config
from kuryr_kubernetes import constants as const
from kuryr_kubernetes.controller.drivers import generic_vif
from kuryr_kubernetes import exceptions as k_exc
from kuryr_kubernetes import os_vif_util as ovu


LOG = logging.getLogger(__name__)

DEFAULT_MAX_RETRY_COUNT = 3
DEFAULT_RETRY_INTERVAL = 1


class NestedVlanPodVIFDriver(generic_vif.GenericPodVIFDriver):
"""Manages ports for nested-containers to provide VIFs."""

def request_vif(self, pod, project_id, subnets, security_groups):
neutron = clients.get_neutron_client()
parent_port = self._get_parent_port(neutron, pod)
trunk_id = self._get_trunk_id(parent_port)

rq = self._get_port_request(pod, project_id, subnets, security_groups)
port = neutron.create_port(rq).get('port')

vlan_id = self._add_subport(neutron, trunk_id, port['id'])

vif_plugin = const.K8S_OS_VIF_NOOP_PLUGIN
vif = ovu.neutron_to_osvif_vif(vif_plugin, port, subnets)
vif.vlan_id = vlan_id
return vif

def release_vif(self, pod, vif):
neutron = clients.get_neutron_client()
parent_port = self._get_parent_port(neutron, pod)
trunk_id = self._get_trunk_id(parent_port)
self._remove_subport(neutron, trunk_id, vif.id)
self._release_vlan_id(vif.vlan_id)
try:
neutron.delete_port(vif.id)
except n_exc.PortNotFoundClient:
LOG.debug('Unable to release port %s as it no longer exists.',
vif.id)

def _get_port_request(self, pod, project_id, subnets, security_groups):
port_req_body = {'project_id': project_id,
'name': self._get_port_name(pod),
'network_id': self._get_network_id(subnets),
'fixed_ips': ovu.osvif_to_neutron_fixed_ips(subnets),
'device_owner': kl_const.DEVICE_OWNER,
'admin_state_up': True}

if security_groups:
port_req_body['security_groups'] = security_groups

return {'port': port_req_body}

def _get_trunk_id(self, port):
try:
return port['trunk_details']['trunk_id']
except KeyError:
LOG.error(_LE("Neutron port is missing trunk details. "
"Please ensure that k8s node port is associated "
"with a Neutron vlan trunk"))
raise k_exc.K8sNodeTrunkPortFailure

def _get_parent_port(self, neutron, pod):
node_subnet_id = config.CONF.neutron_defaults.worker_nodes_subnet
if not node_subnet_id:
raise oslo_cfg.RequiredOptError('worker_nodes_subnet',
'neutron_defaults')

try:
            # REVISIT(vikasc): Assumption is being made that hostIP is the
            # IP of the trunk interface on the node (VM).
node_fixed_ip = pod['status']['hostIP']
except KeyError:
if pod['status']['conditions'][0]['type'] != "Initialized":
LOG.debug("Pod condition type is not 'Initialized'")

LOG.error(_LE("Failed to get parent vm port ip"))
raise

try:
fixed_ips = ['subnet_id=%s' % str(node_subnet_id),
'ip_address=%s' % str(node_fixed_ip)]
ports = neutron.list_ports(fixed_ips=fixed_ips)
except n_exc.NeutronClientException as ex:
LOG.error(_LE("Parent vm port with fixed ips %s not found!"),
fixed_ips)
raise ex

if ports['ports']:
return ports['ports'][0]
else:
LOG.error(_LE("Neutron port for vm port with fixed ips %s"
" not found!"), fixed_ips)
raise k_exc.K8sNodeTrunkPortFailure

def _add_subport(self, neutron, trunk_id, subport):
"""Adds subport port to Neutron trunk
This method gets vlanid allocated from kuryr segmentation driver.
In active/active HA type deployment, possibility of vlanid conflict
is there. In such a case, vlanid will be requested again and subport
addition is re-tried. This is tried DEFAULT_MAX_RETRY_COUNT times in
case of vlanid conflict.
"""
# TODO(vikasc): Better approach for retrying in case of
# vlan-id conflict.
retry_count = 1
while True:
try:
vlan_id = self._get_vlan_id(trunk_id)
except n_exc.NeutronClientException as ex:
LOG.error(_LE("Getting VlanID for subport on "
"trunk %s failed!!"), trunk_id)
raise ex
subport = [{'segmentation_id': vlan_id,
'port_id': subport,
'segmentation_type': 'vlan'}]
try:
neutron.trunk_add_subports(trunk_id,
{'sub_ports': subport})
except n_exc.Conflict as ex:
if retry_count < DEFAULT_MAX_RETRY_COUNT:
LOG.error(_LE("vlanid already in use on trunk, "
"%s. Retrying..."), trunk_id)
retry_count += 1
sleep(DEFAULT_RETRY_INTERVAL)
continue
else:
LOG.error(_LE(
"MAX retry count reached. Failed to add subport"))
raise ex

except n_exc.NeutronClientException as ex:
LOG.error(_LE("Error happened during subport"
"addition to trunk, %s"), trunk_id)
raise ex
return vlan_id

def _remove_subport(self, neutron, trunk_id, subport_id):
subport_id = [{'port_id': subport_id}]
try:
neutron.trunk_remove_subports(trunk_id,
{'sub_ports': subport_id})
except n_exc.NeutronClientException as ex:
            LOG.error(_LE(
                "Error happened during subport removal from trunk "
                "%s"), trunk_id)
raise ex

def _get_vlan_id(self, trunk_id):
vlan_ids = self._get_in_use_vlan_ids_set(trunk_id)
return seg_driver.allocate_segmentation_id(vlan_ids)

    def _release_vlan_id(self, vlan_id):
        return seg_driver.release_segmentation_id(vlan_id)

def _get_in_use_vlan_ids_set(self, trunk_id):
vlan_ids = set()
neutron = clients.get_neutron_client()
trunk = neutron.show_trunk(trunk_id)
for port in trunk['trunk']['sub_ports']:
vlan_ids.add(port['segmentation_id'])

return vlan_ids
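
A side note on the segmentation handling above: a minimal sketch of the
VLAN-ID round-trip the driver delegates to kuryr.lib, using only the two
calls seen in _get_vlan_id and _release_vlan_id (the in-use set is an
illustrative stand-in for a trunk's existing subports):

from kuryr.lib import segmentation_type_drivers as seg_driver

in_use = {100, 101}  # segmentation IDs already claimed on the trunk
vlan_id = seg_driver.allocate_segmentation_id(in_use)  # pick a free VLAN ID
assert vlan_id not in in_use
seg_driver.release_segmentation_id(vlan_id)  # return the ID to the pool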
