Rebase openshift/kuryr-kubernetes from https://opendev.org/openstack/kuryr-kubernetes #376

Merged: 8 commits, Oct 19, 2020
6 changes: 3 additions & 3 deletions contrib/vagrant/vagrant.sh
@@ -14,12 +14,12 @@ set -ex
 
 export HOST_IP=127.0.0.1
 
-# run script
-bash /vagrant/devstack.sh "$1"
-
 # Enable IPv6
 sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
 sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
 
+# run script
+bash /vagrant/devstack.sh "$1"
+
 #set environment variables for kuryr
 su "$OS_USER" -c "echo 'source /vagrant/config/kuryr_rc' >> ~/.bash_profile"
2 changes: 1 addition & 1 deletion devstack/local.conf.odl.sample
@@ -160,7 +160,7 @@ enable_service kuryr-daemon
 # By default, some Kuryr Handlers are set for DevStack installation. This can be
 # further tweaked in order to enable additional ones such as Network Policy. If
 # you want to add additional handlers those can be set here:
-# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
+# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
 
 # Kuryr Ports Pools
 # =================
3 changes: 1 addition & 2 deletions devstack/local.conf.openshift.sample
@@ -158,8 +158,7 @@ enable_service kuryr-daemon
 # By default, some Kuryr Handlers are set for DevStack installation. This can be
 # further tweaked in order to enable additional ones such as Network Policy. If
 # you want to add additional handlers those can be set here:
-# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
-
+# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
 # Kuryr Ports Pools
 # =================
 #
3 changes: 1 addition & 2 deletions devstack/local.conf.ovn.sample
@@ -208,8 +208,7 @@ KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
 # By default, some Kuryr Handlers are set for DevStack installation. This can be
 # further tweaked in order to enable additional ones such as Network Policy. If
 # you want to add additional handlers those can be set here:
-# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
-
+# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
 # Kuryr Ports Pools
 # =================
 #
2 changes: 1 addition & 1 deletion devstack/local.conf.sample
@@ -188,7 +188,7 @@ enable_service kuryr-daemon
 # By default, some Kuryr Handlers are set for DevStack installation. This can be
 # further tweaked in order to enable additional ones such as Network Policy. If
 # you want to add additional handlers those can be set here:
-# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
+# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
 
 # Kuryr Ports Pools
 # =================
4 changes: 2 additions & 2 deletions doc/source/devref/kuryr_kubernetes_design.rst
@@ -178,8 +178,8 @@ currently includes the following:
 ================ =========================
 vif              Pod
 kuryrport        KuryrPort CRD
-lb               Endpoint
-lbaasspec        Service
+endpoints        Endpoint
+service          Service
 ================ =========================
 
 For example, to enable only the 'vif' controller handler we should set the
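Per the renamed handlers in the table above, a minimal sketch of enabling the new handler set in kuryr.conf (the /etc/kuryr/kuryr.conf path is an assumption; adjust to your deployment):

    # Append the new handler names; this mirrors the new default set
    # from config.py further down in this diff.
    cat >> /etc/kuryr/kuryr.conf <<'EOF'
    [kubernetes]
    enabled_handlers=vif,endpoints,service,kuryrloadbalancer,kuryrport
    EOF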
9 changes: 6 additions & 3 deletions doc/source/installation/network_namespace.rst
@@ -13,7 +13,8 @@ the next steps are needed:
 .. code-block:: ini
 
 [kubernetes]
-enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnetwork,kuryrport
+enabled_handlers=vif,endpoints,service,kuryrloadbalancer,namespace,
+kuryrnetwork,kuryrport
 
 Note that if you also want to enable prepopulation of ports pools upon new
 namespace creation, you need to also add the kuryrnetwork_population
@@ -22,7 +23,8 @@ the next steps are needed:
 .. code-block:: ini
 
 [kubernetes]
-enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnetwork,kuryrport,kuryrnetwork_population
+enabled_handlers=vif,endpoints,service,kuryrloadbalancer,namespace,
+kuryrnetwork,kuryrport,kuryrnetwork_population
 
 #. Enable the namespace subnet driver by modifying the default
 pod_subnet_driver option at kuryr.conf:
@@ -73,7 +75,8 @@ to add the namespace handler and state the namespace subnet driver with:
 .. code-block:: console
 
 KURYR_SUBNET_DRIVER=namespace
-KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace,kuryrnetwork,kuryrport
+KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,namespace,
+kuryrnetwork,kuryrport
 
 .. note::
 
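After enabling the namespace-related handlers above, a quick smoke test is to create a namespace and look for its per-namespace Kuryr resource; a sketch, with the kuryrnetworks resource name inferred from the kuryrnetwork handler:

    kubectl create namespace demo
    # A KuryrNetwork custom resource should appear for the new namespace.
    kubectl get kuryrnetworks -n demo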
10 changes: 7 additions & 3 deletions doc/source/installation/network_policy.rst
@@ -10,7 +10,9 @@ be found at :doc:`./devstack/containerized`):
 .. code-block:: ini
 
 [kubernetes]
-enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetwork,kuryrnetworkpolicy,kuryrport
+enabled_handlers=vif,endpoints,service,kuryrloadbalancer,policy,
+pod_label,namespace,kuryrnetwork,kuryrnetworkpolicy,
+kuryrport
 
 Note that if you also want to enable prepopulation of ports pools upon new
 namespace creation, you need to also add the kuryrnetwork_population handler
@@ -19,7 +21,9 @@ namespace creation, you need to also add the kuryrnetwork_population handler
 .. code-block:: ini
 
 [kubernetes]
-enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy,kuryrnetwork,kuryrnetwork_population,kuryrport
+enabled_handlers=vif,endpoints,service,kuryrloadbalancer,policy,
+pod_label,namespace,kuryrnetworkpolicy,kuryrnetwork,
+kuryrport,kuryrnetwork_population
 
 After that, also enable the security group drivers for policies:
 
@@ -82,7 +86,7 @@ to add the policy, pod_label and namespace handler and drivers with:
 
 .. code-block:: bash
 
-KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy,kuryrport
+KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,policy,pod_label,namespace,kuryrnetworkpolicy,kuryrport
 KURYR_SG_DRIVER=policy
 KURYR_SUBNET_DRIVER=namespace
 
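With the policy handlers enabled, applying any NetworkPolicy should yield a matching Kuryr object; a sketch, where my-policy.yaml is a placeholder manifest and the kuryrnetworkpolicies resource name is inferred from the kuryrnetworkpolicy handler:

    kubectl apply -f my-policy.yaml
    # One KuryrNetworkPolicy per NetworkPolicy is expected.
    kubectl get kuryrnetworkpolicies -A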
6 changes: 4 additions & 2 deletions doc/source/installation/ports-pool.rst
@@ -169,12 +169,14 @@ subnet), the next handler needs to be enabled:
 .. code-block:: ini
 
 [kubernetes]
-enabled_handlers=vif,lb,lbaasspec,namespace,*kuryrnetwork*
+enabled_handlers=vif,endpoints,service,kuryrloadbalancer,namespace,
+*kuryrnetwork*
 
 
 This can be enabled at devstack deployment time by adding the following to
 local.conf:
 
 .. code-block:: bash
 
-KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,namespace,*kuryrnetwork*
+KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,namespace,
+*kuryrnetwork*
63 changes: 18 additions & 45 deletions doc/source/installation/services.rst
@@ -3,7 +3,7 @@ Kubernetes services networking
 ==============================
 
 Kuryr-Kubernetes default handler for handling Kubernetes `services`_ and
-endpoints uses the OpenStack Neutron `LBaaS API`_ in order to have each service
+endpoints uses the OpenStack `Octavia API`_ in order to have each service
 be implemented in the following way:
 
 * **Service**: It is translated to a single **LoadBalancer** and as many
@@ -21,59 +21,32 @@ be implemented in the following way:
 corner are implemented in plain Kubernetes networking (top-right) and in
 Kuryr's default configuration (bottom)
 
-If you are paying attention and are familiar with the `LBaaS API`_ you probably
-noticed that we have separate pools for each exposed port in a service. This is
-probably not optimal and we would probably benefit from keeping a single
-Neutron pool that lists each of the per port listeners. Since `LBaaS API`_
-doesn't support UDP load balancing, service exported UDP ports will be ignored.
+If you are paying attention and are familiar with the `Octavia API`_ you
+probably noticed that we have separate pools for each exposed port in a
+service. This is probably not optimal and we would probably benefit from
+keeping a single Neutron pool that lists each of the per port listeners.
 
-When installing you can decide to use the legacy Neutron HAProxy driver for
-LBaaSv2 or install and configure OpenStack Octavia, which as of Pike implements
-the whole API without need of the neutron-lbaas package.
+Kuryr-Kubernetes uses OpenStack Octavia as the load balancing solution for
+OpenStack and to provide connectivity to the Kubernetes Services.
 
-It is beyond the scope of this document to explain in detail the inner workings
-of these two possible Neutron LBaaSv2 backends thus, only a brief explanation
-will be offered on each.
-
-
-Legacy Neutron HAProxy agent
-----------------------------
-
-The requirements for running Kuryr with the legacy Neutron HAProxy agent are
-the following:
-
-* Keystone
-* Neutron
-* Neutron-lbaasv2 agent
-
-As you can see, the only addition from the minimal OpenStack deployment for
-Kuryr is the Neutron lbaasv2 agent.
-
-In order to use Neutron HAProxy as the Neutron LBaaSv2 implementation you
-should not only install the neutron-lbaas agent but also place this snippet in
-the *[service_providers]* section of neutron.conf in your network controller
-node:
-
-.. code-block:: ini
-
-NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
-
-When Kuryr sees a service and creates a load balancer, the HAProxy agent will
-spawn a HAProxy process. The HAProxy will then configure the LoadBalancer as
-listeners and pools are added. Thus you should take into consideration the
-memory requirements that arise from having one HAProxy process per Kubernetes
-Service.
+It is beyond the scope of this document to explain in detail the inner
+workings of OpenStack Octavia; thus, only a brief explanation will be offered.
+
 
 Octavia
 -------
 
-OpenStack Octavia is a new project that provides advanced Load Balancing by
-using pre-existing OpenStack services. The OpenStack requirements that Octavia
-adds over the Neutron HAProxy agent are:
+OpenStack Octavia is a project that provides advanced Load Balancing by using
+pre-existing OpenStack services. The requirements for running Kuryr with
+OpenStack Octavia are the following:
 
 * Nova
+* Neutron
 * Glance
 * Barbican (if TLS offloading functionality is enabled)
+* Keystone
+* Rabbit
+* MySQL
 
 You can find a good explanation about the involved steps to install Octavia in
 the `Octavia installation docs`_.
@@ -787,5 +760,5 @@ Troubleshooting
 
 
 .. _services: https://kubernetes.io/docs/concepts/services-networking/service/
-.. _LBaaS API: https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0
+.. _Octavia API: https://docs.openstack.org/api-ref/load-balancer/v2/
 .. _Octavia installation docs: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
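Given the mapping described above (one Octavia load balancer per Service, one listener and pool per exposed port), the resulting objects can be inspected with the Octavia CLI; a sketch assuming python-octaviaclient is installed and credentials are sourced:

    # One load balancer per Kubernetes Service...
    openstack loadbalancer list
    # ...and one listener plus one pool per exposed Service port.
    openstack loadbalancer listener list
    openstack loadbalancer pool list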
3 changes: 2 additions & 1 deletion kuryr_kubernetes/config.py
@@ -179,7 +179,8 @@
     cfg.ListOpt('enabled_handlers',
                 help=_("The comma-separated handlers that should be "
                        "registered for watching in the pipeline."),
-                default=['vif', 'lb', 'lbaasspec']),
+                default=['vif', 'endpoints', 'service', 'kuryrloadbalancer',
+                         'kuryrport']),
     cfg.BoolOpt('controller_ha',
                 help=_('Enable kuryr-controller active/passive HA. Only '
                        'supported in containerized deployments on Kubernetes '
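Deployments that never set enabled_handlers pick up the new default list above; a sketch for checking whether a node overrides it (assumes kuryr.conf lives at /etc/kuryr/kuryr.conf):

    # An explicit setting here overrides the new built-in default
    # vif,endpoints,service,kuryrloadbalancer,kuryrport.
    grep -n 'enabled_handlers' /etc/kuryr/kuryr.conf || echo 'using built-in default'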
2 changes: 1 addition & 1 deletion tox.ini
@@ -77,7 +77,7 @@ commands = oslo-config-generator --config-file=etc/oslo-config-generator/kuryr.c
 
 [testenv:releasenotes]
 basepython = python3
-deps = -r{toxinidir}/doc/requirements.txt
+deps = {[testenv:docs]deps}
 commands = sphinx-build -a -W -E -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
 
 [testenv:lower-constraints]
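With releasenotes reusing the docs environment's dependency list, the local build command is unchanged:

    # Build the release notes HTML using the shared docs deps.
    tox -e releasenotes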