Add support for subnet per namespace kuryr feature #8340

Merged (1 commit) on Jun 27, 2018
13 changes: 13 additions & 0 deletions playbooks/openstack/configuration.md
@@ -291,6 +291,19 @@ openshift_openstack_cluster_node_labels:
```


### Namespace Subnet driver

By default, Kuryr is configured with the default subnet driver, where all the
pods are deployed on the same Neutron subnet. Alternatively, you can enable the
namespace subnet driver, which allocates pods on different subnets depending on
the namespace they belong to. To enable this Kuryr subnet driver, uncomment:

```yaml
openshift_kuryr_subnet_driver: namespace
```

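With this driver enabled, each namespace gets its own Neutron network and
subnet, tracked by a cluster-scoped `KuryrNet` custom resource (the CRD added
by this change). A hypothetical instance might look like the sketch below; the
`spec` field names come from the unprovisioning task in this change, while the
name and UUIDs are placeholders for values the Kuryr controller creates at
runtime.

```yaml
# Hypothetical KuryrNet object for a namespace; all values are placeholders.
apiVersion: openstack.org/v1
kind: KuryrNet
metadata:
  name: ns-example-project
spec:
  netId: 00000000-0000-0000-0000-000000000001     # Neutron network for the namespace
  subnetId: 00000000-0000-0000-0000-000000000002  # subnet allocated from the pod subnet pool
  routerId: 00000000-0000-0000-0000-000000000003  # router the subnet is attached to
```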

### Deploying OpenShift Registry

Since we've disabled the OpenShift registry creation, you will have to create
4 changes: 4 additions & 0 deletions playbooks/openstack/inventory.py
@@ -205,6 +205,10 @@ def _get_kuryr_vars(cloud_client, data):
"""Returns a dictionary of Kuryr variables resulting of heat stacking"""
settings = {}
settings['kuryr_openstack_pod_subnet_id'] = data['pod_subnet']
if 'pod_subnet_pool' in data:
settings['kuryr_openstack_pod_subnet_pool_id'] = data[
'pod_subnet_pool']
settings['kuryr_openstack_pod_router_id'] = data['pod_router']
settings['kuryr_openstack_worker_nodes_subnet_id'] = data['vm_subnet']
settings['kuryr_openstack_service_subnet_id'] = data['service_subnet']
settings['kuryr_openstack_pod_sg_id'] = data['pod_access_sg_id']
3 changes: 3 additions & 0 deletions playbooks/openstack/sample-inventory/group_vars/all.yml
@@ -56,6 +56,9 @@ openshift_openstack_external_network_name: "public"
# information
# kuryr_openstack_public_subnet_id: uuid_of_my_fip_subnet

# # Kuryr can use a different subnet per namespace
# openshift_kuryr_subnet_driver: namespace

# If your VM images name the ethernet device differently than 'eth0',
# override this
#kuryr_cni_link_interface: eth0
3 changes: 3 additions & 0 deletions roles/kuryr/README.md
@@ -28,6 +28,8 @@ pods. This allows to have interconnectivity between pods and OpenStack VMs.
* ``kuryr_openstack_password=kuryr_pass``
* ``kuryr_openstack_pod_sg_id=pod_security_group_uuid``
* ``kuryr_openstack_pod_subnet_id=pod_subnet_uuid``
* ``kuryr_openstack_pod_subnet_pool_id=pod_subnet_pool_uuid``
* ``kuryr_openstack_pod_router_id=pod_router_uuid``
* ``kuryr_openstack_pod_service_id=service_subnet_uuid``
* ``kuryr_openstack_pod_project_id=pod_project_uuid``
* ``kuryr_openstack_worker_nodes_subnet_id=worker_nodes_subnet_uuid``
@@ -38,6 +40,7 @@ pods. This allows to have interconnectivity between pods and OpenStack VMs.
* ``kuryr_openstack_pool_update_frequency=20``
* ``openshift_kuryr_precreate_subports=5``
* ``openshift_kuryr_device_owner=compute:kuryr``
* ``openshift_kuryr_subnet_driver=default``

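For example, the namespace-driver variables above could be set in a group_vars
file like this hypothetical sketch (the UUIDs are placeholders; when using the
OpenStack playbooks, the two ID variables are normally filled in from the Heat
stack outputs by `inventory.py` rather than set by hand):

```yaml
# Illustrative settings enabling the namespace subnet driver.
openshift_kuryr_subnet_driver: namespace
kuryr_openstack_pod_subnet_pool_id: 00000000-0000-0000-0000-0000000000aa
kuryr_openstack_pod_router_id: 00000000-0000-0000-0000-0000000000bb
```
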
## Kuryr resources

14 changes: 14 additions & 0 deletions roles/kuryr/defaults/main.yaml
@@ -87,3 +87,17 @@ kuryr_clusterrole:
- services
- services/status
- routes
- apiGroups:
- apiextensions.k8s.io/v1beta1
attributeRestrictions: null
verbs:
- "*"
resources:
- customresourcedefinitions
- apiGroups:
- openstack.org
attributeRestrictions: null
verbs:
- "*"
resources:
- kuryrnets
17 changes: 17 additions & 0 deletions roles/kuryr/tasks/master.yaml
@@ -31,6 +31,13 @@
src: cni-daemonset.yaml.j2
dest: "{{ manifests_tmpdir.stdout }}/cni-daemonset.yaml"

- name: Create kuryrnet CRD manifest
become: yes
template:
src: kuryrnet.yaml.j2
dest: "{{ manifests_tmpdir.stdout }}/kuryrnet.yaml"
when: openshift_kuryr_subnet_driver|default("default") == 'namespace'

- name: Apply OpenShift node's ImageStreamTag manifest
oc_obj:
state: present
@@ -71,3 +78,13 @@
files:
- "{{ manifests_tmpdir.stdout }}/cni-daemonset.yaml"
run_once: true

- name: Apply kuryrnet CRD manifest
oc_obj:
state: present
kind: CustomResourceDefinition
name: "kuryrnets"
files:
- "{{ manifests_tmpdir.stdout }}/kuryrnet.yaml"
run_once: true
when: openshift_kuryr_subnet_driver|default("default") == 'namespace'
15 changes: 14 additions & 1 deletion roles/kuryr/templates/configmap.yaml.j2
@@ -216,7 +216,7 @@ data:
service_project_driver = default

# The driver to determine Neutron subnets for pod ports (string value)
pod_subnets_driver = default
pod_subnets_driver = {{ openshift_kuryr_subnet_driver|default('default') }}

# The driver to determine Neutron subnets for services (string value)
service_subnets_driver = default
@@ -233,6 +233,14 @@ data:
# The driver that manages VIFs pools for Kubernetes Pods (string value)
vif_pool_driver = {{ kuryr_openstack_pool_driver }}

# The comma-separated handlers that should be registered for watching
# in the pipeline. (list value)
{% if openshift_kuryr_subnet_driver|default('default') == 'namespace' %}
enabled_handlers = vif,lb,lbaasspec,namespace
{% else %}
enabled_handlers = vif,lb,lbaasspec
{% endif %}

[neutron]
# Configuration options for OpenStack Neutron

@@ -298,6 +306,11 @@ data:
external_svc_subnet = {{ kuryr_openstack_public_subnet_id }}
{% endif %}

{% if openshift_kuryr_subnet_driver|default('default') == 'namespace' %}
[namespace_subnet]
pod_subnet_pool = {{ kuryr_openstack_pod_subnet_pool_id }}
pod_router = {{ kuryr_openstack_pod_router_id }}
{% endif %}

[pod_vif_nested]

14 changes: 14 additions & 0 deletions roles/kuryr/templates/kuryrnet.yaml.j2
@@ -0,0 +1,14 @@
# More info about the template: https://docs.openstack.org/kuryr-kubernetes/latest/installation/containerized.html#generating-kuryr-resource-definitions-for-kubernetes

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kuryrnets.openstack.org
spec:
group: openstack.org
version: v1
scope: Cluster
names:
plural: kuryrnets
singular: kuryrnet
kind: KuryrNet
@@ -0,0 +1,98 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright 2018 Red Hat, Inc. and/or its affiliates
# and other contributors as indicated by the @author tags.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# pylint: disable=unused-wildcard-import,wildcard-import,unused-import,redefined-builtin

''' os_namespace_resources_deletion '''
import keystoneauth1.adapter

from ansible.module_utils.basic import AnsibleModule

try:
import shade
HAS_SHADE = True
except ImportError:
HAS_SHADE = False

DOCUMENTATION = '''
---
module: os_namespace_resources_deletion
short_description: Delete network resources associated to the namespace
description:
- Detach namespace's subnet from the router and delete the network
author:
- "Luis Tomas Bolivar <ltomasbo@redhat.com>"
'''

RETURN = '''
'''

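# Illustrative EXAMPLES string (not part of the original module). It mirrors
# the task added in roles/openshift_openstack/tasks/unprovision.yml, where the
# resource IDs come from the KuryrNet CRD of each namespace.
EXAMPLES = '''
- name: Detach a namespace subnet from the router and delete its network
  os_namespace_resources_deletion:
    router_id: "{{ item.spec.routerId }}"
    subnet_id: "{{ item.spec.subnetId }}"
    net_id: "{{ item.spec.netId }}"
'''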

def main():
''' Main module function '''
module = AnsibleModule(
argument_spec=dict(
router_id=dict(default=False, type='str'),
subnet_id=dict(default=False, type='str'),
net_id=dict(default=False, type='str'),
),
supports_check_mode=True,
)

if not HAS_SHADE:
module.fail_json(msg='shade is required for this module')

try:
cloud = shade.openstack_cloud()
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to connect to the cloud')

try:
adapter = keystoneauth1.adapter.Adapter(
session=cloud.keystone_session,
service_type=cloud.cloud_config.get_service_type('network'),
interface=cloud.cloud_config.get_interface('network'),
endpoint_override=cloud.cloud_config.get_endpoint('network'),
version=cloud.cloud_config.get_api_version('network'))
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to get an adapter to talk to the Neutron '
'API')

try:
        # Neutron's remove_router_interface action expects a JSON body of the
        # form {"subnet_id": "<uuid>"}; build it from the module parameter.
        subnet_info = {"subnet_id": module.params['subnet_id'].encode('ascii')}
        data = {'data': str(subnet_info).replace('\'', '\"')}
        adapter.put('/routers/' + module.params['router_id'] + '/remove_router_interface', **data)
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to detach subnet from the router')

try:
adapter.delete('/networks/' + module.params['net_id'])
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to delete Neutron Network associated to the namespace')

module.exit_json(
changed=True)


if __name__ == '__main__':
main()
22 changes: 22 additions & 0 deletions roles/openshift_openstack/tasks/unprovision.yml
@@ -25,6 +25,28 @@
when:
- openshift_use_kuryr|default(false) == true

- name: Get kuryr net CRDs
delegate_to: "{{ groups.oo_first_master.0 }}"
oc_obj:
kind: kuryrnets
state: list
all_namespaces: true
register: svc_output
ignore_errors: true

# NOTE(ltomasbo) This only works for nested deployments.
# Moreover the pods should not have FIPs attached
- name: Detach namespace subnets from router
os_namespace_resources_deletion:
router_id: "{{ item.spec.routerId }}"
subnet_id: "{{ item.spec.subnetId }}"
net_id: "{{ item.spec.netId }}"
with_items: "{{ svc_output.results.results[0]['items'] if 'results' in svc_output else [] }}"
when:
- openshift_use_kuryr|default(false) == true
- openshift_kuryr_subnet_driver|default("default") == 'namespace'
- item.metadata.annotations is defined

- name: Delete the Stack
ignore_errors: False
os_stack:
27 changes: 27 additions & 0 deletions roles/openshift_openstack/templates/heat_stack.yaml.j2
@@ -101,6 +101,16 @@ outputs:
description: ID of the subnet the services will be on
value: { get_resource: service_subnet }

pod_router:
description: ID of the router where the pod subnet will be connected
value: { get_resource: router }

{% if openshift_kuryr_subnet_driver|default('default') == 'namespace' %}
pod_subnet_pool:
description: ID of the subnet pool to use for the pod_subnets CIDRs
value: { get_resource: pod_subnet_pool }
{% endif %}

pod_access_sg_id:
description: Id of the security group for services to be able to reach pods
value: { get_resource: pod_access_sg }
@@ -194,11 +204,28 @@ resources:
params:
cluster_id: {{ openshift_openstack_full_dns_domain }}

{% if openshift_kuryr_subnet_driver|default('default') == 'namespace' %}
pod_subnet_pool:
type: OS::Neutron::SubnetPool
properties:
prefixes: [ {{ openshift_openstack_kuryr_pod_subnet_cidr }} ]
default_prefixlen: 24
name:
str_replace:
template: openshift-ansible-cluster_id-pod-subnet-pool
params:
cluster_id: {{ openshift_openstack_full_dns_domain }}
{% endif %}

pod_subnet:
type: OS::Neutron::Subnet
properties:
network_id: { get_resource: pod_net }
{% if openshift_kuryr_subnet_driver|default('default') == 'namespace' %}
subnetpool: { get_resource: pod_subnet_pool }
{% else %}
cidr: {{ openshift_openstack_kuryr_pod_subnet_cidr }}
{% endif %}
enable_dhcp: False
name:
str_replace: