..
 This work is licensed under a Creative Commons Attribution 3.0 Unported
 License.

 http://creativecommons.org/licenses/by/3.0/legalcode

====================================================================
Routed Provider Networks with Multiple Segments per Host for ML2/OVN
====================================================================

https://bugs.launchpad.net/neutron/+bug/2130453

As implemented by the ML2/OVN driver, routed provider networks are limited to
one segment per host [1]_ [2]_. This specification proposes to remove this
limitation, enabling multiple segments per host per network.

Problem Description
===================

Originally, all Neutron networks were single L2 broadcast domains. In such
networks, performance degrades as traffic grows as a result of an increasing
number of active VMs/ports. Routed provider networks were implemented to
overcome this limitation and enable users to create "large networks", where a
large number of VMs/ports can be connected without incurring the performance
penalty of large single L2 broadcast domains. As shown in the following
diagram, routed provider networks are constituted by several L2 segments
(broadcast domains) stitched together by a router into one L3 "large network".
In this arrangement, each individual segment handles only a portion of the
total network traffic, improving overall performance. The router is not part
of Neutron; it is provided by the underlying networking infrastructure [3]_.

.. figure:: ../../images/routed-networks.jpg
   :target: ../../_images/routed-networks.jpg

In routed provider networks each segment is connected to a group of hosts (a
Nova aggregate), as shown in the following diagram. In the optimal situation,
the network traffic generated by the workload running in the hosts doesn't
exceed the capacity of the corresponding segment.

.. figure:: ../../images/routed-networks-aggregates.jpg
   :target: ../../_images/routed-networks-aggregates.jpg

Recently, though, some operators have found this not to be the case. They can
comfortably accommodate in their hosts workloads that generate network traffic
exceeding the capacity of a single segment. In such situations, more than one
segment per host is necessary if deployers are to fully utilize their compute
resources while still obtaining the benefit of routed provider networks,
namely limiting the size of the individual L2 broadcast domains. This is shown
in the following diagram.

.. figure:: ../../images/routed-networks-aggregates-plus.jpg
   :target: ../../_images/routed-networks-aggregates-plus.jpg

Proposed Change
===============

This specification proposes to implement each segment in a routed provider
network as an OVN Logical Switch. Each of these logical switches will be
associated with a Logical Switch Port of type localnet, which will map the
segment to a physical network in the hosts connected to it. As a consequence,
for routed provider networks, the one-to-one mapping between a Neutron network
and an OVN Logical Switch will no longer hold. The following diagram
summarizes the proposed approach, using a Neutron network named `public` and
segments with `vlan-ids` 42, 43 and 44 as examples:

.. figure:: ../../images/routed-networks-n-segments-per-host-NBDB.jpg
   :target: ../../_images/routed-networks-n-segments-per-host-NBDB.jpg

At the chassis level, this design will be implemented as depicted in the
following diagram:

.. figure:: ../../images/routed-networks-n-segments-per-host.jpg
   :target: ../../_images/routed-networks-n-segments-per-host.jpg

In this diagram, the key features of the proposed design are:

#. The four bridges `br-ex*` represent the routed provider network named
   `public` depicted in the previous diagram with its three segments.
#. Each of the bridges `br-ex-42`, `br-ex-43` and `br-ex-44` represents one of
   the segments in the routed provider network.
#. For each segment, there will be a key-value pair in OVS's
   `external_ids:ovn-bridge-mappings`. In the `public` network example used in
   this specification, for the physnet identified as `physnet-43`, the
   corresponding mapping is `physnet-43:br-ex-43`. It is the presence of these
   mappings that triggers the `ovn-controller` to configure the patch ports on
   the `br-int` side of the segment bridges. It is important to note that
   `br-ex` is not part of `external_ids:ovn-bridge-mappings`; the
   `ovn-controller` doesn't interact with that bridge.
#. There are two alternatives for the creation of the `br-ex*` bridges and the
   configuration of the `br-ex` side of the patch ports: they can be created
   automatically by Neutron, or as a result of system administration
   activities. To choose between these alternatives, a large user of routed
   provider networks with the ML2/OVS driver was asked how frequently they add
   segments to their hosts; they responded that they add segments every month.
   Based on this information, this specification proposes to develop an agent
   that will create and configure these bridges. It is important to note that
   since `br-ex` provides connectivity to the underlay network, it will still
   be created by the cloud operator.

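As an illustration of the third point above, the per-segment mappings could be
registered with a command along the following lines. This is a sketch using
the example names from the diagram; the exact invocation is up to the
implementation::

.. code-block:: shell

   # Register one physnet:bridge pair per segment. ovn-controller watches
   # external_ids:ovn-bridge-mappings and creates the br-int side patch
   # port for each entry. Note that br-ex itself does not appear here.
   ovs-vsctl set Open_vSwitch . \
       external_ids:ovn-bridge-mappings="physnet-42:br-ex-42,physnet-43:br-ex-43,physnet-44:br-ex-44"
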
To implement the proposed new functionality, the following changes to the code
are expected:

#. When a segment is created for a routed provider network, an associated
   Logical Switch and Logical Switch Port of type `localnet` will need to be
   created in the OVN NBDB. Correspondingly, these OVN resources will have to
   be removed when the segment is deleted.
#. When a port is created for a routed provider network, the creation of the
   associated Logical Switch Port will have to be deferred until the moment
   when the segment to which it is bound is known. Correspondingly, when a
   port is deleted, its associated Logical Switch Port will have to be removed
   from the correct Logical Switch.
#. The OVN maintenance and DB synchronization periodic jobs must be updated to
   account for the changes described in the previous two points.
#. For routed provider networks, the logical switch associated with each
   segment will have its own localport to serve metadata to VMs. This means
   that the metadata agent will be updated to provision the datapath in each
   chassis for localports associated with segments.
#. A `neutron-ovn-agent` extension will be developed that will be responsible
   for creating and configuring the bridges that represent the routed provider
   network segments at the chassis level. When a segment is added, the
   extension will create the corresponding bridge and add it to the
   `ovn-bridge-mappings` attribute of the OVS Open_vSwitch table. Patch ports
   will be created between the new bridge and `br-int` and `br-ex`. The
   `br-ex` side of the patch port will be configured with the correct `tag`
   and `trunk` attributes. When a segment is removed, these steps will be
   undone.

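In `ovn-nbctl` terms, the NBDB resources created for the example segment with
`vlan-id` 43 (first point above) would look roughly as follows; the resource
names are illustrative, as the actual ones will follow the driver's naming
scheme::

.. code-block:: shell

   # One Logical Switch per segment, plus a localnet Logical Switch Port
   # that binds the segment to its physical network. The tag column
   # carries the segment's VLAN id.
   ovn-nbctl ls-add neutron-public-segment-43
   ovn-nbctl lsp-add neutron-public-segment-43 provnet-segment-43
   ovn-nbctl lsp-set-type provnet-segment-43 localnet
   ovn-nbctl lsp-set-addresses provnet-segment-43 unknown
   ovn-nbctl lsp-set-options provnet-segment-43 network_name=physnet-43
   ovn-nbctl set Logical_Switch_Port provnet-segment-43 tag=43
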
The implementation will be carried out in two phases. In the first phase, the
core functionality described in the first four points above will be
implemented. In the second phase, the `neutron-ovn-agent` extension to manage
the segment bridges at the chassis level will be developed. This approach will
allow us to start testing the core functionality as soon as possible while
giving us more time to develop the `neutron-ovn-agent` extension.

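The chassis-level wiring the extension would perform in the second phase, when
the example segment with `vlan-id` 43 is added, can be sketched as follows;
the patch port names are illustrative::

.. code-block:: shell

   # Create the segment bridge and register its mapping (in practice the
   # extension appends the pair to any existing mappings); ovn-controller
   # then patches the new bridge to br-int.
   ovs-vsctl --may-exist add-br br-ex-43
   ovs-vsctl set Open_vSwitch . \
       external_ids:ovn-bridge-mappings="physnet-43:br-ex-43"

   # Patch the segment bridge to br-ex, tagging the br-ex side so the
   # segment's traffic is carried on VLAN 43 towards the underlay.
   ovs-vsctl add-port br-ex-43 patch-43-to-ex \
       -- set Interface patch-43-to-ex type=patch options:peer=patch-ex-to-43
   ovs-vsctl add-port br-ex patch-ex-to-43 tag=43 \
       -- set Interface patch-ex-to-43 type=patch options:peer=patch-43-to-ex
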
References
==========

.. [1] https://bugs.launchpad.net/neutron/+bug/1865889
.. [2] https://docs.openstack.org/neutron/latest/admin/ovn/routed_provider_networks.html
.. [3] https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html