
Static routes from frontend to "isolated" workload networks #15

Closed
souhradab opened this issue Mar 1, 2021 · 9 comments
@souhradab

version: HA Proxy Load Balancer API v0.1.10

I have a 3-NIC HAProxy setup:
NIC1: MGMT (the default gateway is configured here)
NIC2: Primary Workload
NIC3: Frontend

I have a peculiar management network setup. My environment is configured such that the MGMT network, where my ESXi hosts, vCenter, Supervisor MGMT, and HAProxy MGMT interfaces all reside, has no route to the Workload networks. It's essentially an air-gapped management network.

My Tanzu cluster setup contains a Primary Workload network and two additional "isolated" Workload networks. Traffic that enters the HAProxy Frontend destined for backends on the Primary Workload network reaches them fine, because HAProxy NIC2 is directly connected.

However, the issue I run into is that when traffic enters the HAProxy Frontend and is forwarded to backends on the isolated Workload networks, it is sent to the default gateway on the management interface, and that network cannot reach the secondary workload networks. I thought that by adding entries to route-tables.cfg for the isolated workload networks I could configure static routes for the Frontend network, but either it does not work the way I expected, or I am getting the syntax wrong.

In the end I was able to work around the issue by adding static routes to the Frontend network unit file (/etc/systemd/network/10-frontend.network).
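
For anyone hitting the same thing: in systemd-networkd, static routes take the form of [Route] sections in the unit file. The addresses below are placeholders, not my real CIDRs:

```
# appended to /etc/systemd/network/10-frontend.network
# (placeholder addresses; substitute the real isolated workload CIDRs
# and a next hop on this segment that can actually reach them)
[Route]
Destination=192.168.30.0/24
Gateway=192.168.100.1

[Route]
Destination=192.168.40.0/24
Gateway=192.168.100.1
```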

@brakthehack
Collaborator

Hey Bill, thanks for filing the issue.

It's essentially an air-gapped management network

This by itself shouldn't be an issue.

two additional "isolated" Workload networks.

Can you elaborate a bit on what you mean by "isolated" here? Is it that these networks are not routable to each other, but they are routable to the primary WL network? We only support a configuration in which workload networks are L3-routable to each other. Apologies if that is the case here and I'm simply misinterpreting your comment.

Do you mind sharing which routes you added as a workaround, and why they were sufficient for your needs?

Could you also clarify what you are asking for: is it support for this network topology in general, or automation around the route configuration?

@souhradab
Author

The workload networks are routable to each other.

Perhaps this will help:
[diagram: tanzu-vsphere-networking-overview]

@souhradab
Author

It's more or less the "Topology with Multiple Isolated Workload Networks" section of the VMware vSphere with Tanzu networking documentation:

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-C86B9028-2701-40FE-BA05-519486E010F4.html

And the HAProxy 3-NIC configuration.

@brakthehack
Collaborator

brakthehack commented Mar 6, 2021

Thank you for the excellent diagram. It is very helpful in understanding what you're trying to accomplish.

We have two options.

  1. Make the workload network the default gateway instead of the management network. This means anything routable only via the management network would no longer be accessible. At present that is expected to be just vCenter, so it isn't a problem today, but it could become one if that changes in the future. (A sketch of what this looks like follows the list.)
  2. Add logic to issue a NIC per workload network. This makes it clear how workload networks are routed and does not change existing behavior. However, it comes with a downside: the user must enter every network into the configuration screen, and with a large number of workload networks they could make mistakes. Terminology would also likely have to change, which could cause some confusion for users.
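
For illustration only, option 1 roughly amounts to moving the Gateway= setting from the management unit to the workload one. File names, interface names, and addresses below are hypothetical:

```
# /etc/systemd/network/10-management.network (hypothetical)
[Match]
Name=management

[Network]
Address=10.0.0.2/24
# no Gateway= here anymore

# /etc/systemd/network/20-workload.network (hypothetical)
[Match]
Name=workload

[Network]
Address=192.168.20.2/24
# this Gateway= becomes the appliance's default route
Gateway=192.168.20.1
```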

@souhradab I want to get your opinion on whether you prefer one option or the other. Thanks!

@brakthehack
Collaborator

@daniel-corbett we would welcome your advice if you're interested as well.

@souhradab
Author

Regarding (1), without limiting your options, maybe just giving the option for custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.

Regarding (2), a NIC per workload network will likely run into virtual NIC limits (vSphere caps a VM at 10 vNICs) in larger environments.

@brakthehack
Collaborator

Regarding (1), without limiting your options, maybe just giving the option for custom (static) routes would help solve this for people with special setups like mine. You could even still give the option of setting any interface as the default GW.

This is true, but we also want to keep things simple. By default, workload networks must be routable to each other, which means defaulting to the workload gateway would be good enough for the majority of users. Allowing users to add arbitrary routes by default may also encourage complex configurations, which would increase the cost of debugging for both users and VMware/HAProxy.

For more complex configurations, users are free to edit the route configuration as they wish once the appliance is stood up; at that point they own the appliance. It's unclear whether adding passthrough code for routes, at the cost of an even more complex configuration, is a net benefit over a sensible default gateway.
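
For example, an appliance owner can stage a route by hand before persisting it; the interface name and addresses here are hypothetical:

```sh
# hypothetical: reach an isolated workload CIDR via the primary
# workload gateway, out the workload-facing interface
ip route add 192.168.30.0/24 via 192.168.20.1 dev workload

# confirm the kernel accepted the route
ip route show 192.168.30.0/24

# to persist it, add an equivalent [Route] section to the matching
# /etc/systemd/network/*.network file
```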

I agree with your point on NIC limits. If we expect some users to have a large number of networks, this is probably not a viable option if we want to support that user segment.

@brakthehack
Collaborator

After more thought, making workload the default gateway may make it harder for some users to manage the appliance over something like SSH. Routes, as you suggested, might be the way to go here.

@souhradab
Author

Maybe some sort of menu item: default GW on the management or the workload network? Then, based on that choice, static routes could be added, if necessary, on the interface that is NOT assigned the default GW.

brakthehack added a commit to brakthehack/vmware-haproxy that referenced this issue Mar 11, 2021
Previously, HAProxy would not route to additional workload networks
without user customization, because we did not give users the option
to provide workload network interfaces via the UI.

This change implements the ability for the user to provide the
workload networks they wish to route to. Since we expect workload
networks to be routable to each other, we can program route rules
for user-provided CIDR ranges so that those routes exit via the
workload default gateway. These routes are configurable via a new
file located at /etc/vmware/workload-networks.cfg. This file is
written once, just before cloud-init performs the bootstrapping.

This change also fixes a few bugs in route table configuration.

Closes haproxytech#15, haproxytech#11, haproxytech#10
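
The commit message doesn't show the format of /etc/vmware/workload-networks.cfg; purely as a sketch, assuming one routable CIDR per line, the file might look like:

```
# /etc/vmware/workload-networks.cfg
# (format assumed for illustration: one workload CIDR per line,
#  each routed via the workload default gateway)
192.168.30.0/24
192.168.40.0/24
```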
brakthehack added three more commits to brakthehack/vmware-haproxy that referenced this issue, on Mar 11, Mar 12, and Apr 22, 2021, each with the same message as above.