Custom VNet for AKS cluster #27

Closed

shayansarkar opened this issue Nov 6, 2017 · 88 comments

@shayansarkar

Is there a way to specify a custom VNet for a cluster created via AKS? When this cluster is created, it is in a VNet where the address space is 10.0.0.0/8. This basically means that if I have any other VNets in the 10. space, they cannot be peered with this cluster. With the ACS service, we were able to create custom VNet ranges with acs-engine if necessary.

@rolanddb

rolanddb commented Nov 9, 2017

+1,
Agree that this is basic & required functionality.

@sanglt

sanglt commented Nov 13, 2017

+1,

We need this functionality too.

@relferreira

We need this functionality to start using AKS in our company!

@slack
Contributor

slack commented Nov 14, 2017

We are tracking custom VNET support as part of our GA goals.

@slack slack self-assigned this Nov 14, 2017
@slack slack added the roadmap label Nov 14, 2017
@anssrav

anssrav commented Dec 22, 2017

Hi @slack,
Is there any update on adding custom VNet support to AKS? We are holding our testing for this upcoming feature.

@slack
Contributor

slack commented Dec 22, 2017

Shooting for existing VNET support in Q1 2018.

@jmalobicky

With custom vnet, will a WAF resource be able to be implemented for app filtering?

@slack
Contributor

slack commented Dec 22, 2017

@jmalobicky you should be able to deploy a WAF today, using the Kubernetes IP or FQDN as the downstream target.

With custom VNET support for AKS, you could provision Application Gateway WAF into a dedicated subnet within the VNET and WAF to AKS node routing should work.
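
For anyone exploring that route once custom VNET lands, a rough az CLI sketch of the idea (all names, the address prefix and the backend target are hypothetical placeholders, not an AKS-provided configuration):

# Dedicated subnet for the Application Gateway inside the same VNet as the cluster nodes.
az network vnet subnet create --resource-group my-rg --vnet-name my-aks-vnet \
  --name appgw-subnet --address-prefix 10.1.2.0/27

# WAF-sku Application Gateway pointing at the Kubernetes service IP or FQDN as its backend.
az network application-gateway create --resource-group my-rg --name aks-waf \
  --sku WAF_Medium --capacity 2 \
  --vnet-name my-aks-vnet --subnet appgw-subnet \
  --frontend-port 80 --http-settings-port 80 --http-settings-protocol Http \
  --servers <aks-service-ip-or-fqdn>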

@ejsmith

ejsmith commented Dec 22, 2017

Are there any issues with deploying an AKS cluster and then adding other VMs to the VNET that AKS created?

@slack
Contributor

slack commented Jan 4, 2018

@ejsmith you can, as long as the VMs are in a separate subnet and availability set. Do note that this is not a supported configuration, and there are no guarantees that scale/upgrade operations won't impact VMs provisioned out of band.

@cxtawayne

This is also gating our use of AKS. We use 10.x.x.x space internally and the inability to customize the range (to another private space/address range) or reduce the network footprint (use of /16's) prevents us at the moment.

@cyrilbkr

cyrilbkr commented Feb 8, 2018

Any news on this?

We need this feature ASAP, please.

@slack
Contributor

slack commented Feb 8, 2018

No updates yet. Still a top priority for the team, but we are currently focused on availability and stability fixes behind the scenes.

@glloyd2010f

This is also gating our use of AKS. We use 10.x.x.x space internally and the inability to customize the range (to another private space/address range) or reduce the network footprint (use of /16's) prevents us at the moment. @cxtawayne

Excellent point. We have the same issue which is blocking us from doing a full POC on this. Having it switched to a /16 would be immensely helpful. Would reducing the size of the vNet affect performance that much?

@tovern

tovern commented Feb 9, 2018

+1
The lack of this functionality combined with the current instability of the platform is preventing us from adopting AKS.

@slack
Contributor

slack commented Feb 9, 2018

Appreciate the feedback (and your patience). The work required for custom VNET will also allow customers to specify subnets/ranges for use in the cluster. Will allow for much smaller address ranges, etc.

@rhollins

rhollins commented Feb 10, 2018

Same here. We need apps on Kubernetes to communicate over site-to-site VPN with on-prem servers, and also with a PaaS solution that sits inside a VNet, so until this functionality is enabled we can't use it.

@hibri

hibri commented Feb 13, 2018

Spotted the vnetSubnetID parameter appearing in the ARM reference last week.
https://docs.microsoft.com/en-us/azure/templates/microsoft.containerservice/managedclusters
Has this been released? I was able to create an AKS cluster in a custom Vnet with an ARM template.

The API endpoint is still public however.

@hafizullah

hafizullah commented Feb 13, 2018

This is starting to be a major blocker for us as well. Any timeline for this feature to be delivered for AKS?

@Dienemy

Dienemy commented Feb 15, 2018

For us it is also a blocker; we cannot use AKS without this feature. Is there any ETA?

@hibri

hibri commented Feb 21, 2018

Would the API server address be private when running in a custom VNet, or would we still have to connect to it externally?

@slack
Contributor

slack commented Feb 21, 2018

@hibri the API server address will continue to be public, with a follow-on feature moving the API server into the VNET.

@hibri

hibri commented Feb 21, 2018

@slack Thanks. Do you have an ETA for the follow-on?

@mttocs

mttocs commented Feb 21, 2018

@hibri We tried sending over the vnetSubnetId to give this a try and it still insists on creating the VNET and 10.0.0.0/8 subnet (maybe 'masterProfile' is not exposed like it is in ACS?).

Do you have an example of your Microsoft.ContainerService/managedClusters resource section of your ARM template you would be willing to share or have any tips?

@hibri

hibri commented Feb 22, 2018

@mttocs AKS still creates the Vnet, but no nodes are attached to it. The nodes are attached to the Vnet/subnet I've specified. Not sure why AKS still needs to create an empty VNet.
The nodes will be in the dynamically created resource group managed by AKS, but the NICs for those nodes will be attached to the custom subnet.

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [
    {
      "name": "aardvark-3",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2017-08-31",
      "location": "westeurope",
      "tags": {},
      "properties": {
        "dnsPrefix": "aardvark",
        "kubernetesVersion": "1.8.7",
        "agentPoolProfiles": [
          {
            "name": "agentpool01",
            "count": 2,
            "vmSize": "Standard_D2_v2",
            "vnetSubnetID": "/subscriptions/<>/resourceGroups/<>/providers/Microsoft.Network/virtualNetworks/aardvark-net/subnets/default"
          }
        ],
        "linuxProfile": {
          "adminUsername": "<>",
          "ssh": {
            "publicKeys": [
              {
                "keyData": "<>"
              }
            ]
          }
        },
        "servicePrincipalProfile": {
          "clientId": "<>",
          "secret": "<>"
        }
      }
    }
  ],
  "outputs": {}
}
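
For completeness, deploying a template like this is just a normal resource group deployment; something along these lines should work (resource group and file name are placeholders):

# Deploy the managedClusters template above into an existing resource group
# (the custom VNet/subnet must already exist).
az group deployment create --resource-group my-rg --template-file aks-custom-vnet.json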

@rhollins

rhollins commented Apr 16, 2018

I've spent some time and finally managed to reach pods as well as Kubernetes services from on-premises through site-to-site VPN. Here is a description with example IPs; you can of course use different ones.

Components:

on-premise network: 192.168.0.18/24

Azure vnet: 172.16.1.0/25

Azure vnet default-subnet: 172.16.1.0/26

Azure vnet gateway subnet: 172.16.1.96/27

aks-vnet: 10.0.0.0/8 - created automatically when you deploy AKS

aks-subnet: 10.240.0.0/16 - created automatically when you deploy AKS, this is where internal load balancers are created

pods network inside kubernetes: 10.244.0.0/24 - created automatically when you deploy AKS

aks node (aks-pool): 172.16.1.20 - created automatically, this is the Linux VM where the Kubernetes cluster is running

Steps:

Create a site-to-site connection from on-premises to the Azure vnet: 192.168.0.18/24 <-> 172.16.1.0/25.
Deploy the AKS cluster (with one node) into Azure using the ARM template provided by @jasonchester on this thread; in the template, reference "Azure vnet default-subnet", which is the subnet in which the aks node VM will be created.

To enable traffic from on-premises:
On the on-premises VPN device, add forwarding for the following prefixes, so that anyone trying to reach an internal load balancer address (which is also the Kubernetes service external-ip) inside aks-subnet, or even a pod inside the pod network, is directed to Azure:
10.240.0.0/16 (aks-subnet)
10.244.0.0/24 (pods network)
Create a Route Table in Azure with the following rules (an az CLI sketch follows below):
1:
Address-Prefix: 10.240.0.0/16
Next hop: 172.16.1.20
2:
Address-Prefix: 10.244.0.0/24
Next hop: 172.16.1.20
Then assign this route table to "Azure vnet gateway subnet" and also to "Azure vnet default-subnet".
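
For reference, the route table step expressed with the az CLI (resource group, route table and subnet names are placeholders; the prefixes and next hop follow the example addresses above):

# Route table holding the routes back into the cluster.
az network route-table create --resource-group my-rg --name aks-routes --location westeurope

# Send traffic for the AKS internal load balancer subnet via the AKS node (172.16.1.20).
az network route-table route create --resource-group my-rg --route-table-name aks-routes \
  --name aks-subnet --address-prefix 10.240.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.1.20

# Send traffic for the pod network via the same node.
az network route-table route create --resource-group my-rg --route-table-name aks-routes \
  --name aks-pods --address-prefix 10.244.0.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.1.20

# Attach the route table to the gateway subnet and the default subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name GatewaySubnet --route-table aks-routes
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name default-subnet --route-table aks-routes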

So if you now deploy an app to Kubernetes that uses an internal load balancer (https://docs.microsoft.com/en-us/azure/aks/internal-lb),
let's say this app's Kubernetes service external load balancer IP is 10.240.0.5 (which is really an Azure internal load balancer frontend IP configuration) and the actual pod address is 10.244.0.56.

It might not be entirely correct as I'm still new to AKS, but if you try to reach this app from on-premises, this is probably the flow:

on-prem network: 192.168.0.18/24 -> Azure vnet gateway subnet -> Azure vnet -> Azure vnet default-subnet -> aks node -> aks-vnet -> aks-subnet -> internal load balancer 10.240.0.5 -> aks node -> pod 10.244.0.56

@benbuckland

@idelix How did you get the internal load balancer working?

I modified the deployment.yaml adding the following

  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "Acme.Azure.ContainerServices.Dev_QA-VNet/default"

But the subnet annotation appears to be ignored and I get the error
Error creating load balancer (will retry): Failed to ensure load balancer for service acme-ms-dev/service1: ensure(acme-ms-dev/service1): lb(kubernetes-internal) - failed to get subnet: aks-vnet-12345678/aks-subnet

Any help appreciated

@jasonchester

jasonchester commented Apr 17, 2018

@benbuckland We haven't had any luck with the internal load balancer annotations in Kubernetes, so we have resorted to exposing an internal ingress controller with a NodePort (32443) service in Kubernetes.

In Azure we are then manually creating a Basic internal load balancer and setting up a load balancing rule from 443 to the NodePort on 32443. There is some manual administration to adjust the load balancer config when the cluster is scaled, but we are able to minimize the effort by exposing an ingress controller which routes all the HTTP traffic to internal Kubernetes services.

Once @slack and the rest of the AKS team offer official internal LB support, switching to Kubernetes service annotations should be fairly seamless.
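
Roughly, the manual Azure side looks like the following az CLI sketch (resource group, VNet/subnet names and the frontend IP are placeholders, not our actual setup):

# Basic internal load balancer with a static private frontend IP in the node subnet.
az network lb create --resource-group my-rg --name aks-internal-lb --sku Basic \
  --vnet-name my-vnet --subnet default-subnet \
  --frontend-ip-name ingress-frontend --private-ip-address 172.16.1.30 \
  --backend-pool-name aks-nodes

# Probe plus rule mapping 443 on the LB to the ingress controller NodePort 32443.
az network lb probe create --resource-group my-rg --lb-name aks-internal-lb \
  --name ingress-probe --protocol Tcp --port 32443
az network lb rule create --resource-group my-rg --lb-name aks-internal-lb \
  --name https --protocol Tcp --frontend-port 443 --backend-port 32443 \
  --frontend-ip-name ingress-frontend --backend-pool-name aks-nodes \
  --probe-name ingress-probe

# Each agent node NIC (in the MC_* resource group) then has to be added to the
# 'aks-nodes' backend pool by hand, and re-added whenever the cluster is scaled.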

@rhollins

rhollins commented Apr 17, 2018

Hi @benbuckland, in my case the internal load balancer was created automatically, no manual work needed, so this functionality already works. I just created the AKS cluster using the template provided by @jasonchester, and within that ARM template I referenced the Azure vnet default-subnet 172.16.1.0/26 (from my previous post); this way the aks node got an IP within my vnet. In this ARM template I also used Kubernetes version 1.9.2 and created my cluster in West Europe.

If you check my yaml file, I don't reference a subnet in there at all; AKS will use its own subnet (10.240.0.0/16) for the internal load balancer.

Then I used the following yaml file to perform the deployment with kubectl:
kubectl apply -f .\deployment.yaml

This automatically creates an Azure load balancer named "kubernetes-internal" inside the "MC_<...>-<..>_westeurope" resource group, with a frontend IP configuration that is really the external IP of the Kubernetes service.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gs-spring-boot-docker
  labels:
    run: gs-spring-boot-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      run: gs-spring-boot-docker
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: gs-spring-boot-docker
    spec:
      containers:
      - image: <PUT ACR NAME HERE>/gs-spring-boot-docker:latest
        name: gs-spring-boot-docker
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180        
        ports:
        - containerPort: 8080
      imagePullSecrets:
        - name: mysecrets
---
apiVersion: v1
kind: Service
metadata:
  name: gs-spring-boot-docker
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"    
  labels:
    expose: "true"
spec:
  selector:
    run: gs-spring-boot-docker
  ports:
  - port: 8000
    targetPort: 8080
  type: LoadBalancer
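
Once the manifest is applied, the private IP the internal load balancer hands out shows up on the service itself, for example:

# Wait for EXTERNAL-IP to change from <pending> to a private 10.240.x.x address.
kubectl get service gs-spring-boot-docker --watch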

@valorl

valorl commented Apr 17, 2018

Hey @idelix, thanks for the detailed write-up. I'm currently in the process of trying to set it up using classic deployment without the custom ARM template and rather peer the created network to our VPN-connected network. If that doesn't work and/or turns out to be too complicated, I will definitely try your steps.

Have you noticed anything that is not supported with this setup? Is it possible to az aks scale it for example?

@rhollins

Hi @valorl, glad it was helpful. So far I'm just starting to explore options with AKS; I only tested it with Helm and it was fine: you can deploy, upgrade and roll back.

@jalberto

@idelix Helm is agnostic to the cloud provider, so it should work as usual.
@valeriofrediani as far as I know these modifications have 2 issues:

  • no more SLA for your cluster
  • aks commands that modify the cluster will fail and can leave your cluster in an inconsistent state

All the time invested by @idelix and other users here is a clear signal that this is a very necessary feature, and we still have no word from the AKS team about its progress or roadmap. A bit of transparency would be great.

@valorl

valorl commented Apr 17, 2018

@idelix Regarding your write-up, I'm assuming you have set your cluster's node count to 1. Is that correct?

If that's the case then (as you described) you only have 10.244.0.0/24 for the pods network, but for each additional node another one of these will probably be created (e.g. I set the node count to 3 and my route table consists of 10.244.0.0/24, 10.244.1.0/24 and 10.244.2.0/24, each of them configured to point to one of the nodes; a sketch follows below).

Therefore, it's probably better to configure your on-premises device for the whole 10.244.0.0/16 range instead of just this specific one, if you are planning on trying scaling.
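
If you are maintaining the route table by hand as in the earlier write-up, that means one route per node's pod CIDR, e.g. (node IPs here are made-up examples):

# One route per node pod CIDR; the next hop is that node's private IP.
az network route-table route create --resource-group my-rg --route-table-name aks-routes \
  --name pods-node-0 --address-prefix 10.244.0.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.1.20
az network route-table route create --resource-group my-rg --route-table-name aks-routes \
  --name pods-node-1 --address-prefix 10.244.1.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.1.21
az network route-table route create --resource-group my-rg --route-table-name aks-routes \
  --name pods-node-2 --address-prefix 10.244.2.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.1.22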

@valorl

valorl commented Apr 20, 2018

@jasonchester Hi Jason, was there anything special you had to do when setting up the ILB manually? I can't manage to access a service that is exposed as NodePort. If my service is supposed to be exposed on port 80 for example, I cannot even access it if I ssh into the node and do curl localhost:80.

@hafizullah

I can now see the option to specify a subnet ID using the Terraform resource, but AKS is apparently only available in certain regions; the environment I am working with is in uksouth and this is not supported.
* azurerm_kubernetes_cluster.test: containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="LocationNotAvailableForResourceType" Message="The provided location 'uksouth' is not available for resource type 'Microsoft.ContainerService/managedClusters'. List of available regions for the resource type is 'eastus,westeurope,centralus,canadacentral,canadaeast'."

The joys of Azure!

@jasonchester

@valorl 80 is not a valid node port.

If you set the type field to "NodePort", the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service.

Here's an example of a service we are exposing over an internally configured LB.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2018-04-10T14:19:50Z'
  labels:
    app: kong
    chart: kong-0.2.3
    heritage: Tiller
    release: api-gateway
  name: api-gateway-kong-proxy
  namespace: default
  resourceVersion: '11099979'
  selfLink: /api/v1/namespaces/default/services/api-gateway-kong-proxy
  uid: 3af03f2d-3cca-11e8-a913-0a58ac1f10cf
spec:
  clusterIP: 10.0.227.124
  externalTrafficPolicy: Cluster
  ports:
    - name: kong-proxy
      nodePort: 32080
      port: 80
      protocol: TCP
      targetPort: 80
    - name: kong-proxy-ssl
      nodePort: 32443
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: kong
    release: api-gateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
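
With a NodePort service like this, the workload is reachable on each node's private IP at the allocated port, which is what the manual load balancer rule targets. A quick check from inside the VNet (the node IP is a placeholder):

# Hit the HTTP NodePort directly on an agent node.
curl http://172.16.1.20:32080/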

@nsbradley88

nsbradley88 commented May 5, 2018

This was enabled today in preview, it seems. However, a couple of caveats I've noticed so far: there are no fields for adminUsername or adminPassword/sshRsaKey, and even when I pulled the deployment JSON files and added those in, it returned "Property change not allowed". I've also been struggling with another issue: once a route table is applied to the custom subnet, the proxy session you kick off from the az CLI to connect to the dashboard fails to connect.
[screenshot attached]

@bramvdklinkenberg

bramvdklinkenberg commented May 7, 2018

I tried deploying a new cluster twice to test the custom vnet but the deployment failed twice.

"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details." "The resource operation completed with terminal provisioning state 'Failed'." "ControlPlaneAddOnsNotReady, Pods not in Running status"

@cxtawayne

cxtawayne commented May 7, 2018

I tried this through the exposed deployment variable over the weekend. It appears the internal load balancer is created on the MC VNET (instead of the custom VNET provided during cluster deployment) when deploying an internal service. Because of this, the orchestration can't provision the internal service: the agents and the internal LB are on different VNETs.

Error:

Network interface 
/subscriptions/9..8/resourceGroups/MC_..._centralus/providers/Microsoft.Network/networkInterfaces/aks-agentpool-3...1-nic-3 
uses internal load balancer 
/subscriptions/9...8/resourceGroups/MC_..._centralus/providers/Microsoft.Network/loadBalancers/kubernetes-internal 
but does not use the same VNET 
(/subscriptions/9...8/resourceGroups/MC_..._CENTRALUS/providers/Microsoft.Network/virtualNetworks/AKS-VNET-3...1)
 as the load balancer.

Any chance anyone found a workaround? The best I can come up with now is to use an ingress controller instead.

@msdotnetclr

msdotnetclr commented May 11, 2018

After seeing custom VNET support in Azure Portal, I created a new AKS cluster with these parameters (I am only listing those relevant to this message):

  • Resource Group: RG_AKS_CUSTOMVNET_TEST
  • Cluster name: SSS-AKS-CUSTOMVNET-TEST
  • Custom VNET name: AKS_CUSTOMVNET_TEST-vnet
  • Custom VNET address space: 10.201.102.0/23
  • Custom VNET subnet address range: 10.201.102.0/24 <== I initially used something like 10.201.102.0/25, which got the cluster stuck in "Failed" status after provisioning. Microsoft support found it was an address range issue, and told me I needed to change it to /24. So I did and the problem went away.
  • Kubernetes service address range: 10.244.16.0/24
  • Kubernetes DNS service IP address: 10.244.16.127
  • Docker bridge address: 172.17.0.1/16

The cluster was created successfully. In resource group RG_AKS_CUSTOMVNET_TEST, I see two items:

  • AKS_CUSTOMVNET_TEST-vnet (Virtual Network)
  • SSS-AKS-CUSTOMVNET-TEST (Kubernetes service)

And I see a new resource group MC_RG_AKS_CUSTOMVNET_TEST_SSS-AKS-CUSTOMVNET-TEST_eastus. Within the MC_* RG, there are the worker node VMs, disks, NICs, NSGs, and a virtual network aks-vnet-#####.

This aks-vnet-##### confuses me.

It has an address space of 10.0.0.0/8, no connected services, and a subnet "aks-subnet" with address range 10.240.0.0/16. The aks-subnet has the NSG attached. What is it for?

@msdotnetclr

msdotnetclr commented May 11, 2018

Another problem arises. I attempted to run the azure-vote example from https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough. It was able to do everything except ensuring the load balancer. The error message indicates a permission issue:

Error creating load balancer (will retry): failed to ensure load balancer for service default/azure-vote-front: ensure(default/azure-vote-front): lb(kubernetes) - failed to ensure host in pool: "network.InterfacesClient#CreateOrUpdate: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="LinkedAuthorizationFailed" Message="The client '(masked)-7f67-4deb-af78-266823639750' with object id '(masked)-7f67-4deb-af78-266823639750' has permission to perform action 'Microsoft.Network/networkInterfaces/write' on scope '/subscriptions/(masked)-6fcc-447d-9952-acd4aeebf764/resourceGroups/MC_RG_AKS_CUSTOMVNET_TEST_SSS-AKS-CUSTOMVNET-TEST_eastus/providers/Microsoft.Network/networkInterfaces/aks-agentpool-(masked)-nic-0'; however, it does not have permission to perform action 'Microsoft.Network/virtualNetworks/subnets/join/action' on the linked scope(s) '/subscriptions/(masked)-6fcc-447d-9952-acd4aeebf764/resourceGroups/RG_AKS_CUSTOMVNET_TEST/providers/Microsoft.Network/virtualNetworks/AKS_CUSTOMVNET_TEST-vnet/subnets/default'.""

@nphmuller

@msdotnetclr
Ran into this myself. See #357 for the workaround.

@valorl

valorl commented May 11, 2018

@msdotnetclr I think the feature is still not entirely finished. The aks-vnet is automatically created with the 10.240.0.0/16 pod network. The custom user-selected subnet seems to be only for the nodes. The error you are running into is probably because the service principal you created the cluster with does not have enough permissions on the VNet in which Kubernetes is trying to create the load balancer.
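
For anyone hitting the same LinkedAuthorizationFailed error: the usual workaround (which I believe is what #357 describes) is to grant the cluster's service principal network permissions on the custom VNet, roughly:

# Give the AKS service principal rights on the custom VNet so it can join subnets and
# manage load balancers there. Names follow the example above; adjust to your own.
VNET_ID=$(az network vnet show --resource-group RG_AKS_CUSTOMVNET_TEST \
  --name AKS_CUSTOMVNET_TEST-vnet --query id --output tsv)
az role assignment create --assignee <sp-client-id> \
  --role "Network Contributor" --scope "$VNET_ID"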

@cdhunt

cdhunt commented May 11, 2018

Yes, adding the SPN to the vNet helps, but it still gets stuck for me with the failed to get subnet: aks-vnet-12345678/aks-subnet error @benbuckland mentioned.

@msdotnetclr

msdotnetclr commented May 11, 2018

@nphmuller thanks, the workaround indeed did the trick; the sample now works as expected. It'd be great not having to do this manually; hopefully this will be fixed before GA.

A new issue though, when I run

$ az aks scale --node-count 3 --resource-group RG_AKS_CUSTOMVNET_TEST --name SSS-AKS-CUSTOMVNET-TEST

It returns error

Operation failed with status: 'Bad Request'. Details: Changing property 'networkProfile.networkPlugin' is not allowed.

I updated Azure CLI to the latest version (2.0.32-1~wheezy) and tried again, still the same. Interestingly enough, the Scale function in Azure Portal worked fine, I was able to scale up/down without any issue.

@msdotnetclr

msdotnetclr commented May 11, 2018

@valorl thanks but I don't understand why a VNET subnet is needed for the 10.240.0.0/16 pod network.

@slack
Contributor

slack commented May 11, 2018

@valorl There is a "vestigial" VNet due to a bug; it will no longer show up in the MC_ resource group.

We do have a documentation gap regarding the required SP permissions to update the VNet; that will come along shortly (and is documented at #353).

@msdotnetclr Today you can only create AKS + existing VNET using ARM template deployments or the Portal; CLI support hasn't been added yet (#353).

@msdotnetclr The subnet address range does need to be /24 or larger; we have some pre-flight validations that will be added for the advanced networking scenarios.

@bramvdklinkenberg

bramvdklinkenberg commented May 15, 2018

Hi @slack, I deployed an AKS cluster with a custom vnet using your example ARM template. I use a vnet that has backend connectivity to an on-premises datacenter, and I thought I had to configure a ConfigMap to set the stubDomains (see https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/).
But on my newly deployed cluster I deployed a pod, and from that pod I can connect to a machine in the on-premises datacenter. When I do an nslookup I just get the DNS server from the cluster.
When I ssh into the agent node I can see the 2 custom DNS servers we have in resolv.conf. Is that setting somehow inherited by the cluster? If so, the ConfigMap is not necessary anymore, right?
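
For reference, if the stubDomains ConfigMap does turn out to be needed, the kube-dns ConfigMap from the linked doc looks roughly like this (the domain and DNS server IPs are made-up examples, not values from this thread):

# Only needed if the on-prem domain does NOT already resolve via the node's resolv.conf.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.example.com": ["10.150.0.10", "10.150.0.11"]}
EOF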

@mbrand

mbrand commented May 16, 2018

@bramvdklinkenberg I'm having the same error message. I tried to create a cluster with custom vnet multiple times, but it never worked. Always getting the "ControlPlaneAddOnsNotReady" after a long time.

Did you get it to work?

@bramvdklinkenberg

@mbrand, yes I did :). My issue was that I used a /25 subnet, but it has to be at least a /24.

@benbuckland

Hi all! Today I came across an issue when one of the nodes in the cluster died. I have three nodes in the cluster, so you would think everything would be okay, but it wasn't, and it took me a while to figure out. Remember I have a VPN back to an on-premises DB. In the GatewaySubnet I created a route for traffic destined for the Kubernetes services network (in my case 10.244.1.0/24) with next hop 10.150.1.4, which is one of the VMs in the agent pool, so traffic from the on-premises network can make it back to the Kubernetes services network. The 10.150.1.4 NIC had IP forwarding enabled, so traffic to the other VMs in the pool worked as expected, except when that node died. I presume the load balancer directed traffic to the operational nodes, but traffic was no longer routable from the on-premises network to the still-working nodes in the cluster.

Any suggestion as to how to create some resilience in this route?

@seanmck
Collaborator

seanmck commented Jun 16, 2018

I'm going to close this issue as the original feature has been delivered. Please open new issues for specific problems you're hitting with the custom vnet support.

@seanmck seanmck closed this as completed Jun 16, 2018
@ghost ghost locked as resolved and limited conversation to collaborators Aug 13, 2020