Explain why no subnets are found for creating an ELB on AWS #29298

sdouche opened this Issue Jul 20, 2016 · 30 comments
sdouche commented Jul 20, 2016 edited

I created a Kubernetes cluster from coreos-aws (with existing VPC and subnets). I can't create an ELB on it.

The file:

apiVersion: v1
kind: Service
metadata:
  name: nginxservice
  labels:
    name: nginxservice
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer

The command:

kubectl --kubeconfig=kubeconfig describe svc nginxservice
Name:           nginxservice
Namespace:      default
Labels:         name=nginxservice
Selector:       app=nginx
Type:           LoadBalancer
Port:           <unnamed>   80/TCP
NodePort:       <unnamed>   31870/TCP
Session Affinity:   None
  FirstSeen             LastSeen            Count   From            SubobjectPath   Reason          Message
  Wed, 20 Jul 2016 18:41:15 +0200   Wed, 20 Jul 2016 18:41:20 +0200 2   {service-controller }           CreatingLoadBalancer    Creating load balancer
  Wed, 20 Jul 2016 18:41:15 +0200   Wed, 20 Jul 2016 18:41:20 +0200 2   {service-controller }           CreatingLoadBalancerFailed  Error creating load balancer (will retry): Failed to create load balancer for service default/nginxservice: could not find any suitable subnets for creating the ELB

I manually added the missing KubernetesCluster tag to the subnet, without result. Can you add a clear message about what is missing?
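For readers hitting the same error: the service controller discovers candidate subnets by tag. A minimal sketch of applying that tag with the AWS CLI, assuming a hypothetical subnet ID and cluster name:

```shell
# Placeholders; substitute your real values.
CLUSTER_NAME="my-cluster"
SUBNET_ID="subnet-0abc1234"

# Tag the subnet so the Kubernetes service controller will consider it
# when placing an ELB for a Service of type LoadBalancer.
aws ec2 create-tags \
  --resources "$SUBNET_ID" \
  --tags "Key=KubernetesCluster,Value=${CLUSTER_NAME}"
```

The controller retries failed ELB creation periodically, so after tagging it should be enough to wait for the next attempt.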

@apelisse apelisse added the team/ux label Jul 20, 2016

cc @pwittrock Not sure if this is support or an actual issue?

@sdouche sdouche referenced this issue in coreos/coreos-kubernetes Jul 20, 2016

Production Quality Deployment #340

4 of 18 tasks complete
colhom commented Jul 20, 2016

@sdouche are you perhaps running out of free IP addresses in your subnet? Each ELB also needs a separate network interface in (each) target subnet, and I believe the rule is that either 5 or 8 free addresses must be available in the subnet for ELB creation to be allowed.

@cgag ran into this recently during some operational work here at CoreOS and told me about it.

sdouche commented Jul 20, 2016 edited

Hi @colhom,
I have 240 free IPs in the subnet. The issue is finding a suitable subnet, not a free IP.

colhom commented Jul 20, 2016

@sdouche could I see the diff that allows you to deploy to an existing subnet? I've been curious to see how folks are doing this; we use route tables and VPC peering heavily, so in our case we have no need to deploy to the same subnet.

colhom commented Jul 20, 2016

Or are you just modifying the stack-template.json after render but prior to up?

sdouche commented Jul 20, 2016 edited

Just modified the stack-template.json and removed the creation of network items (more details here: coreos/coreos-kubernetes#340)

qqshfox commented Jul 21, 2016

@sdouche Are those subnets private? A public ELB can't be created in private subnets. K8s will get all subnets tagged with the correct KubernetesCluster value, then ignore private subnets when creating a public ELB.

You can try tagging a public subnet with the correct KubernetesCluster value, then wait for k8s to retry creating the ELB in that subnet.

sdouche commented Jul 21, 2016

@qqshfox good point, it's a private subnet. Why are private subnets ignored? How do I create a private cluster?

qqshfox commented Jul 21, 2016 edited

You can create an internal ELB by using some magic k8s metadata tag.

sdouche commented Jul 21, 2016

"some magic k8s metadata tag"? What are they?


@sdouche You have to tag your subnet with the "KubernetesCluster" tag. I see you used kube-aws before; you can look at that for inspiration on how to properly create your subnets. Also note that making a load balancer in a private subnet doesn't make much sense if you want to expose a service to the world (traffic can't route in).

sdouche commented Jul 21, 2016 edited

Hi @pieterlange. OK, so if I want a private cluster, how do I expose services and pods without an ELB? Do I need to route between the 2 overlay networks? How do I do that? I suppose with Flannel's aws backend.


I do not understand what you're trying to accomplish, so it's a little bit difficult to help. Issues like these (this is starting to look like a support request) are better solved through Slack chat or Stack Overflow, as there's no actionable material for the developers here. I suggest closing the ticket and trying over there.

sdouche commented Jul 21, 2016 edited

You're right, sorry. Back to the initial request: I think it would be better to write "could not find any public subnets for creating the ELB" (for a public ELB of course, which is the default). What do you think?

@justinsb justinsb was assigned by pwittrock Jul 21, 2016

@justinsb WDYT?

manojlds commented Oct 7, 2016

How do I create a private ELB with private subnets?

@pieterlange pieterlange referenced this issue in coreos/kube-aws Nov 11, 2016

Creating cluster with an existing subnet #52


Some information in #17620 about private ELBs.

druidsbane commented Nov 15, 2016 edited

Has anyone gotten this to work recently? I can get it to create the internal/private ELB, but none of the node machines are added to it. If I manually add them, everything works fine, so it is set up properly except for adding the node ASG or the nodes themselves.

@justinsb Is there some annotation I need to use to allow it to find the nodes it should add to the private ELB? I'm creating the cluster with kubeadm to join the nodes, plus the AWS cloud provider integration. The subnets, VPCs and autoscaling groups are all tagged with "KubernetesCluster" and a name. That does propagate to the ELB, but none of the node instances are picked up. I don't see anything in the code that adds the node ASG to the ELB based on an annotation...
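One way to sanity-check the instance side is to list which instances actually carry the cluster tag the cloud provider matches on (filter values here are hypothetical):

```shell
# List instance IDs carrying the cluster tag; these are the instances
# the AWS cloud provider can associate with the cluster's ELBs.
aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=my-cluster" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text
```

If a node's instance is missing from this list, it will not be attached to ELBs created for the cluster.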


I have the same problem. I've got Kubernetes running in a private subnet. To explain it a bit further (this is AWS specific): our infrastructure team has specific security requirements. We need to have three layers (subnets) in one VPC. Diagram:

type              connection        components
public subnet     internet gateway  ELB
private subnet 1  NAT gateway       Kubernetes (master/nodes)
private subnet 2  Direct Connect    proxy for on-premise server access

For this to work I had to manually create an ELB in layer 1 (public subnet) and point it at the master nodes in layer 2 (private subnet 1). I also installed the dashboard, and this works fine together with the kubectl command line tool. (Both are exposed to the internet.)

However when I deploy an app (e.g. nginx) I get the following error:

Error creating load balancer (will retry): Failed to create load balancer for service default/my-nginx: could not find any suitable subnets for creating the ELB

The Kubernetes dashboard says the service-controller is the source of this. And when I run:

 $ kubectl get services

it outputs:

    kubernetes   100.xx.x.1      <none>        443/TCP   3h
    my-nginx     100.xx.xx.99    <pending>     80/TCP    1h

Is there a way to tell the controller which subnet it should use to create the load balancer for the service?
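The controller picks subnets by tag, so listing the subnets that carry the cluster tag (the cluster name below is a placeholder) shows what it can see:

```shell
# Show the subnets the service controller can choose from for this cluster.
aws ec2 describe-subnets \
  --filters "Name=tag:KubernetesCluster,Values=my-cluster" \
  --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' \
  --output table
```

An empty result here matches the "could not find any suitable subnets" error.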

rokka-n commented Jan 13, 2017 edited

Still the same problem with provisioning ELBs for ingress for instances in private subnets.

But no worries, Kubernetes is built upon 15 years of experience running production workloads at Google. Amazon will fix their ELBs sometime soon.

cemo commented Jan 17, 2017

@cyberroadie How did you solve your problem? I am in the same situation and have no idea how to resolve it.

cyberroadie commented Jan 17, 2017 edited

Manually creating the routes via the AWS web interface.

JayBee6 commented Jan 17, 2017

@cemo @rokka-n @cyberroadie I was able to fix this by tagging the subnets in AWS.

cemo commented Jan 17, 2017 edited

@JayBee6 @cyberroadie

I gave it a try today and found two errors of mine. I prepared my environment with Terraform and kubeadm.

  1. In kubeadm, I had missed adding the AWS provider configuration.
  2. I checked the aws.go source code and found that it has some parts related to subnet configuration. It turned out I had missed adding some tags to the subnets.

After that I successfully created an ELB, but no instances were attached to it. Maybe I need to add some tags to the instances as well. Any idea about this, @JayBee6?

JayBee6 commented Jan 17, 2017

@cemo did you create the load balancers manually? If yes, then you might have to. I have the following tags on my instances.

KubernetesCluster : clustername
kz8s : clustername

cemo commented Jan 17, 2017

@JayBee6 I exposed my Kubernetes service as a LoadBalancer. It created an AWS ELB, but instances were not attached to it. I manually attached the instances and everything started to work.


Error creating load balancer (will retry): Failed to create load balancer for service default/my-nginx: could not find any suitable subnets for creating the ELB

@JayBee6 @cemo @rokka-n @cyberroadie @druidsbane @sdouche YMMV, but below is what I do and my understanding of how AWS ELBs created by k8s are targeted. I'm using clusters created with kube-aws (so AWS CloudFormation) to deploy to private subnets in existing VPCs. This sounds like a similar situation to the discussion above. I don't do anything manually (💩💩💩), so the below might be useful.

When you are deploying into an existing AWS VPC, there can be dozens of existing subnets, and k8s has no way to work out which ones should house the external load balancers. AFAIK k8s only checks the subnets the cluster is actually housed in, so if none of those subnets are public (i.e. have an Internet gateway) then you'll get the error above.

This is easily fixed by tagging the public subnet(s) that you want k8s to use. Pick the public subnet you want and add both of the following tags (the value of the second tag is blank). I usually tag one DMZ subnet in each AZ the cluster occupies.

Name                     Value
KubernetesCluster        your-cluster-name
kubernetes.io/role/elb   (blank)

@manojlds Likewise for internal ELBs. You add the annotation service.beta.kubernetes.io/aws-load-balancer-internal to your Service, and tag the preferred subnet(s) for internal load balancers with both of the following tags.

Name                              Value
KubernetesCluster                 your-cluster-name
kubernetes.io/role/internal-elb   (blank)

Having the cluster name in these tags is great, as it makes it easy to have the ELBs turn up in the right places when you have multiple k8s clusters in the same VPC. I just tag the target subnets for ELBs as part of cluster creation, and everything is automagic after that.

Real documentation for this is indeed lacking, but Go is highly readable. I found the above by surfing the source code for interesting tags and following those constants around to see what they did.
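Putting the pieces from the comment above together, a minimal Service manifest for an internal ELB might look like the sketch below. Note: older versions of the AWS cloud provider expected the annotation value "0.0.0.0/0" rather than an empty string; all names here are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
  annotations:
    # Ask the AWS cloud provider for an internal (private) ELB.
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx
```

With this annotation set, the controller looks for subnets tagged for the internal ELB role instead of public ones.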

rokka-n commented Jan 23, 2017

Well, that's just some tags hidden in the Go code; put it in production, man.
What could possibly go wrong?


I actually still have trouble getting this all set up. I have to set it up manually, as we already have quite a lot of infrastructure and assets that we have to conform to.

At this point, I've gotten k8s to create an internal ELB pointing at the right subnets and whatnot. But it does not attach any of the instances where the pods have launched. I have my kube minions tagged properly with KubernetesCluster : clustername, but I don't know what else is missing.


I currently have, in AWS, a single VPC with multiple k8s clusters, each in its own private subnet. I also have a public subnet that I want all the individual clusters to use for deploying external ELBs.

If I don't set the "KubernetesCluster" tag, everything works as expected, with the exception of not really knowing which k8s-created resources belong to which cluster.

Of course, setting the "KubernetesCluster" tag on all the relevant resources with the respective cluster name means I can easily identify which k8s resources belong to which cluster, but how do I tag the shared public subnets?

Simply omitting the tag results in no ELB being created for a service, as there is no suitable place to put one. And adding the tag results in the public subnets being locked to a single k8s cluster.

Is there any way to "share" the public subnet?
