
forked Ingress addon works in scenario where upstream addon doesn't #921

Closed
donaldguy opened this issue Dec 14, 2016 · 4 comments

@donaldguy

donaldguy commented Dec 14, 2016

First off, thanks for your help so far! In general, getting k8s to run my workload has gone more smoothly and easily than I expected and has been pretty fun. I mostly wanted to post this to help out anyone else who runs into the same confusion in the meantime.

This is maybe the wrong place for this bug, but as a k8s beginner it gave me the most frustration so far, and since minikube is likely the entry point for beginners going forward, this seems like a reasonable place to file it. Feel free to indicate where (else) I should file it, if anywhere. With the transition between https://github.com/kubernetes/ingress and https://github.com/kubernetes/contrib/tree/master/ingress/controllers (and https://github.com/nginxinc/kubernetes-ingress still hanging around, of ambiguous affiliation), I ended up a bit confused about the ecosystem as well as the behavior here.

I'm gonna also (try to, if I can find the right place) file a docs bug suggesting better clarification of "single-service" ingress in the docs, as I think that contributed to my confusion.

Minikube version: v0.13.1 (probably slightly newer; the same build I was using in #909)

What happened:
After getting the ingress controller working in minikube, I looked at the examples in the docs. Possibly confusing http://kubernetes.io/docs/user-guide/ingress/#name-based-virtual-hosting with http://kubernetes.io/docs/user-guide/ingress/#single-service-ingress, and/or very possibly just indenting my YAML badly, I wrote an (inadvertently?) valid Ingress object as follows (using helm templating both to redact and because it's what I actually use):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
spec:
  rules:
  - host: {{ template "hostname" . }}
  backend:
    serviceName: {{ template "fullname" . }}
    servicePort: 8080

It worked! And I thought I was done writing that for now (other than revisiting it to add TLS).
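
For context, roughly how I checked it in minikube (a sketch; the hostname is a placeholder for what the helm template actually renders):

$ curl -H "Host: myapp.example.com" http://$(minikube ip)/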

Then, when I tried to deploy it to a cluster in AWS (using the ingress config from https://github.com/kubernetes/kops/blob/1020214f879ef7f9d1528f89860497f40a685e43/addons/ingress-nginx/v1.4.0.yaml ), it did not work.

In particular, looking at the resulting nginx config (by kubectl --namespace kube-system exec-ing into the pod), my original yaml produced a server block with the appropriate server_name, but it was still doing proxy_pass http://upstream-default-backend;. It seemed an awful lot like my Ingress was right, but that the selector wasn't matching the service properly...
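
In case it helps anyone retracing this, a rough sketch of how I was inspecting it (the app=ingress-nginx label comes from the kops addon yaml linked above; the pod name placeholder and the nginx.conf path inside the container are assumptions to adjust):

$ kubectl --context aws --namespace kube-system get pods -l app=ingress-nginx
$ kubectl --context aws --namespace kube-system exec <ingress-nginx-pod> -- cat /etc/nginx/nginx.conf | grep -A 10 server_name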

I spent a couple of hours debugging it badly, thinking it was something about either the new complexities of multi-node/VPC routing or the limitations of minikube services. Finally I decided to debug by rewriting, went back to http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingress, and discovered my error.

What you expected to happen:

It turns out this shouldn't work, per the spec as I read it / per the behavior of my AWS cluster. What I needed was more like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
spec:
  rules:
    - host: {{ template "hostname" . }}
      http:
         paths:
           - backend:
                serviceName: {{ template "fullname" . }}
                servicePort: 8080

I know you are currently running a slight fork of the ingress controller (gcr.io/k8s-minikube/nginx-ingress-controller:0.8.4 vs. gcr.io/google_containers/nginx-ingress-controller:0.8.3), and I see the version numbers differ as well, so this may be an essentially solved issue (though, as above, with 2-3 repos floating around it's currently unclear where to look for changes that might have solved my problem). But it may also be accidental emergent behavior, so I wanted to make you aware.

@r2d4
Contributor

r2d4 commented Dec 14, 2016

I don't see why there should be any differences with minikube vs. production yamls for ingress.

Can you tell us what k8s server versions are running in AWS and minikube? Is the AWS cluster multi-node or single node?

According to the spec, both are valid for Ingress

At least one of backend or rules must be specified.

It's also important to note that the docs on kubernetes.io are now for >= v1.5.0, in case something has changed between the version you're running and the current one.

As far as the ingress controller images go: the 0.8.4 image is built from #611 (comment).
The official place for ingress is now https://github.com/kubernetes/ingress, though there hasn't been a release yet. You might find more information on the Slack channel.

@donaldguy
Author

So both are definitely valid Ingress resources, but it would seem that the top-level backend doesn't associate with any host or path, and that the nginx ingress controller summarily ignores it (possibly because --default-backend-service is specified?).

It seems like it is maybe intended for default routes of some sort, redundant with the one launched separately / passed via --default-backend-service, both here in minikube and in the kops yaml?

It's also possible that its semantics make more sense with the GCE ingress controller...
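
A sanity check I could have done earlier (a sketch; the Ingress name is a placeholder):

$ kubectl --context aws --namespace kube-system get deployment ingress-nginx -o yaml | grep default-backend-service
$ kubectl --context aws get ing <my-ingress> -o yaml

The first line shows which default backend the controller was started with; the second shows how the apiserver actually stored my Ingress (i.e. whether the backend ended up at the spec level or under a rule).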

$ kubectl --context minikube version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl --context aws version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:16:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

multi-node HA cluster


NAME                             STATUS         AGE       EXTERNAL-IP
ip-172-20-116-0.ec2.internal     Ready,master   6h        54.x.x.x
ip-172-20-126-147.ec2.internal   Ready          6h        52.x.x.x
ip-172-20-150-174.ec2.internal   Ready,master   6h        52.x.x.x
ip-172-20-167-224.ec2.internal   Ready,master   6h        52.x.x.x
ip-172-20-170-27.ec2.internal    Ready          6h        54.x.x.x

Here are vastly more details about the nodes than you could want, albeit scrubbed of identifying info and proprietary image names:

kubectl --context aws get node -o yaml | sed -e '/54\./d' -e '/52\./d' -e '/tulip/d' -e '/i-/d'
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      scheduler.alpha.kubernetes.io/taints: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2016-12-14T19:23:15Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: m3.medium
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: us-east-1
      failure-domain.beta.kubernetes.io/zone: us-east-1c
      kubernetes.io/hostname: ip-172-20-116-0.ec2.internal
      kubernetes.io/role: master
    name: ip-172-20-116-0.ec2.internal
    namespace: ""
    resourceVersion: "45727"
    selfLink: /api/v1/nodes/ip-172-20-116-0.ec2.internal
    uid: c29e3c8d-c232-11e6-8b20-0af64579e224
  spec:
    podCIDR: 100.96.0.0/24
  status:
    addresses:
    - address: 172.20.116.0
      type: InternalIP
    - address: 172.20.116.0
      type: LegacyHostIP
      type: ExternalIP
    allocatable:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    capacity:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:23:15Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:23:15Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:23:15Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:23:15Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-15T01:41:39Z
      message: RouteController created a route
      reason: RouteCreated
      status: "False"
      type: NetworkUnavailable
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - b.gcr.io/kops-images/protokube:1.4.1
      sizeBytes: 296349701
    - names:
      - kope/dns-controller:1.4.1
      sizeBytes: 205839533
    - names:
      - gcr.io/google_containers/kube-proxy:v1.4.6
      sizeBytes: 202280808
    - names:
      - gcr.io/google_containers/kube-apiserver:v1.4.6
      sizeBytes: 152095108
    - names:
      - gcr.io/google_containers/kube-controller-manager:v1.4.6
      sizeBytes: 142121932
    - names:
      - gcr.io/google_containers/kube-scheduler:v1.4.6
      sizeBytes: 81295020
    - names:
      - gcr.io/google_containers/etcd:2.2.1
      sizeBytes: 28191895
    - names:
      - gcr.io/google_containers/pause-amd64:3.0
      sizeBytes: 746888
    nodeInfo:
      architecture: amd64
      bootID: e649ad3c-467d-4934-bf59-7f8c913eeb4c
      containerRuntimeVersion: docker://1.11.2
      kernelVersion: 4.4.26-k8s
      kubeProxyVersion: v1.4.6
      kubeletVersion: v1.4.6
      machineID: 7b75e055b2ac43ba8fc4a6c79e30692a
      operatingSystem: linux
      osImage: Debian GNU/Linux 8 (jessie)
      systemUUID: EC293E10-D39B-2E08-CC80-0B27582A7EBB
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2016-12-14T19:25:41Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: m4.large
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: us-east-1
      failure-domain.beta.kubernetes.io/zone: us-east-1c
      kubernetes.io/hostname: ip-172-20-126-147.ec2.internal
    name: ip-172-20-126-147.ec2.internal
    namespace: ""
    resourceVersion: "45725"
    selfLink: /api/v1/nodes/ip-172-20-126-147.ec2.internal
    uid: 1a1423ec-c233-11e6-8b20-0af64579e224
  spec:
    podCIDR: 100.96.4.0/24
  status:
    addresses:
    - address: 172.20.126.147
      type: InternalIP
    - address: 172.20.126.147
      type: LegacyHostIP
      type: ExternalIP
    allocatable:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "2"
      memory: 8178120Ki
      pods: "110"
    capacity:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "2"
      memory: 8178120Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:41Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:41Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:41Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:26:11Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-15T01:41:39Z
      message: RouteController created a route
      reason: RouteCreated
      status: "False"
      type: NetworkUnavailable
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      sizeBytes: 677443659
    - names:
      - b.gcr.io/kops-images/protokube:1.4.1
      sizeBytes: 296349701
    - names:
      - gcr.io/google_containers/kube-proxy:v1.4.6
      sizeBytes: 202280808
    - names:
      - gcr.io/google_containers/nginx-ingress-controller:0.8.3
      sizeBytes: 146818590
    - names:
      - gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
      sizeBytes: 86267953
    - names:
      - gcr.io/kubernetes-helm/tiller:v2.0.2
      sizeBytes: 68929771
    - names:
      - gcr.io/google_containers/defaultbackend:1.0
      sizeBytes: 7510068
    - names:
      - gcr.io/google_containers/pause-amd64:3.0
      sizeBytes: 746888
    nodeInfo:
      architecture: amd64
      bootID: 074c3ba9-9bb2-4e82-bb71-ba12b4200e73
      containerRuntimeVersion: docker://1.11.2
      kernelVersion: 4.4.26-k8s
      kubeProxyVersion: v1.4.6
      kubeletVersion: v1.4.6
      machineID: 6e13f8600f044a3985dcc9bf7472d9e0
      operatingSystem: linux
      osImage: Debian GNU/Linux 8 (jessie)
      systemUUID: EC2D3850-90FE-7856-2B87-BCCEEB3B89FE
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      scheduler.alpha.kubernetes.io/taints: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2016-12-14T19:23:51Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: m3.medium
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: us-east-1
      failure-domain.beta.kubernetes.io/zone: us-east-1d
      kubernetes.io/hostname: ip-172-20-150-174.ec2.internal
      kubernetes.io/role: master
    name: ip-172-20-150-174.ec2.internal
    namespace: ""
    resourceVersion: "45720"
    selfLink: /api/v1/nodes/ip-172-20-150-174.ec2.internal
    uid: d86d7bfe-c232-11e6-b077-0e890beb5374
  spec:
    podCIDR: 100.96.2.0/24
  status:
    addresses:
    - address: 172.20.150.174
      type: InternalIP
    - address: 172.20.150.174
      type: LegacyHostIP
      type: ExternalIP
    allocatable:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    capacity:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-14T19:23:51Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-14T19:23:51Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-14T19:23:51Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-14T19:23:51Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-15T01:41:39Z
      message: RouteController created a route
      reason: RouteCreated
      status: "False"
      type: NetworkUnavailable
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - b.gcr.io/kops-images/protokube:1.4.1
      sizeBytes: 296349701
    - names:
      - gcr.io/google_containers/kube-proxy:v1.4.6
      sizeBytes: 202280808
    - names:
      - gcr.io/google_containers/kube-apiserver:v1.4.6
      sizeBytes: 152095108
    - names:
      - gcr.io/google_containers/kube-controller-manager:v1.4.6
      sizeBytes: 142121932
    - names:
      - gcr.io/google_containers/kube-scheduler:v1.4.6
      sizeBytes: 81295020
    - names:
      - gcr.io/google_containers/etcd:2.2.1
      sizeBytes: 28191895
    - names:
      - gcr.io/google_containers/pause-amd64:3.0
      sizeBytes: 746888
    nodeInfo:
      architecture: amd64
      bootID: 7b6eed49-f2db-432c-ab93-ca18207f3610
      containerRuntimeVersion: docker://1.11.2
      kernelVersion: 4.4.26-k8s
      kubeProxyVersion: v1.4.6
      kubeletVersion: v1.4.6
      machineID: cd1e8f72b78c4470b9664e9a585fef02
      operatingSystem: linux
      osImage: Debian GNU/Linux 8 (jessie)
      systemUUID: EC2815DC-CBFE-8A5A-CD1D-85078D6E4221
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      scheduler.alpha.kubernetes.io/taints: '[{"key":"dedicated","value":"master","effect":"NoSchedule"}]'
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2016-12-14T19:23:34Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: m3.medium
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: us-east-1
      failure-domain.beta.kubernetes.io/zone: us-east-1e
      kubernetes.io/hostname: ip-172-20-167-224.ec2.internal
      kubernetes.io/role: master
    name: ip-172-20-167-224.ec2.internal
    namespace: ""
    resourceVersion: "45721"
    selfLink: /api/v1/nodes/ip-172-20-167-224.ec2.internal
    uid: ce3d08f0-c232-11e6-ac63-0660ca7a56fa
  spec:
    podCIDR: 100.96.1.0/24
  status:
    addresses:
    - address: 172.20.167.224
      type: InternalIP
    - address: 172.20.167.224
      type: LegacyHostIP
      type: ExternalIP
    allocatable:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    capacity:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "1"
      memory: 3857324Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2016-12-15T01:41:35Z
      lastTransitionTime: 2016-12-14T19:23:34Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2016-12-15T01:41:35Z
      lastTransitionTime: 2016-12-14T19:23:34Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2016-12-15T01:41:35Z
      lastTransitionTime: 2016-12-14T19:23:34Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2016-12-15T01:41:35Z
      lastTransitionTime: 2016-12-14T19:23:34Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-15T01:41:39Z
      message: RouteController created a route
      reason: RouteCreated
      status: "False"
      type: NetworkUnavailable
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - b.gcr.io/kops-images/protokube:1.4.1
      sizeBytes: 296349701
    - names:
      - gcr.io/google_containers/kube-proxy:v1.4.6
      sizeBytes: 202280808
    - names:
      - gcr.io/google_containers/kube-apiserver:v1.4.6
      sizeBytes: 152095108
    - names:
      - gcr.io/google_containers/kube-controller-manager:v1.4.6
      sizeBytes: 142121932
    - names:
      - gcr.io/google_containers/kube-scheduler:v1.4.6
      sizeBytes: 81295020
    - names:
      - gcr.io/google_containers/etcd:2.2.1
      sizeBytes: 28191895
    - names:
      - gcr.io/google_containers/pause-amd64:3.0
      sizeBytes: 746888
    nodeInfo:
      architecture: amd64
      bootID: 000df1e8-3358-41f1-9d86-ce00da596ac7
      containerRuntimeVersion: docker://1.11.2
      kernelVersion: 4.4.26-k8s
      kubeProxyVersion: v1.4.6
      kubeletVersion: v1.4.6
      machineID: 21ad7ed878d749c3a6d0969ea420e61c
      operatingSystem: linux
      osImage: Debian GNU/Linux 8 (jessie)
      systemUUID: EC29D283-AA97-84D6-FE8D-C7BF558AD3CA
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: 2016-12-14T19:25:01Z
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: m4.large
      beta.kubernetes.io/os: linux
      failure-domain.beta.kubernetes.io/region: us-east-1
      failure-domain.beta.kubernetes.io/zone: us-east-1e
      kubernetes.io/hostname: ip-172-20-170-27.ec2.internal
    name: ip-172-20-170-27.ec2.internal
    namespace: ""
    resourceVersion: "45726"
    selfLink: /api/v1/nodes/ip-172-20-170-27.ec2.internal
    uid: 01e76a0d-c233-11e6-8b20-0af64579e224
  spec:
    podCIDR: 100.96.3.0/24
  status:
    addresses:
    - address: 172.20.170.27
      type: InternalIP
    - address: 172.20.170.27
      type: LegacyHostIP
      type: ExternalIP
    allocatable:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "2"
      memory: 8178120Ki
      pods: "110"
    capacity:
      alpha.kubernetes.io/nvidia-gpu: "0"
      cpu: "2"
      memory: 8178120Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:01Z
      message: kubelet has sufficient disk space available
      reason: KubeletHasSufficientDisk
      status: "False"
      type: OutOfDisk
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:01Z
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:01Z
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: 2016-12-15T01:41:40Z
      lastTransitionTime: 2016-12-14T19:25:11Z
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    - lastHeartbeatTime: 2016-12-15T01:41:39Z
      lastTransitionTime: 2016-12-15T01:41:39Z
      message: RouteController created a route
      reason: RouteCreated
      status: "False"
      type: NetworkUnavailable
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      sizeBytes: 677443659
    - names:
      - b.gcr.io/kops-images/protokube:1.4.1
      sizeBytes: 296349701
    - names:
      - gcr.io/google_containers/kube-proxy:v1.4.6
      sizeBytes: 202280808
    - names:
      - gcr.io/google_containers/kubedns-amd64:1.8
      sizeBytes: 57892132
    - names:
      - gcr.io/google_containers/exechealthz-amd64:1.2
      sizeBytes: 8374840
    - names:
      - gcr.io/google_containers/kube-dnsmasq-amd64:1.4
      sizeBytes: 5126001
    - names:
      - gcr.io/google_containers/pause-amd64:3.0
      sizeBytes: 746888
    nodeInfo:
      architecture: amd64
      bootID: ab14b641-13ae-4818-aec0-68a412e0703d
      containerRuntimeVersion: docker://1.11.2
      kernelVersion: 4.4.26-k8s
      kubeProxyVersion: v1.4.6
      kubeletVersion: v1.4.6
      machineID: 253d2ab609914c0db4861e528551fbb6
      operatingSystem: linux
      osImage: Debian GNU/Linux 8 (jessie)
      systemUUID: EC2F8BC1-3239-C035-8682-976A1431FA3F
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""

And though I linked the yamls for the controller and deployment above, here they are with their inherited fields as well

kubectl --context minikube --namespace kube-system get rc nginx-ingress-controller -o yaml (the one you wrote, but here anyway):
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"ReplicationController","apiVersion":"v1","metadata":{"name":"nginx-ingress-controller","namespace":"kube-system","creationTimestamp":null,"labels":{"app":"nginx-ingress-lb","kubernetes.io/cluster-service":"true"}},"spec":{"replicas":1,"selector":{"app":"nginx-ingress-lb","kubernetes.io/cluster-service":"true"},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx-ingress-lb","kubernetes.io/cluster-service":"true","name":"nginx-ingress-lb"}},"spec":{"containers":[{"name":"nginx-ingress-lb","image":"gcr.io/k8s-minikube/nginx-ingress-controller:0.8.4","args":["/nginx-ingress-controller","--default-backend-service=$(POD_NAMESPACE)/default-http-backend","--nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf"],"ports":[{"hostPort":80,"containerPort":80},{"hostPort":443,"containerPort":443},{"hostPort":18080,"containerPort":18080}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{},"livenessProbe":{"httpGet":{"path":"/ingress-controller-healthz","port":80,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1},"readinessProbe":{"httpGet":{"path":"/ingress-controller-healthz","port":80,"scheme":"HTTP"}},"imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":60}}},"status":{"replicas":0}}'
  creationTimestamp: 2016-12-12T21:47:16Z
  generation: 1
  labels:
    app: nginx-ingress-lb
    kubernetes.io/cluster-service: "true"
  name: nginx-ingress-controller
  namespace: kube-system
  resourceVersion: "155664"
  selfLink: /api/v1/namespaces/kube-system/replicationcontrollers/nginx-ingress-controller
  uid: 8c466a9a-c0b4-11e6-8576-463433cd9861
spec:
  replicas: 1
  selector:
    app: nginx-ingress-lb
    kubernetes.io/cluster-service: "true"
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress-lb
        kubernetes.io/cluster-service: "true"
        name: nginx-ingress-lb
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: gcr.io/k8s-minikube/nginx-ingress-controller:0.8.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /ingress-controller-healthz
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          protocol: TCP
        - containerPort: 18080
          hostPort: 18080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ingress-controller-healthz
            port: 80
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 60
status:
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
kubectl --context aws --namespace kube-system get deployment ingress-nginx -o yaml (from the kops repo):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2016-12-14T19:37:29Z
  generation: 2
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx
  namespace: kube-system
  resourceVersion: "1900"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/ingress-nginx
  uid: c00bb08f-c234-11e6-8b20-0af64579e224
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
      k8s-addon: ingress-nginx.addons.k8s.io
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ingress-nginx
        k8s-addon: ingress-nginx.addons.k8s.io
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
        - --nginx-configmap=$(POD_NAMESPACE)/ingress-nginx
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: ingress-nginx
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  observedGeneration: 2
  replicas: 1
  updatedReplicas: 1

Let me know what other info could be helpful.

@donaldguy
Author

If I read http://kubernetes.io/docs/user-guide/ingress/#single-service-ingress, it seems like the intention of top-level backend declarations is to be able (presumably on GKE) to allocate a public IP for a Service without any associated host/path map?

As the text says: "There are existing Kubernetes concepts that allow you to expose a single service (see alternatives), however you can do so through an Ingress as well, by specifying a default backend with no rules." This didn't really seem to do anything appropriate in AWS, though.
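
For comparison, the backend-with-no-rules form that passage describes would look roughly like this (just a sketch with placeholder names, not the chart template above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: single-service-ingress
spec:
  backend:
    serviceName: my-service
    servicePort: 8080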

@dlorenc
Contributor

dlorenc commented Jan 5, 2017

It looks like this is getting tracked by the upstream bug. I'm going to close this for now.

@dlorenc dlorenc closed this as completed Jan 5, 2017