forked Ingress addon works in scenario where upstream addon doesn't #921
Comments
I don't see why there should be any differences between minikube and production yamls for ingress. Can you tell us which k8s server versions are running in AWS and in minikube? Is the AWS cluster multi-node or single-node? According to the spec, both are valid for Ingress.
It's also important to note that the docs on kubernetes.io are for >=v1.5.0 now, in case something might have changed between what you're running and the current version. As for the ingress controller images: the 0.8.4 image is built from #611 (comment)
So definitely both are valid ingress resources, but it would seem that the top-level default backend is maybe intended for default routes of some sort, redundant to the default backend launched separately / passed by flag. It's also possible that its semantics make more sense with the GCE ingress controller...
multi-node HA cluster
Here are vastly more details about the nodes than you could want, albeit scrubbed of identity and proprietary image names: `kubectl --context aws get node -o yaml | sed -e '/54\./d' -e '/52\./d' -e '/tulip/d' -e '/i-/d'`
And though I linked the yamls for the controller and deployment above, here they are with their inherited fields as well: `kubectl --context minikube --namespace kube-system get rc nginx-ingress-controller -o yaml` (the one you wrote, but here anyway), and
`kubectl --context aws --namespace kube-system get deployment ingress-nginx -o yaml` (from the kops repo).
Let me know what other info could be helpful.
If I read http://kubernetes.io/docs/user-guide/ingress/#single-service-ingress, it seems kinda like the intention of the top-level backend is to expose a single service. As the text says: "There are existing Kubernetes concepts that allow you to expose a single service (see alternatives), however you can do so through an Ingress as well, by specifying a default backend with no rules." This didn't really seem to do anything appropriate in AWS though.
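For reference, the "single service" form that section of the docs describes is a default backend with no rules at all; a minimal sketch (names here are hypothetical, not from this thread) would be:

```yaml
# Hypothetical single-service Ingress per the docs: a top-level default
# backend and no rules. All traffic goes to testsvc:80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
```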
It looks like this is getting tracked by the upstream bug. I'm going to close this for now.
First off, thanks for your help so far! In general, getting k8s to run my workload has gone more smoothly and easily than I expected, and has been pretty fun. I mostly wanted to post this to help out anyone else who has any confusion in the meantime.
This is maybe the wrong place for this bug, but as a k8s beginner it gave me the most frustration so far, and since minikube is likely the entrypoint for beginners going forward, this seems like a reasonable place to file it. Feel free to indicate where (else) I should file, if anywhere. With the transition between https://github.com/kubernetes/ingress and https://github.com/kubernetes/contrib/tree/master/ingress/controllers (and https://github.com/nginxinc/kubernetes-ingress still hanging around, of ambiguous affiliation), I ended up a bit confused about the ecosystem as well as the behavior here.
I'm gonna also (try to, if I can find the right place) file a docs bug suggesting better clarification of "single-service" ingress in the docs, as I think that contributed to my confusion.
Minikube version: v0.13.1 (probably slightly newer; same build I was using in #909)
What happened:
After getting the ingress-controller working in minikube, I looked at examples in the docs. Maybe because I confused http://kubernetes.io/docs/user-guide/ingress/#name-based-virtual-hosting with http://kubernetes.io/docs/user-guide/ingress/#single-service-ingress, and/or because I'm very possibly bad at indenting yaml properly, I wrote an (inadvertently?) valid Ingress object (using the `helm` naming both as a redaction and because it's true):
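The original manifest wasn't preserved in this thread. A guess at its shape, going by the later description (a host rule that produced the right `server_name`, but with the backend at the top level rather than under the rule's paths), might be roughly:

```yaml
# Guessed reconstruction, not the author's actual yaml. The host and
# service names are hypothetical. Note the backend sits at the top level
# (the default-backend position) while the rule has no http paths.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helm-ingress
spec:
  backend:
    serviceName: helm
    servicePort: 80
  rules:
  - host: helm.example.com
```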
It worked! And I thought I was done writing that for now (other than revisiting to add tls).
Then when I tried to deploy it to a cluster in AWS (using the ingress config from https://github.com/kubernetes/kops/blob/1020214f879ef7f9d1528f89860497f40a685e43/addons/ingress-nginx/v1.4.0.yaml ), it did not work.
In particular, looking at the resulting nginx config (by `kubectl --namespace kube-system exec`ing into the pod), my original yaml resulted in a server block with the appropriate `server_name`, but still doing `proxy_pass http://upstream-default-backend;`.
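In other words, the generated server block probably looked something like this sketch (the host is hypothetical; the exact rendered config wasn't captured in the thread):

```nginx
# The server_name matched the Ingress rule, but traffic still went to
# the controller's default backend instead of the intended service.
server {
    listen 80;
    server_name helm.example.com;

    location / {
        proxy_pass http://upstream-default-backend;
    }
}
```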
It seemed an awful lot like my ingress was right, but that the selector wasn't matching the service properly. I spent a couple hours debugging it badly, thinking it was something about either the new complexities of multi-node/VPC routing or the limitations of minikube services. Finally I decided to debug by rewriting, went back to http://kubernetes.io/docs/api-reference/extensions/v1beta1/definitions/#_v1beta1_ingress, and discovered my error.
What you expected to happen:
It turns out this shouldn't work, per the spec as I read it / per the behavior of my AWS cluster. What I needed was more like:
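The corrected yaml also wasn't preserved here; a sketch of the spec-conformant form (hypothetical names, matching the guessed example above only in spirit) nests the backend under the rule's http paths:

```yaml
# Sketch of the spec-conformant form: the backend belongs under the
# rule's http paths, not at the top level. Names are hypothetical.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helm-ingress
spec:
  rules:
  - host: helm.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helm
          servicePort: 80
```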
I know you are currently running a slight fork of ingress (`gcr.io/k8s-minikube/nginx-ingress-controller:0.8.4` vs `gcr.io/google_containers/nginx-ingress-controller:0.8.3`), and I see the version numbers differ as well, so this may be an essentially solved issue (though, as above, with 2-3 repos running around it's currently unclear where to look for changes that might have solved my problem). But it may also be an accidental emergent behavior, so I wanted to make you aware.