Ingress addresses not set unless using a LoadBalancer #2539

Open
al45tair opened this issue May 19, 2020 · 8 comments
Labels
  • area/deployment: Issues or PRs related to deployment tooling or infrastructure.
  • area/httpproxy: Issues or PRs related to the HTTPProxy API.
  • area/ingress: Issues or PRs related to the Ingress API.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/accepted: Denotes an issue that has been triaged and determined to be valid.

Comments

@al45tair
Contributor

al45tair commented May 19, 2020

What steps did you take and what happened:
Installed Contour and Envoy in a cluster that doesn't have LoadBalancer support. Per the instructions, I configured Envoy to run in a DaemonSet on all the nodes and to use host networking, which is basically the same thing that the community ingress-nginx controller does in the same situation.

Then I created an Ingress to test it.

What did you expect to happen:
I expected the Ingress to list the cluster IPs. Instead I saw

status:
  loadBalancer: {}

Anything else you would like to add:
The problem appears to be that Contour watches the "envoy" service to get the addresses to apply to ingresses. If it can't find the "envoy" service, then unless you either add annotations to your Ingress or use the --ingress-status-address option, Contour sets an empty LoadBalancerStatus.

Adding annotations is annoying (you don't need to do that with ingress-nginx, after all), and the --ingress-status-address option only supports a single address. So, for a multi-node cluster where more than one node can serve ingress traffic, you're stuck using a DNS name, which would make External DNS provision a CNAME record, and that's undesirable because CNAMEs clash with everything. I'll file a separate issue about --ingress-status-address only supporting a single address.

Environment:

  • Contour version: e196186
  • Kubernetes version: (use kubectl version): Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.5", GitCommit:"e0fccafd69541e3750d460ba0f9743b90336f24f", GitTreeState:"clean", BuildDate:"2020-04-16T11:35:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/arm64"}
  • Kubernetes installer & version: kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.5", GitCommit:"e0fccafd69541e3750d460ba0f9743b90336f24f", GitTreeState:"clean", BuildDate:"2020-04-16T11:41:38Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/arm64"}
  • Cloud provider or hardware configuration: 4 x Raspberry Pi 4 (4GB)
  • OS (e.g. from /etc/os-release): Ubuntu 20.04 LTS
@youngnick
Member

Hi @al45tair, thanks for this issue. You're right, we only supported watching the service and basic addressing in the first cut of this feature.

Do you know how nginx-ingress builds the list of IPs that it puts in there? We used the Envoy service-watching method because it's what we use in our example deployment, and I hoped that the --ingress-status-address parameter would meet most use cases until we could come back around to do something else.

@al45tair
Contributor Author

@youngnick As it happens, I do know how ingress-nginx does that (sorry, got the name the wrong way around; it's very confusing that there are two), because I had to file a bug against Rancher: Rancher breaks the mechanism ingress-nginx uses, which is ironic given that ingress-nginx is what Rancher installs by default.

If you've got the ingress-nginx source code handy, look in internal/ingress/status: there's a function runningAddresses() that gets a list of all the Pods in the same namespace as the running ingress-nginx instance that have the same labels. Having obtained that list, it looks up the corresponding Nodes by name and grabs the IP addresses from there.
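
For a concrete picture, here is a rough client-go sketch of that pattern. It is not ingress-nginx's (or Contour's) actual code; the projectcontour namespace, the app=envoy selector, and the decision to collect both ExternalIP and InternalIP addresses are placeholders for illustration only.

// Sketch of the ingress-nginx runningAddresses() pattern: list the proxy Pods
// by label, then look up the Node each Pod runs on and collect its addresses.
// The "projectcontour" namespace and "app=envoy" selector are assumptions.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func runningAddresses(ctx context.Context, client kubernetes.Interface, namespace, selector string) ([]string, error) {
    // Find the proxy Pods by label, the way ingress-nginx finds its own replicas.
    pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        return nil, err
    }

    seen := map[string]bool{}
    var addrs []string
    for _, pod := range pods.Items {
        name := pod.Spec.NodeName
        if name == "" || seen[name] {
            continue
        }
        seen[name] = true

        // Look up the Node the Pod is scheduled on and take its addresses.
        node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return nil, err
        }
        for _, a := range node.Status.Addresses {
            if a.Type == corev1.NodeExternalIP || a.Type == corev1.NodeInternalIP {
                addrs = append(addrs, a.Address)
            }
        }
    }
    return addrs, nil
}

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    addrs, err := runningAddresses(context.Background(), client, "projectcontour", "app=envoy")
    if err != nil {
        panic(err)
    }
    fmt.Println(addrs)
}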

If you were going to do something similar, you'd want to avoid the label problem (or it'll break on Rancher), and I suppose you'd want to look for the envoy Pods.

@al45tair
Contributor Author

@youngnick A couple more thoughts on this: while ingress-nginx just does this at a point in time (so if the set of nodes changes, you'd have to kick it to get it to update the ingresses), I think the right way to do this is to explicitly watch the Envoy Pods for NodeName changes and then watch the resulting set of Nodes for IP address changes. That could feed into the existing mechanism in Contour that updates ingresses when the LoadBalancer address changes.
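
A minimal client-go informer sketch of that watch-and-recompute shape, not how Contour is actually wired: the projectcontour namespace, the app=envoy selector, and the final println standing in for Contour's existing status writer are all assumptions for illustration.

// Sketch only: watch the Envoy Pods and the cluster Nodes, and recompute the
// set of ingress status addresses whenever either changes.
package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // One factory scoped to the Envoy namespace for Pods, one cluster-wide for Nodes.
    podFactory := informers.NewSharedInformerFactoryWithOptions(client, 10*time.Minute,
        informers.WithNamespace("projectcontour"))
    nodeFactory := informers.NewSharedInformerFactory(client, 10*time.Minute)
    pods := podFactory.Core().V1().Pods()
    nodes := nodeFactory.Core().V1().Nodes()

    recompute := func() {
        // Same Pod -> Node -> address walk as the sketch earlier in the thread.
        sel := labels.SelectorFromSet(labels.Set{"app": "envoy"})
        envoyPods, _ := pods.Lister().Pods("projectcontour").List(sel)
        var addrs []string
        for _, p := range envoyPods {
            if p.Spec.NodeName == "" {
                continue
            }
            node, err := nodes.Lister().Get(p.Spec.NodeName)
            if err != nil {
                continue
            }
            for _, a := range node.Status.Addresses {
                if a.Type == corev1.NodeExternalIP || a.Type == corev1.NodeInternalIP {
                    addrs = append(addrs, a.Address)
                }
            }
        }
        // Contour's existing status updater would write addrs to each Ingress/HTTPProxy here.
        fmt.Println("ingress status addresses:", addrs)
    }

    onChange := cache.ResourceEventHandlerFuncs{
        AddFunc:    func(interface{}) { recompute() },
        UpdateFunc: func(interface{}, interface{}) { recompute() },
        DeleteFunc: func(interface{}) { recompute() },
    }
    pods.Informer().AddEventHandler(onChange)
    nodes.Informer().AddEventHandler(onChange)

    stop := make(chan struct{})
    podFactory.Start(stop)
    nodeFactory.Start(stop)
    select {}
}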

@youngnick
Member

Yes, I think if we're going to take addresses from the node, we should make sure it's updated whenever the Envoy pods are. It's quite a bit of additional complexity, but at least the Service status watcher gives us a pattern for it. I'll have a think about how we would build that and put something in here.

@youngnick youngnick added the area/deployment, area/httpproxy, area/ingress, kind/feature and lifecycle/accepted labels May 28, 2020
@youngnick youngnick added the help wanted label Jun 9, 2020
@michmike michmike added this to Parking Lot 3 in Contour Project Board Jun 9, 2020
@youngnick
Member

I've added the help-wanted label here. For anyone who wants to pick this up, the todo is:

  • Understand how the nginx-ingress version of this works.
  • Make a proposal here for how a Contour version would work.
  • Start working on the code.

@xaleeks xaleeks moved this from P3 to P1 in Contour Project Board Aug 16, 2021
@nefelim4ag

https://github.com/kubernetes/ingress-nginx/blob/3c86f838d4a833ee8905cce6d5ef06d4b534f786/internal/ingress/status/status.go#L174

They simply have more complex logic for pod discovery and service parsing:
https://github.com/kubernetes/ingress-nginx/blob/38c73233f3db84866892c68845fa188802550d0e/internal/ingress/status/status.go#L339

By default, without any options, ingress-nginx just describes itself: https://github.com/kubernetes/ingress-nginx/blob/38c73233f3db84866892c68845fa188802550d0e/internal/ingress/status/status.go#L188
It gets its own labels and searches for the other ingress pods in the same namespace.

So for Contour it's not so simple, because we need a way to specify how to find the Envoy pods (by service?) and then, depending on that, do something like:

if the service is headless:
  get the nodes:
    get the ExternalIPs
    if empty:
      get the InternalIPs
...
and so on
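
To flesh out that pseudocode, here is a hedged Go sketch of the suggested fallback. The statusAddresses and nodeAddress helpers are hypothetical names for illustration; checking for a ClusterIP of "None" is the standard way to detect a headless Service, but the preference order is just the suggestion made above.

// Sketch of the suggested fallback: only for a headless Service, take each
// node's ExternalIP and fall back to its InternalIP when there is none.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// nodeAddress prefers an ExternalIP and falls back to an InternalIP.
func nodeAddress(node *corev1.Node) string {
    var internal string
    for _, a := range node.Status.Addresses {
        switch a.Type {
        case corev1.NodeExternalIP:
            return a.Address
        case corev1.NodeInternalIP:
            internal = a.Address
        }
    }
    return internal
}

// statusAddresses applies the headless-service branch of the pseudocode above.
func statusAddresses(svc *corev1.Service, nodes []*corev1.Node) []string {
    if svc.Spec.ClusterIP != corev1.ClusterIPNone {
        return nil // non-headless services would keep the existing behaviour
    }
    var addrs []string
    for _, n := range nodes {
        if addr := nodeAddress(n); addr != "" {
            addrs = append(addrs, addr)
        }
    }
    return addrs
}

func main() {
    // Tiny smoke test with a fabricated node that only has an InternalIP.
    node := &corev1.Node{}
    node.Status.Addresses = []corev1.NodeAddress{{Type: corev1.NodeInternalIP, Address: "10.0.0.5"}}
    svc := &corev1.Service{}
    svc.Spec.ClusterIP = corev1.ClusterIPNone
    fmt.Println(statusAddresses(svc, []*corev1.Node{node}))
}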

@youngnick
Member

Thanks for those notes @nefelim4ag, that's excellent.

@nefelim4ag

Something like this:

kubectl -n contour get endpoints contour-envoy -o json | jq '.subsets[].addresses[].nodeName' -r | xargs -I{} kubectl get node {} -o json | jq '.status.addresses[] | select(.type=="ExternalIP") | .address' -r
34.244.212.1
34.224.131.105
34.252.211.5

Just for your information: we've been running Contour in our production clusters for almost 1.5 years, and it handles about 4000 req/s (TLS on ARM!) at peak every day.
We deploy it with the Bitnami chart; I've sent them some patches to fix corner cases.
The setup is a Deployment + hostNetwork + HPA/PDB, with Amazon Spot instances in places.

For publishing we use a dirty hack: the external-dns hostname annotation on a NodePort service, combined with a CNAME.

contour:
  extraArgs:
    - "--ingress-status-address=${local.contour_dns_alias}"
envoy:
  service:
    type: NodePort
    externalTrafficPolicy: Local
    annotations:
      external-dns.alpha.kubernetes.io/ttl: "300"
      external-dns.alpha.kubernetes.io/hostname: ${local.contour_dns_alias}

So external-dns creates an A record with the ExternalIP addresses of the Envoy nodes, and Contour just sets the name of that A record as a CNAME on the Ingress/HTTPProxy resources.

Also, we disable the shutdown manager and use a termination grace period of about 600s, because we expect migration to other nodes via DNS drain and want to avoid traffic spikes: while a node is slowly shutting down, it still serves requests from clients.
(That also covers the case where someone accidentally moves all the pods into a terminating state, so it's an almost bulletproof solution.)

So, generally speaking, anyone can already run Contour in the same setup as the NGINX ingress controller.
There are only two restrictions with that solution:
