document how to run multiple Contours per cluster (e.g. internal and external) #855
Comments
without using …
Ok -- here are the ports I see in the original manifest
At first blush, it looks like there are no overlapping ports among the things that are running with host networking, and the other ports look to be wired up fine. @davecheney -- do you know if Envoy opens any other ports that aren't specified? Is there a way to make it dump more verbose logging to debug why it is exiting? @so0k -- it might be useful to SSH into the node and use …
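For anyone following along, the two checks suggested above might look roughly like this (config path illustrative; `-l`/`--log-level` is a standard Envoy CLI option):

```sh
# On the node: list listening TCP sockets and the processes that own
# them, to spot any port both host-networked Envoy daemonsets try to bind.
ss -tlnp

# Run Envoy with verbose logging to see why it exits.
envoy -c /config/contour.json -l debug
```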
… running. I will try this again tomorrow with a fresh mind, as I've already spent quite a lot of time breaking my head on this.
Hey @so0k can you try adding this argument to one of the Envoy daemonset manifests?
ref: https://www.envoyproxy.io/docs/envoy/v1.8.0/operations/cli.html#cmdoption-base-id
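For readers landing here: `--base-id` selects the shared-memory region Envoy uses for hot restart, so two Envoys running on the same host must not both use the default base ID of 0. A minimal sketch of the change, with the container name, image tag, and other arguments illustrative:

```yaml
# Fragment of the SECOND Envoy DaemonSet's pod spec (names illustrative).
# Only --base-id is the fix under discussion: it moves this Envoy's
# hot-restart shared memory away from the default region (base-id 0),
# so it no longer collides with the first Envoy on the node.
containers:
- name: envoy
  image: envoyproxy/envoy:v1.8.0
  args:
  - envoy
  - -c
  - /config/contour.json
  - --base-id
  - "1"
```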
Great find! Adding base-id: new gist. I didn't have time to test it yet.
@stevesloka / @so0k, confirmed <3!
@davecheney - should I do a PR with documentation about adding …
Documentation issue; I'm assigning this to beta.1. @so0k feel free to send a docs PR before then.
@so0k ping |
Here's a working configuration that uses kustomize to deploy internal and external Contours: https://github.com/jpeach/kustomize/tree/master/knative/configurations/contour |
Sorry, I moved companies, and I'm afraid I won't be able to finish this documentation task until I have a use case to switch ingress over to Contour.
Move the example deployment to Kustomize. This breaks the YAML documents in the example deployment into 4 components located in `config/components`: types, contour, envoy and certgen. These are all included in the default deployments, but operators have the option of creating deployments that don't include all the components. Deployments to various Kubernetes infrastructures are in the `deployment` directory. The base deployment pulls in all the components and sets the namespace to `projectcontour`. The `kind` deployment updates the Envoy DaemonSet to use a `NodePort` service, and the `aws` deployment enables TCP load balancing with PROXY protocol support. No special options are needed for `gke` as far as I know, but it is included for completeness. The traditional quickstart YAML is now located at `config/quickstart.yaml` and is just a rendering of the base deployment. The netlify redirect can't be updated until after a release because it points to a release branch. This updates projectcontour#855, projectcontour#1190, projectcontour#2088, projectcontour#2544. Signed-off-by: James Peach <jpeach@vmware.com>
Move the example deployment to Kustomize. This requires the `kustomize` tool, since the version of Kustomize vendored into `kubectl apply -k` is too old to support it. The YAML documents in the example deployment are broken into 4 components located in `config/components`: types, contour, envoy and certgen. These are all included in the default deployments, but operators have the option of creating deployments that don't include all the components. The `types-v1` component contains the Contour CRDs suitable for Kubernetes 1.16 or later. Deployments to various Kubernetes infrastructures are in the `deployment` directory. The base deployment pulls in all the components and sets the namespace to `projectcontour`. The `kind` deployment updates the Envoy DaemonSet to use a `NodePort` service, and the `aws` deployment enables TCP load balancing with PROXY protocol support. No special options are needed for `gke` as far as I know, but it is included for completeness. The traditional quickstart YAML is now located at `config/quickstart.yaml` and is just a rendering of the base deployment. The netlify redirect can't be updated until after a release because it points to a release branch. This updates projectcontour#855, projectcontour#1190, projectcontour#2088, projectcontour#2544. Signed-off-by: James Peach <jpeach@vmware.com>
Hi guys, I'm having a similar issue and I haven't been able to solve it with the suggestions in this thread. Basically, I need to deploy two instances of Contour on the same cluster. What is happening is that the first one that gets deployed works: Ingresses created with its class get the address assigned. After the second instance is deployed, Ingresses deployed with its class do not get the address assigned. I couldn't find anything weird or wrong in either Contour's or Envoy's logs. These are the Helm chart version and values I'm using:
Values for …
Hi @fernandodeperto, are both Envoy daemonsets starting up okay? I can't remember if it's this way in the Helm chart or not, but in our example YAMLs the Envoy daemonset uses a host port. If that port is being defaulted to something, the second Envoy deployment wouldn't start, which might produce this outcome.
Hi @youngnick, thank you for the response. Indeed, that is one of the things I've checked, and both the Contour Deployment and the Envoy DaemonSet come up normally. I can actually report that I've managed to fix the issue by deploying the two Contour instances in different namespaces. I couldn't find any differences in the logs between the two setups, but somehow in separate namespaces they work, meaning they both manage to assign ELB addresses to Ingress resources.
Oh yes, we should really say that you should only run the two instances in separate namespaces. If the two deployments were in the same namespace, there are a few things that could conflict, like the configuration ConfigMap and the leader election. For other people finding this thread: for multiple installations of Contour, we strongly recommend running each in a separate namespace.
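A sketch of that layout, with namespace and directory names hypothetical:

```sh
# One namespace per Contour instance avoids collisions on the
# configuration ConfigMap and on leader election.
kubectl create namespace projectcontour-internal
kubectl create namespace projectcontour-external

# Apply a separate copy of the manifests into each namespace
# (directory names are illustrative).
kubectl apply -n projectcontour-internal -f ./contour-internal/
kubectl apply -n projectcontour-external -f ./contour-external/
```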
In case anybody is interested: using the Bitnami Helm chart, deploying an internal and an external Contour would look like this: …
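The actual values were lost from this thread; as a rough sketch only (the value keys vary by chart version, so verify against the chart's `values.yaml` before use), the internal release might carry something like:

```yaml
# values-internal.yaml (keys approximate - verify against the Bitnami
# contour chart version you use). The external release would omit the
# "internal" annotation. Each instance also needs its own distinct
# ingress class (the exact key for that depends on the chart version).
envoy:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```

installed as two separate releases in two separate namespaces, e.g. `helm install contour-internal bitnami/contour -n contour-internal -f values-internal.yaml`, and likewise for the external one.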
@OrlinVasilev I think this one is a very good candidate for some updated docs - @EugenMayer has got some great tips here that could cover the Helm usage, but I think a page that covered "how to run multiple Contour instances in a cluster" would help a lot of people. |
@youngnick - so, a new Guide, or should this be included somewhere in the existing documentation?
I think a new Guide is probably best. I'd like to see the Guides section renamed to "cookbook" or something later, but a Guide will get this info in there, and we are getting more requests for this. I think @EugenMayer was contributing notes, not offering to write the full thing; happy to be proved wrong though.
I would rather hand this over to you to put into a format of your liking - just take it as if it were yours and put it where people can use it :)
I'll try to get this documented soon, we're asked fairly regularly about this. Will cover deploying into separate namespaces (easier) and the same namespace (harder, but still possible). |
Closes projectcontour#855. Signed-off-by: Steve Kriss <krisss@vmware.com>
Objective
I need to configure internal and external ingress controllers on AWS with NLBs:
- Internal Ingress is served by an internal NLB (only accessible from within the VPC)
- External Ingress is served by an external NLB (publicly accessible)
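The internal/external split is driven by annotations on the Envoy Services. A sketch, with names and ports illustrative; the annotations shown are the standard in-tree AWS cloud provider ones (the exact value accepted for the `internal` annotation has varied across Kubernetes versions):

```yaml
# envoy-internal Service: an internal (VPC-only) NLB.
apiVersion: v1
kind: Service
metadata:
  name: envoy-internal
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: envoy-internal
  ports:
  - name: http
    port: 80
    targetPort: 8080
---
# envoy-external Service: omit the "internal" annotation to get a
# publicly accessible NLB.
apiVersion: v1
kind: Service
metadata:
  name: envoy-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: envoy-external
  ports:
  - name: http
    port: 80
    targetPort: 8080
```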
What steps did you take and what happened:
I'm using `deployment/ds-hostnet-split` as a guideline; the full final manifests are in the gist linked below.
- `contour-external` for deploy / svc (ClusterIP) manifest; the arguments: …
- `contour-internal` for deploy / svc (ClusterIP) manifest; the arguments: … `hostNetwork: true`
- `envoy-external` for ds / svc (LoadBalancer: NLB) manifest; environment: … 8002; preStop uses 9001
- `envoy-internal` for ds / svc (LoadBalancer: NLB) manifest; environment is the same as above, using fieldRef for `spec.nodeName`; args: … 8004; preStop uses 9002
What did you expect to happen:
I expect both Envoy daemonsets to run side by side, but they exit(1) without any logs. I verified that they do not use the same hostPorts, and I reviewed and validated the configuration created by contour bootstrap.
If I delete one ds, the other starts without issue, and vice versa.
SSHing to the node and running a Docker container (without host networking) with the same Envoy config allows both containers to run side by side.
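That manual check might look roughly like this (image tag and config path illustrative):

```sh
# Run the same Envoy config in a plain bridge-networked container
# instead of the host's network namespace; each such container can
# bind its ports inside its own namespace, so both run side by side.
docker run --rm -v /tmp/contour.json:/config/contour.json \
  envoyproxy/envoy:v1.8.0 envoy -c /config/contour.json
```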
Anything else you would like to add:
Here is a gist with the full manifests:
https://gist.github.com/so0k/2cb93c405cf05fcb8ecb73fa665efbd0
Environment:
- OS (e.g. from `/etc/os-release`): CoreOS