
Ingress status #1976

Closed · stevesloka wants to merge 2 commits

Conversation

stevesloka (Member)

Fixes #403 by updating Ingress resources with the status information from Envoy's type=LoadBalancer service.

This also adds two new args to contour:

  • envoy-service-name: Envoy service name (default: envoy)
  • envoy-service-namespace: Envoy service namespace (default: the namespace where Contour is running)

Signed-off-by: Steve Sloka <slokas@vmware.com>
configuring how to set up the Envoy service reference

Signed-off-by: Steve Sloka <slokas@vmware.com>
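For illustration, here is a minimal sketch of how the two flags above might be registered with kingpin (the flag names and the "envoy" default come from this PR; the surrounding wiring is an assumption, not Contour's actual code):

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/alecthomas/kingpin.v2"
)

func main() {
	var envoyServiceName, envoyServiceNamespace string

	app := kingpin.New("contour", "Contour Kubernetes ingress controller.")
	serve := app.Command("serve", "Serve xDS API traffic.")

	serve.Flag("envoy-service-name", "Envoy service name.").
		Default("envoy").StringVar(&envoyServiceName)
	// No static default here: an empty value is what triggers the
	// namespace lookup discussed later in this thread.
	serve.Flag("envoy-service-namespace", "Envoy service namespace.").
		StringVar(&envoyServiceNamespace)

	kingpin.MustParse(app.Parse(os.Args[1:]))
	fmt.Println(envoyServiceName, envoyServiceNamespace)
}
```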
@jpeach (Contributor) commented Dec 3, 2019

@stevesloka FYI, this will have significant merge conflicts with #1964

@davecheney (Contributor) left a comment

Thank you for working on this. I have a concern about tying the updating of the external IP address on k8s records to the dag generation cycle. I was hoping that the implementation would be more like the way the nginx controller does it, as a separate goroutine which is started by leader election and runs autonomously. I think this is the right way to do this because the inputs into updating the status of k8s Ingress objects are only the service record for Envoy itself.
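A rough sketch of that suggested shape (all names, signatures, and the polling approach are assumptions for illustration, not code from this PR; client-go calls use the current, context-taking signatures):

```go
// Package status sketches a goroutine, gated on leader election, that
// watches only the Envoy service and mirrors its load-balancer addresses
// onto Ingress status, independently of the dag generation cycle.
package status

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func runIngressStatusUpdater(client kubernetes.Interface, elected, stop <-chan struct{}, ns, name string) {
	// Only the elected leader should write status.
	select {
	case <-elected:
	case <-stop:
		return
	}

	tick := time.NewTicker(30 * time.Second)
	defer tick.Stop()
	for {
		svc, err := client.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && len(svc.Status.LoadBalancer.Ingress) > 0 {
			// Here: list the Ingress objects this Contour owns (by ingress
			// class) and UpdateStatus each with svc.Status.LoadBalancer.
		}
		select {
		case <-tick.C:
		case <-stop:
			return
		}
	}
}
```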

```diff
@@ -99,7 +99,17 @@ func main() {
 	}
 }
 
 func newClient(kubeconfig string, inCluster bool) (*kubernetes.Clientset, *clientset.Clientset, *coordinationv1.CoordinationV1Client) {
+// Returns the current namespace in which the controller is running
+// If error returns empty string
```
davecheney (Contributor):

s/returns the empty string/returns "projectcontour"

```go
// Returns the current namespace in which the controller is running
// If error returns empty string
func getCurrentNamespace() string {
	ns, _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(clientcmd.NewDefaultClientConfigLoadingRules(), &clientcmd.ConfigOverrides{}).Namespace()
```
davecheney (Contributor):

wow, that's a function name and a half :)

How does this determine where it's loading the kubeconfig from? If it's magic, can we use the same magic for contour's --kubeconfig flag?

stevesloka (Member, Author):

It's using client-go to determine which namespace Contour is deployed into. I was trying to avoid folks having to add extra args to deployments by automatically finding this information.

It can still be overridden by args if the user would like. I also didn't want to read the /var/run/secrets/kubernetes.io/serviceaccount/namespace path in the pod, which seems like a hack when we have a k8s client (via client-go).

```diff
@@ -112,6 +122,7 @@ func newClient(kubeconfig string, inCluster bool) (*kubernetes.Clientset, *clien
 
 client, err := kubernetes.NewForConfig(config)
 check(err)
```
davecheney (Contributor):

nit: maybe use \n on line 127 and onwards for consistency?

stevesloka (Member, Author):

Sorry, I don't follow your comment. I didn't change any of this apart from moving from kubernetes.Clientset to kubernetes.Interface. The check(err) stops execution of the program.

```diff
@@ -27,7 +27,7 @@ import (
 
 // newLeaderElector creates a new leaderelection.LeaderElector and associated
 // channels by which to observe elections and depositions.
-func newLeaderElector(log logrus.FieldLogger, ctx *serveContext, client *kubernetes.Clientset, coordinationClient *coordinationv1.CoordinationV1Client) (*leaderelection.LeaderElector, chan struct{}, chan struct{}) {
+func newLeaderElector(log logrus.FieldLogger, ctx *serveContext, client kubernetes.Interface, coordinationClient *coordinationv1.CoordinationV1Client) (*leaderelection.LeaderElector, chan struct{}, chan struct{}) {
```
davecheney (Contributor):

This looks like an unrelated change, maybe pull it out into its own PR to make this easier to review.

stevesloka (Member, Author):

I wanted to add a test which actually verifies that I can set the status of an Ingress object. For that I can use the client-go fake objects, which give me a fake k8s API. In order to use them I need to pass around the interface, which I changed in (https://github.com/projectcontour/contour/pull/1976/files/952c08508fffd3c71a7c0030f6a8e41511a3ba13#diff-b62d9e38d59686d41f8f41098ff436f9R112) so that newClient returns an interface. That change bubbles down through all the other methods.
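For context, this is the testing pattern that taking kubernetes.Interface enables with client-go's fakes. An assumed test sketch, written against the current networking/v1 API (this PR predates networking/v1 and used the older v1beta1 Ingress types):

```go
package status

import (
	"context"
	"testing"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

func TestCanSetIngressStatus(t *testing.T) {
	// fake.NewSimpleClientset satisfies kubernetes.Interface, so code written
	// against the interface can be exercised without a real API server.
	var client kubernetes.Interface = fake.NewSimpleClientset(&networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "app"},
	})

	ing, err := client.NetworkingV1().Ingresses("default").Get(context.TODO(), "app", metav1.GetOptions{})
	if err != nil {
		t.Fatal(err)
	}

	// Write a load-balancer address into the Ingress status subresource.
	ing.Status.LoadBalancer.Ingress = []networkingv1.IngressLoadBalancerIngress{{IP: "203.0.113.10"}}
	if _, err := client.NetworkingV1().Ingresses("default").UpdateStatus(context.TODO(), ing, metav1.UpdateOptions{}); err != nil {
		t.Fatal(err)
	}
}
```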

```diff
@@ -125,6 +127,11 @@ func doServe(log logrus.FieldLogger, ctx *serveContext) error {
 // step 1. establish k8s client connection
 client, contourClient, coordinationClient := newClient(ctx.Kubeconfig, ctx.InCluster)
 
+// step 1b. get current namespace that Contour is running inside if not defined
+if ctx.envoyServiceNamespace == "" {
```
davecheney (Contributor):

Setting ctx.envoyServiceName and ctx.envoyServiceNamespace is handled differently; is there a way to make them more similar?

stevesloka (Member, Author):

The name has a static default; the namespace comes from a lookup. I'm using the check for empty to determine whether a user has already set the flag. If they haven't, I look up what namespace Contour is running in.

```diff
@@ -164,6 +170,7 @@ func newServeContext() *serveContext {
 		Name: "leader-elect",
 	},
 	UseExtensionsV1beta1Ingress: false,
+	envoyServiceName: "envoy",
```
davecheney (Contributor):

Can envoyServiceNamespace default to projectcontour?

stevesloka (Member, Author):

It does if it cannot be looked up. I like the idea of requiring fewer flags to set on Contour, so I opted to look up the namespace which Contour is running in. If we take that out, we force users to set these flags and I can remove the lookup.

```diff
@@ -75,7 +75,7 @@ func WriteSecretsYAML(outputDir, namespace string, certdata map[string][]byte) e
 
 // WriteSecretsKube writes all the keypairs out to Kube Secrets in the
 // passed Kube context.
-func WriteSecretsKube(client *kubernetes.Clientset, namespace string, certdata map[string][]byte) error {
+func WriteSecretsKube(client kubernetes.Interface, namespace string, certdata map[string][]byte) error {
```
davecheney (Contributor):

This looks like an unrelated change, I think this file and the one above can be pulled out into their own PR to make this simpler to review.

stevesloka (Member, Author):

See the previous comment about testing with fakes.

@davecheney (Contributor)

> I think this is the right way to do this because the inputs into updating the status of k8s Ingress objects are only the service record for Envoy itself.

I'm sorry, I was not correct. k8s Ingress status depends on the Envoy pod's service record and the ingress class setting of contour. With that said, I think this feature would be better implemented as stand-alone as possible, because it takes different inputs than the dag and runs at a different timescale.

@youngnick (Member)

Also, I think that it's important that our design here allow for other deployment methods - nginx gets around this by allowing you to either specify the service, or just IPs or FQDNs directly. Being able to cover 'monitor this service' and 'just let me specify the IPs or FQDNs' hits all the deployment scenarios I can think of, in cloud and otherwise.

If there's a concurrency-safe struct somewhere that stores the IPs or FQDNs to go into the Ingress status stanza, then we can either poke a value in there directly at startup (for statically specified IPs), or have a goroutine pull them from an Informer.
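A minimal sketch of such a holder (hypothetical names, just to make the idea concrete):

```go
package status

import "sync"

// AddressHolder stores the IPs or FQDNs destined for the Ingress status
// stanza. Set can be called once at startup (statically specified addresses)
// or repeatedly from an informer-driven goroutine (service monitoring).
type AddressHolder struct {
	mu    sync.RWMutex
	addrs []string
}

func (h *AddressHolder) Set(addrs []string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.addrs = append([]string(nil), addrs...) // copy to avoid aliasing the caller's slice
}

func (h *AddressHolder) Get() []string {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return append([]string(nil), h.addrs...)
}
```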

@davecheney (Contributor) commented Dec 3, 2019 via email

@stevesloka (Member, Author) commented Dec 3, 2019

> Also, I think that it's important that our design here allow for other deployment methods - nginx gets around this by allowing you to either specify the service, or just IPs or FQDNs directly. Being able to cover 'monitor this service' and 'just let me specify the IPs or FQDNs' hits all the deployment scenarios I can think of, in cloud and otherwise.

@youngnick The implementation allows for this. Right now I just implemented the service lookup. To allow users to specify their own, we need to add additional args.

@stevesloka (Member, Author)

> I'm sorry, I was not correct. k8s Ingress status depends on the Envoy pod's service record and the ingress class setting of contour. With that said, I think this feature would be better implemented as stand-alone as possible, because it takes different inputs than the dag and runs at a different timescale.

Doing it in the dag build gave me the ingress class, service info, etc. all for free. I can throw this all away and re-implement it in a goroutine. Thanks for the feedback!

@stevesloka (Member, Author)

@davecheney do you have thoughts on how we could get the events copied to this new goroutine? I put the update in the dag update because we have access to the services/ingress objects via the watch+event handler.

However, moving the Ingress status updates outside means we'll need to hook into the Event Handler or get a copy of the service/ingress events from the k8s watcher.

So to do this in a new routine we'd need something like this (a rough sketch follows the list):

  1. Get a new stream of updates from the informers to get access to service/ingress changes from the cluster
  2. Have a new cache to store the set of ingress objects
  3. Watch for changes to services; when we find one, check for lb info as well as controller class info, and update Ingress objects from the cache
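A rough sketch of those three steps using client-go's shared informers (all names and wiring here are assumptions for illustration, written against the current networking/v1 API):

```go
package status

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func watchEnvoyService(client kubernetes.Interface, envoyNS, envoyName string, stop <-chan struct{}) {
	// Step 1: a new stream of updates from the informers.
	factory := informers.NewSharedInformerFactory(client, 0)

	// Step 2: the ingress informer's store serves as the cache of Ingress objects.
	ingLister := factory.Networking().V1().Ingresses().Lister()

	// Step 3: watch for service changes; on the Envoy service, copy its
	// load-balancer info onto the cached Ingress objects.
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			svc, ok := newObj.(*corev1.Service)
			if !ok || svc.Namespace != envoyNS || svc.Name != envoyName {
				return
			}
			ings, err := ingLister.List(labels.Everything())
			if err != nil {
				return
			}
			for range ings {
				// Filter by ingress/controller class here, then UpdateStatus
				// with svc.Status.LoadBalancer via the real client.
			}
		},
	})

	factory.Start(stop)
}
```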

@stevesloka (Member, Author)

@davecheney is going to pick this up, closing since he'll have a new PR.

@stevesloka closed this Jan 15, 2020
Successfully merging this pull request may close these issues:

Address not set on Ingress resource

4 participants