Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g.
@zcahana could you please sign the CLA?
@rshriram I applied to join the ibm-contributors group; waiting for it to get approved.
CLAs look good, thanks!
@zcahana thanks a lot for the comments in the code. See my review comments. Also, for the first pass, we can skip SSL/TLS. We don't have support for that in the manager yet (i.e., pulling it out of kube secrets, etc.).
With regard to the TODOs:
- Represent ingress rules in the Istio model, so that proxy/envoy isn't coupled with k8s objects.
  - This is a WIP. See PR [wip] Proxy cfg validation & route generation #68
- Support for path regex in ingress (under the limitations of what Envoy supports)
  - We can skip this. Envoy doesn't support it yet. A TODO should suffice.
- Construct envoy clusters using SDS + pod endpoints.
  - Yes to this. And CDS as well.
- Update ingress status with the proxy's "public" IP address (node IP?).
  - Depends on the provider, doesn't it? In a local cluster, it could be the node IP. In GCE, I have seen something like a load balancer.
- TLS termination.
  - Skip for now. Maybe take it in through the configs(?). Needs TLS config support in envoy config generation (pulling secrets out of k8s; long term, a different solution is needed, but that's long term).
- Testing
  - Yes. And in this case, we probably need an end-to-end integration test with the go version of the bookinfo example in istio/istio, on top of other unit tests, etc.
One more comment (a future TODO): refactor the config generation code so that it's reused between the internal proxy and ingress as well.
cmd/manager/main.go (Outdated)
```go
		"Discovery service DNS address")
	ingressCommand.PersistentFlags().IntVarP(&flags.proxy.AdminPort, "admin_port", "a", 5000,
		"Envoy admin port")
	ingressCommand.PersistentFlags().StringVarP(&flags.proxy.BinaryPath, "envoy_path", "b", "/envoy_esp",
```
Envoy's admin port is of little use to the end user. It is a read-only port; I don't even think this needs to be exposed.
Could you also add a TODO here saying that the hardcoded use of envoy must be changed to something more generic? e.g.,
`// TODO: Change to proxy_type: envoy|nginx|haproxy, and then make the rest (SDS, etc.) sub-flags`
model/BUILD (Outdated)
```diff
@@ -4,6 +4,7 @@ go_library(
     name = "go_default_library",
     srcs = [
         "controller.go",
+        "ingress.go",
         "registry.go",
```
Please remove ingress from the model. The model is a platform-agnostic representation of resources; Mesos may not even have ingress.
So we need something in the model (`RoutingRule`?) that the envoy ingress watcher can consume and that the kube ingresses can be converted to. Is there already any work in that direction?
No, there is no model object for the ingress. I think it is likely going to be a superset of the k8s ingress resource, perhaps even reusing `RoutingRule`. Can we create a custom struct for Ingress that is just a duplicate of the k8s ingress for now? I'm not happy with the dependency on k8s here.
+1. The model package is completely platform-independent.
model/controller.go (Outdated)
```diff
@@ -34,6 +36,9 @@ type Controller interface {
 	// for a service.
 	AppendInstanceHandler(f func(*ServiceInstance, Event)) error
 
+	// AppendIngressHandler notifies about changes to ingress resources
+	AppendIngressHandler(f func(*v1beta1.Ingress, Event)) error
+
```
Maybe add a type to the Config struct to detect whether it's an ingress or not. This way, ingress becomes kube-specific.
I'm not sure I understand how the `Config` is designed to be worked with. Any reference or explanation would help.
Config is untyped, since when the controller was designed we did not have any config resources defined. There are two options here: you can define a proto for Ingress and reuse AppendConfigHandler with a cast to your ingress proto-generated struct. I'm also OK with having a dedicated ingress handler for now, assuming we get the config ball rolling later.
Won't the ingress config be the same as, or a superset of, the ProxyConfig, with some additional stuff that literally configures Envoy? I.e., the meshConfig object in the code is the thing that needs to be generalized and surfaced, right?
The ingress config seems to be a subset of the model's `RouteRule` (that's what you mean by `ProxyConfig`, right?). So we can have the controller convert ingresses to such rules, and have the proxy watcher subscribe to these.
Yes. Please take a look at the Generate function in proxy/envoy/config.go. You can provide an empty array for upstreams and rules; it should give you an Envoy configuration with services as hostnames and / as the default prefix, along with SDS functionality. In the next iteration (or if you want to hack the utility functions), you could generate Envoy route entries under a virtual host with specific paths or prefixes.
Bottom line: there is opportunity for code reuse. config.go takes care of generating all the right listener blocks (with use_orig_dst, etc.).
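The cast-based bridging described above (a typed ingress callback wrapped around the untyped config handler) can be sketched with toy types. All names below (`Config`, `IngressSpec`, `ingressHandler`, `Event`) are illustrative stand-ins, not the manager's actual API:

```go
package main

import "fmt"

// Event is a hypothetical stand-in for the model's change-event type.
type Event int

// Config is untyped: Content carries an arbitrary resource payload,
// as described in the review discussion.
type Config struct {
	Kind    string
	Content interface{}
}

// IngressSpec is a placeholder for a proto-generated ingress struct.
type IngressSpec struct {
	Host string
	Path string
}

// ingressHandler wraps a typed ingress callback around an untyped
// config handler, casting Content back to the ingress struct and
// ignoring every other config kind.
func ingressHandler(f func(*IngressSpec, Event)) func(Config, Event) {
	return func(c Config, e Event) {
		if c.Kind != "ingress" {
			return // non-ingress config resources are not our concern
		}
		if spec, ok := c.Content.(*IngressSpec); ok {
			f(spec, e)
		}
	}
}

func main() {
	h := ingressHandler(func(s *IngressSpec, _ Event) {
		fmt.Println(s.Host + s.Path)
	})
	h(Config{Kind: "ingress", Content: &IngressSpec{Host: "example.com", Path: "/reviews"}}, 0)
}
```

The cast keeps the controller interface generic while still giving the envoy watcher a typed view of ingress resources.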
model/ingress.go (Outdated)
```go
type IngressRegistry interface {
	// ListIngresses lists all ingress rules currently defined.
	ListIngresses() []*v1beta1.Ingress
}
```
Please (re)move as you see fit.
If Ingress becomes a subset of Configs, this will go away. We can treat this as an adaptor interface from a config registry to an ingress registry.
proxy/envoy/ingress.go (Outdated)
```go
type ingressWatcher struct {
	agent   Agent
	mesh    *MeshConfig
	ingress model.IngressRegistry
```
One favor: could you add a placeholder or a TODO, so that #32 can be incorporated here easily as well?
Not sure I understand that comment...
I think the idea is to merge the ingress watcher and the sidecar (+config) watcher, since they are almost the same. We can do that later, to parallelize work for now.
I'd suggest postponing merging the two watchers until we get a clearer picture of how much logic they share. At least for now, they generate quite different envoy configurations. The sidecar watcher shouldn't care about or handle ingress-based configurations, whereas the ingress watcher shouldn't allow non-ingress traffic.
Sounds good to me. We should share code for config generation but the controllers themselves should be independent of one another. Let's see how it works out and extract proper module interfaces later.
```go
	time.Sleep(256 * time.Millisecond)
}

func (w *ingressWatcher) generateConfig() (*Config, error) {
```
Is there a way to de-duplicate this functionality by moving the code that generates Envoy configuration out of proxy/envoy/watcher.go(?) and reusing it in both places? After all, that code just expects a list of services, pods, and rules. In the absence of rules, it behaves just like a user-space kube-proxy (except operating on a per-pod basis). Whatever is missing will not be configured.
On second thought, let's add a TODO here as a reminder that we need to refactor this code. We could do that in a later PR.
Kind of similar to the IdentityProvider stuff in A8, in principle. What do you think?
Yes, there's some de-duplication and code reuse that can be done. Will be refactored in next iterations on this.
proxy/envoy/ingress.go (Outdated)
```go
}

// TODO: Validate no duplicate path/routes within this virtualhost
// (e.g., due to 2 default backends defined in 2 ingresses in different namespaces)
```
Silly question: is ingress common to all namespaces, or specific to a namespace? If the latter, we don't need this, right?
Ingresses are defined within namespaces, but are normally realized by a single namespace-neutral ingress controller, which serves incoming traffic to all namespaces. This TODO refers to the need to make sure there are no collisions between ingresses in different namespaces, which might yield an invalid envoy configuration (e.g., a virtualhost with 2 similar routes, or 2 prefix-similar routes, etc.).
proxy/envoy/ingress.go (Outdated)
```go
	listeners = append(listeners, Listener{
		Port:           80,
		BindToPort:     true,
		UseOriginalDst: true,
```
Set to false, please. This does not apply to ingress; it's only for internal Envoy instances.
proxy/envoy/ingress.go (Outdated)
```go
		Type:             "strict_dns", // TODO: Change to SDS
		LbType:           DefaultLbType,
		ConnectTimeoutMs: DefaultTimeoutMs,
		Hosts:            clusterHosts(namespace, backend),
```
Don't we have to set the cluster `"features": "http2"`, since we would have Envoy backends? If this is for Istio only, then by default (and for the first cut) I think we should set features to "http2", since all HTTP communication happens Envoy-to-Envoy. If not specified, Envoy falls back to HTTP/1.1, nullifying a lot of the benefits that we get out of HTTP/2.
I think this is Istio-only for now. Will set "http2".
I'll try to refactor current config to make it more re-usable. It handles things like enabling http2.
proxy/envoy/ingress.go (Outdated)
```go
		Listeners: listeners,
		ClusterManager: ClusterManager{
			Clusters: clusters,
		},
```
Please set `"local_cluster_name": "ingress"`. The local cluster name is needed for a variety of features in envoy (CDS, tracing, possibly RDS, etc.).
Reading here, it seems that `local_cluster_name` is only needed for enabling zone-aware routing. This is probably out of scope for this PR.
Looks like you are making good progress.
Regarding testing: I have set up a basic e2e infrastructure for in-cluster routing in test/integration. To test ingress, you probably want to deploy ingress proxies, then add an ingress resource, and send a request directly to the ingress proxies to check that it gets routed through both the ingress and sidecar proxies by checking the access log for the request ID.
cmd/manager/main.go (Outdated)
```go
		"Envoy binary location")
	ingressCommand.PersistentFlags().StringVarP(&flags.proxy.ConfigPath, "config_path", "e", "/etc/envoy",
		"Envoy config root location")
	rootCmd.AddCommand(ingressCommand)
```
If you do `proxyCmd.AddCommand(ingressCommand)` then you inherit all flags from the proxy command, and make ingress a subcommand.
Yes, but should ingress be a subcommand of proxy? In essence, these are two different modes of operation (in-cluster service sidecar proxy vs. edge ingress proxy).
Most (all?) of the commonalities stem from the fact that both commands manage envoy, so maybe both should be child commands of an "envoy" command?
Yeah, good point. I think we need a shared root, and then run "sidecar", "ingress", and "egress" as subcommands. Actually, the flag parameters need to be turned into ProxyMeshConfig in the proto, but that's for later.
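The agreed layout (a shared root with sidecar/ingress/egress subcommands inheriting common flags) can be sketched with nothing but the standard `flag` package. The real manager uses cobra, so the dispatch below is only an illustrative stand-in, and the flag names and defaults are assumptions:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// newProxyFlags builds the flag set shared by every proxy mode,
// mimicking flags inherited from a common root command.
func newProxyFlags(mode string) *flag.FlagSet {
	fs := flag.NewFlagSet(mode, flag.ContinueOnError)
	fs.String("envoy_path", "/usr/local/bin/envoy", "Envoy binary location")
	fs.String("config_path", "/etc/envoy", "Envoy config root location")
	return fs
}

// run dispatches to one of the proxy modes and parses the shared flags.
func run(args []string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("usage: proxy <sidecar|ingress|egress>")
	}
	mode := args[0]
	switch mode {
	case "sidecar", "ingress", "egress":
		fs := newProxyFlags(mode)
		if err := fs.Parse(args[1:]); err != nil {
			return "", err
		}
		return mode, nil
	}
	return "", fmt.Errorf("unknown mode %q", mode)
}

func main() {
	mode, err := run(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("starting", mode, "proxy")
}
```

The point of the design is that mode-specific flags become sub-flags of their subcommand while the envoy-related flags stay on the shared root.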
platform/kube/controller.go (Outdated)
```go
// AppendIngressHandler notifies about changes to ingress resources
func (c *Controller) AppendIngressHandler(f func(*v1beta1.Ingress, model.Event)) error {
	c.ingresses.handler.append(func(obj interface{}, event model.Event) error {
		f(obj.(*v1beta1.Ingress), event)
```
This is where you would need to convert k8s ingress to something platform-neutral.
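The conversion asked for here might look like the following sketch, using toy stand-ins for both the k8s ingress fields and the platform-neutral rule (neither `IngressRule` nor `RouteRule` below matches a real API, and the destination format is an assumption):

```go
package main

import "fmt"

// IngressRule is a toy stand-in for the k8s ingress fields we consume.
type IngressRule struct {
	Host, Path, ServiceName string
	Namespace               string
	ServicePort             int
}

// RouteRule is a toy stand-in for a platform-neutral model rule.
type RouteRule struct {
	Match       map[string]string
	Destination string
}

// convert maps one k8s ingress rule onto the platform-neutral rule,
// keeping all k8s-specific knowledge inside the kube controller.
func convert(r IngressRule) RouteRule {
	return RouteRule{
		Match: map[string]string{
			"authority":  r.Host,
			"uri_prefix": r.Path,
		},
		Destination: fmt.Sprintf("%s.%s.svc.cluster.local:%d",
			r.ServiceName, r.Namespace, r.ServicePort),
	}
}

func main() {
	rr := convert(IngressRule{
		Host: "bookinfo.example.com", Path: "/reviews",
		ServiceName: "reviews", Namespace: "default", ServicePort: 9080,
	})
	fmt.Println(rr.Destination)
}
```

The envoy ingress watcher would then consume only `RouteRule`, never `v1beta1.Ingress`.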
proxy/envoy/ingress.go (Outdated)
```go
	return []Host{
		{
			URL: fmt.Sprintf("tcp://%s.%s.svc.cluster.local:%d",
				backend.ServiceName, namespace, backend.ServicePort.IntValue()),
```
Can we use SDS here? The service key for the cluster includes the port name, so you would need to fetch model.Service by hostname first, then find the port. That would work for both named and int ports.
proxy/envoy/ingress.go (Outdated)
```go
	clustersMap := make(map[string]Cluster)

	// Function to add a rule for a particular host, path and backend.
	// This will later be invoked for each ingress rule, as well as ingress default backend
```
We need to provide a service for a 404 backend. The service can be implemented by Envoy or default-http-backend from kubernetes.
This functionality is already exposed by k8s ingresses, which allow the user to specify a default backend (service) for requests that don't match any ingress rule. This is implemented as an Envoy VirtualHost with a wildcard domain ("*"). If the user doesn't specify a default backend in any ingress, then we don't produce such a wildcard VirtualHost, and fall back to Envoy's own 404.
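The fallback behaviour described here, emitting a wildcard virtual host only when a default backend exists, can be sketched as follows. The types and the example hosts are simplified stand-ins for the manager's Envoy config structs, not its real code:

```go
package main

import "fmt"

// VirtualHost is a simplified stand-in for Envoy's virtual_hosts entry.
type VirtualHost struct {
	Name    string
	Domains []string
	Cluster string
}

// buildVirtualHosts appends a catch-all "*" virtual host only when an
// ingress default backend is defined; otherwise requests that match no
// rule fall through to Envoy's built-in 404.
func buildVirtualHosts(defaultBackend string) []VirtualHost {
	vhosts := []VirtualHost{
		{Name: "bookinfo", Domains: []string{"bookinfo.example.com"}, Cluster: "productpage"},
	}
	if defaultBackend != "" {
		vhosts = append(vhosts, VirtualHost{
			Name:    "default",
			Domains: []string{"*"},
			Cluster: defaultBackend,
		})
	}
	return vhosts
}

func main() {
	fmt.Println(len(buildVirtualHosts("default-http-backend"))) // 2
	fmt.Println(len(buildVirtualHosts("")))                     // 1
}
```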
proxy/envoy/ingress.go (Outdated)
```go
	}

	// Iterate on all ingresses, and create corresponding rule
	for _, ingress := range w.ingress.ListIngresses() {
```
Please make the order of rules or ingresses canonical. Otherwise, the watcher will keep reloading Envoy due to some permutation in the config file.
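A minimal sketch of such a canonical ordering (the `ingressRef` type is a hypothetical stand-in): sort by namespace, then name, before generating config, so that the emitted file is byte-identical across syncs and Envoy is not reloaded on mere permutations.

```go
package main

import (
	"fmt"
	"sort"
)

// ingressRef is a stand-in identifying an ingress resource.
type ingressRef struct{ Namespace, Name string }

// canonicalOrder sorts ingresses by namespace, then name, giving a
// deterministic iteration order for config generation.
func canonicalOrder(ingresses []ingressRef) []ingressRef {
	sort.Slice(ingresses, func(i, j int) bool {
		a, b := ingresses[i], ingresses[j]
		if a.Namespace != b.Namespace {
			return a.Namespace < b.Namespace
		}
		return a.Name < b.Name
	})
	return ingresses
}

func main() {
	in := []ingressRef{{"prod", "web"}, {"default", "api"}, {"default", "admin"}}
	for _, r := range canonicalOrder(in) {
		fmt.Println(r.Namespace + "/" + r.Name)
	}
}
```

The same idea applies to any map iteration feeding the config writer, since Go map order is randomized.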
I've rewritten the ingress controller implementation such that Ingress API objects are converted to Istio model rules (currently the same). Note that due to the major changes that already went to master, I've rebased my branch and restructured the commits so that they're more reviewable. The commits from the first round are thus gone :).
Jenkins' presubmit build quickly fails with the following error (link).
Not sure what's causing this... any chance it's some recent changes to the istio-testing repo? @sebastienvas
Please rebase your PR.
Jenkins job manager/presubmit started
Jenkins job manager/presubmit failed
Jenkins job manager/presubmit started
Jenkins job manager/presubmit passed
Thanks @sebastienvas, the build now passes.
LGTM for first pass. Can you add a test script, please? Don't bother with driver.go. If you could add this ingress spec to istio/istio#8 and verify that basic calls to bookinfo work (no need for automation; there are some bugs in the automation framework), then I am fine with this PR.
```go
		// to the proxy configuration generator. This can be improved by using
		// a dedicated model object for IngressRule (instead of reusing RouteRule),
		// which exposes the necessary target port field within the "Route" field.
		Tags: map[string]string{
```
I don't think we do that anymore.
I mean referring to ports in the route rule.
This is necessary for envoy routes generated from ingresses: we need to route traffic to the cluster corresponding to the service:port defined by the user in the ingress.
Actually, looking further into this, cluster names include the port name, and not the port number.
So, what happens when there's no port name specified?
That can only happen if there is only one port, and the upstream cluster becomes just the service name ("a.svc....") or service-tag combination ("a.svc...::env=prod")
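The key shapes mentioned in this thread ("a.svc...", "a.svc...::env=prod", plus a port-name variant) can be illustrated with a hypothetical helper. The exact separators are an assumption based on the discussion, not the manager's real naming code:

```go
package main

import "fmt"

// clusterKey sketches a cluster naming scheme: bare hostname when the
// service has a single unnamed port, an appended port name otherwise,
// and "::"-separated tags for a service-tag combination. Hypothetical,
// for illustration only.
func clusterKey(hostname, portName, tags string) string {
	key := hostname
	if portName != "" {
		key += ":" + portName
	}
	if tags != "" {
		key += "::" + tags
	}
	return key
}

func main() {
	fmt.Println(clusterKey("a.default.svc.cluster.local", "", ""))
	fmt.Println(clusterKey("a.default.svc.cluster.local", "http", "env=prod"))
}
```

Whatever the final format, the point raised above stands: route rules would need an explicit (named) port for ingress, which has implications for the route rule model.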
Can you add a TODO item here? This needs to be in the next iteration of routing rules, we've been assuming rules apply per port, but ingress needs an explicit port. Moreover, this is a port name, so this has some implication for the route rule model.
This seems to be captured in #151.
Feel free to merge when you deem fit, as it doesn't conflict with existing functionality. I would rather have the code merged sooner instead of rebasing again and again.
There's at least one bugfix on the way, and potentially another one. I'm currently running/debugging this locally.
LGTM
Feel free to merge when you feel we can have an example ingress working. We can revisit the route rule model which seems to be causing issues for you.
```diff
@@ -298,6 +399,22 @@ func (c *Controller) List(kind, namespace string) (map[model.Key]proto.Message,
 	return out, errs
 }
 
+func (c *Controller) listIngresses(kind, namespace string) (map[model.Key]proto.Message, error) {
```
nit: not needed
What is it that's not needed here? The `listIngresses` method gets called from `Controller.List(kind, namespace)` according to the actual kind. In practice, this is being used by Envoy's ingress watcher.
Oops my bad, I was referring to "kind" parameter.
Well, you're right that the "kind" parameter isn't really needed here. I left it for uniformity with the other listX methods.
With the latest round of commits, I've verified that basic ingress functionality works (i.e., the ingress controller forwards ingress traffic to the correct backend service according to the defined routes). In practice, the backend service's reverse proxy is still blocking the request from hitting the actual service (due to #150), so this is still not usable end-to-end, but it's probably solid enough for a first pass, so I'm merging.
This is a first pass on an Envoy-based ingress controller (addressing #6).
I'm PR'ing this while still WIP to start gathering feedback and set this in the right direction.
What we have so far:
- An ingress proxy mode (`manager ingress` command).
- Extended `platform/kube/Controller` to sync ingress resources.
What's still missing: