This repository has been archived by the owner on Jun 14, 2018. It is now read-only.

Envoy-based ingress controller #71

Merged 8 commits into istio:master on Feb 16, 2017

Conversation

@zcahana (Member) commented Feb 2, 2017

This is a first pass at an Envoy-based ingress controller (addressing #6).
I'm opening this PR while it's still WIP to start gathering feedback and set it in the right direction.

What we have so far:

  • Ingress controller running in the context of the manager (via the manager ingress command).
  • Extended platform/kube/Controller to sync ingress resources.
  • Added an ingress watcher that reconfigures Envoy based on the given ingresses.
  • Envoy clusters made up of a single upstream (the service's DNS name).

What's still missing:

  • Represent ingress rules in the Istio model, so that proxy/envoy isn't coupled to k8s objects.
  • Support for path regexes in ingress (within the limits of what Envoy supports).
  • Construct Envoy clusters using SDS + pod endpoints.
  • Update ingress status with the proxy's "public" IP address (node IP?).
  • TLS termination.
  • Testing.
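Roughly, the ingress watcher flow described above can be sketched as follows. This is an illustrative stand-in only; the types and function names here are hypothetical, not the actual manager API:

```go
package main

import "fmt"

// ingressRule is a hypothetical, simplified view of a k8s ingress rule.
type ingressRule struct {
	Host, Path, Service string
}

// generateConfig renders a toy "config" from the current set of rules;
// the real watcher would build full Envoy listener/cluster/route blocks.
func generateConfig(rules []ingressRule) string {
	out := ""
	for _, r := range rules {
		out += fmt.Sprintf("route %s%s -> cluster %s\n", r.Host, r.Path, r.Service)
	}
	return out
}

func main() {
	// On each ingress add/update/delete event, the watcher regenerates
	// the config and asks the agent to reload Envoy if it changed.
	rules := []ingressRule{{Host: "*", Path: "/productpage", Service: "productpage"}}
	fmt.Print(generateConfig(rules))
}
```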

@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please let us know the company's name.

@rshriram (Member) commented Feb 2, 2017

@zcahana could you please sign the CLA?

@rshriram rshriram changed the title Envoy-based ingress controller [WIP] Envoy-based ingress controller Feb 2, 2017
@zcahana (Member Author) commented Feb 2, 2017

@rshriram I applied to join the ibm-contributors group, waiting for it to get approved.

@googlebot

CLAs look good, thanks!

@rshriram (Member) left a comment

@zcahana thanks a lot for the comments in the code. See my review comments. Also, for the first pass we can skip SSL/TLS; we don't have support for that in the manager yet (i.e., pulling it out of kube secrets, etc.).

With regard to the TODOs

  • Represent ingress rules in the Istio model, so that proxy/envoy isn't coupled to k8s objects.
  • Support for path regexes in ingress (within the limits of what Envoy supports)
    • We can skip this; Envoy doesn't support it yet. A TODO should suffice.
  • Construct Envoy clusters using SDS + pod endpoints.
    • Yes to this, and CDS as well.
  • Update ingress status with the proxy's "public" IP address (node IP?).
    • Depends on the provider, doesn't it? In a local cluster it could be the node IP; in GCE, I have seen something like a load balancer.
  • TLS termination.
    • Skip for now. Maybe take it in the configs(?). It needs TLS config support in Envoy config generation (pulling secrets out of k8s; long term, a different solution is needed, but that's long term).
  • Testing
    • Yes. In this case, we probably need an end-to-end integration test with the Go version of the bookinfo example in istio/istio, on top of the other unit tests, etc.

One more comment (a future TODO): refactor the config generation code so that it's reused by both the internal proxy and ingress.

"Discovery service DNS address")
ingressCommand.PersistentFlags().IntVarP(&flags.proxy.AdminPort, "admin_port", "a", 5000,
"Envoy admin port")
ingressCommand.PersistentFlags().StringVarP(&flags.proxy.BinaryPath, "envoy_path", "b", "/envoy_esp",
Member:

Envoy's admin port is of little use to the end user; it is a read-only port. I don't even think it needs to be exposed.
Could you also add a TODO here saying that the hardcoded use of Envoy must be changed to something more generic? e.g.,
`// TODO: change to proxy_type: envoy|nginx|haproxy, and then make the rest (SDS, etc.) sub-flags`
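The suggested generic flag could look like the sketch below. This uses the standard library flag package for brevity (the manager itself uses cobra), and the flag and function names are illustrative, not the actual code:

```go
package main

import (
	"flag"
	"fmt"
)

// validateProxyType sketches the generic flag suggested in review:
// "envoy" is the only implementation today; nginx/haproxy are placeholders.
func validateProxyType(t string) error {
	switch t {
	case "envoy":
		return nil
	case "nginx", "haproxy":
		return fmt.Errorf("proxy type %q not yet implemented", t)
	default:
		return fmt.Errorf("unknown proxy type %q", t)
	}
}

func main() {
	// TODO (from review): proxy_type: envoy|nginx|haproxy, with SDS etc. as sub-flags.
	proxyType := flag.String("proxy_type", "envoy", "proxy implementation (envoy|nginx|haproxy)")
	flag.Parse()
	if err := validateProxyType(*proxyType); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("starting", *proxyType, "ingress proxy")
}
```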

model/BUILD Outdated
@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"controller.go",
"ingress.go",
"registry.go",
Member:

Please remove ingress from the model. The model is a platform-agnostic representation of resources; Mesos may not even have ingress.

Member Author:

So we need something in the model (RoutingRule?) that the Envoy ingress watcher can consume, and that kube ingresses are converted to. Is there already any work in that direction?

Contributor:

No, there is no model object for the ingress. I think it is likely going to be a superset of the k8s ingress resource, perhaps even reusing RoutingRule. Can we create a custom struct for Ingress that is just a duplicate of the k8s ingress for now? I'm not happy with the dependency on k8s here.

Member:

+1. The model package is completely platform independent.

@@ -34,6 +36,9 @@ type Controller interface {
// for a service.
AppendInstanceHandler(f func(*ServiceInstance, Event)) error

// AppendIngressHandler notifies about changes to ingress resources
AppendIngressHandler(f func(*v1beta1.Ingress, Event)) error

Member:

Maybe add a type to the Config struct to detect whether it's an ingress or not. That way, ingress becomes kube-specific.

Member Author:

I'm not sure I understand how Config is designed to be used. Any reference or explanation would help.

Contributor:

Config is untyped, since the controller was designed before we had any config resources defined. There are two options here: you can define a proto for Ingress and reuse AppendConfigHandler with a cast to your proto-generated ingress struct. I'm also OK with having a dedicated ingress handler for now, assuming we get the config ball rolling later.
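A minimal sketch of the first option: registering a typed ingress handler on top of an untyped config handler via a cast. All types here (Event, IngressRule, configHandlers) are hypothetical stand-ins for model.Event and a proto-generated ingress struct, not the actual manager code:

```go
package main

import "fmt"

// Event mirrors model.Event in minimal form (assumed).
type Event int

// IngressRule is a hypothetical proto-generated ingress config struct.
type IngressRule struct{ Host, Prefix string }

// configHandlers holds untyped callbacks, as in the existing controller.
type configHandlers struct {
	fns []func(obj interface{}, ev Event)
}

// appendConfigHandler is the untyped registration point.
func (c *configHandlers) appendConfigHandler(f func(interface{}, Event)) {
	c.fns = append(c.fns, f)
}

// appendIngressHandler wraps the untyped handler with a cast, so ingress
// stays out of the platform-neutral model.
func (c *configHandlers) appendIngressHandler(f func(*IngressRule, Event)) {
	c.appendConfigHandler(func(obj interface{}, ev Event) {
		if rule, ok := obj.(*IngressRule); ok {
			f(rule, ev)
		}
	})
}

// notify fans an event out to all registered handlers.
func (c *configHandlers) notify(obj interface{}, ev Event) {
	for _, f := range c.fns {
		f(obj, ev)
	}
}

func main() {
	var c configHandlers
	c.appendIngressHandler(func(r *IngressRule, _ Event) {
		fmt.Println("ingress rule for", r.Host)
	})
	c.notify(&IngressRule{Host: "example.com", Prefix: "/"}, 0)
	c.notify("not an ingress", 0) // silently ignored by the typed wrapper
}
```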

Member:

Won't the ingress config be the same as, or a superset of, the ProxyConfig, with some additional stuff that literally configures Envoy? i.e., the meshConfig object in the code is the thing that needs to be generalized and surfaced, right?

Member Author:

The ingress config seems to be a subset of the model's RouteRule (that's what you mean by ProxyConfig, right?). So we can have the controller convert ingresses to such rules, and have the proxy watcher subscribe to them.

Member:

Yes. Please take a look at the Generate function in proxy/envoy/config.go. You can provide an empty array for upstreams and rules. It should give you an Envoy configuration with services as hostnames and / as the default prefix, along with SDS functionality. In the next iteration (or if you want to hack the utility functions), you could generate Envoy route entries under a virtual host with specific paths or prefixes.

Bottom line: there is an opportunity for code reuse. config.go takes care of generating all the right listener blocks (with use_orig_dst, etc.).

model/ingress.go Outdated
type IngressRegistry interface {
// ListIngresses returns all ingress rules currently defined.
ListIngresses() []*v1beta1.Ingress
}
Member:

Please (re)move as you see fit.

Contributor:

If Ingress becomes a subset of Configs, this will go away. We can treat this as an adapter interface from a config registry to an ingress registry.

type ingressWatcher struct {
agent Agent
mesh *MeshConfig
ingress model.IngressRegistry
Member:

One favor: could you add a placeholder or a TODO, so that #32 can be incorporated here easily as well?

Member Author:

Not sure I understand that comment...

Contributor:

I think the idea is to merge the ingress watcher and the sidecar (+config) watcher, since they are almost the same. We can do that later, to parallelize the work for now.

Member Author:

I'd suggest postponing the merge of the two watchers until we get a clearer picture of how much logic they share. At least for now, they generate quite different Envoy configurations: the sidecar watcher shouldn't handle ingress-based configuration, whereas the ingress watcher shouldn't allow non-ingress traffic.

Contributor:

Sounds good to me. We should share code for config generation but the controllers themselves should be independent of one another. Let's see how it works out and extract proper module interfaces later.

time.Sleep(256 * time.Millisecond)
}

func (w *ingressWatcher) generateConfig() (*Config, error) {
Member:

Is there a way to de-duplicate this functionality by moving the Envoy config generation code out of proxy/envoy/watcher.go(?) and reusing it in both places? After all, that code just expects a list of services, pods, and rules; in the absence of rules, it behaves just like a user-space kube proxy (except operating on a per-pod basis). Whatever is missing will simply not be configured.

Member:

On second thought, let's add a TODO here as a reminder that we need to refactor this code. We could do that in a later PR.
Kind of similar to the IdentityProvider stuff in A8, in principle. What do you think?

Member Author:

Yes, there's some de-duplication and code reuse that can be done. This will be refactored in later iterations.

}

// TODO: Validate no duplicate path/routes within this virtualhost
// (e.g., due to 2 default backends defined in 2 ingresses in different namespaces)
Member:

Silly question: is ingress common to all namespaces, or specific to a namespace? If the latter, we don't need this, right?

Member Author:

Ingresses are defined within namespaces, but are normally realized by a single namespace-neutral ingress controller, which serves incoming traffic for all namespaces. This TODO refers to the need to ensure there are no collisions between ingresses in different namespaces, which might yield an invalid Envoy configuration (e.g., a virtual host with two identical routes, or two prefix-overlapping routes).
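The duplicate-route check that the TODO calls for might look like the following sketch, over hypothetical minimal route entries (the real check would run over the generated Envoy virtual hosts):

```go
package main

import "fmt"

// route is a minimal stand-in for an Envoy route entry derived from an
// ingress rule (hypothetical; the real types live in proxy/envoy).
type route struct {
	namespace string
	host      string
	path      string
}

// findCollisions reports host/path pairs claimed by ingresses in more than
// one namespace, which would yield an invalid virtual host with duplicate routes.
func findCollisions(routes []route) []string {
	seen := map[string]string{} // host+path -> first claiming namespace
	var dups []string
	for _, r := range routes {
		key := r.host + r.path
		if ns, ok := seen[key]; ok && ns != r.namespace {
			dups = append(dups, fmt.Sprintf("%s%s (%s vs %s)", r.host, r.path, ns, r.namespace))
		} else {
			seen[key] = r.namespace
		}
	}
	return dups
}

func main() {
	routes := []route{
		{"default", "*", "/"},
		{"prod", "*", "/"}, // a second default backend in another namespace
	}
	fmt.Println(findCollisions(routes))
}
```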

listeners = append(listeners, Listener{
Port: 80,
BindToPort: true,
UseOriginalDst: true,
Member:

Please set it to false. This does not apply to ingress; it's only for internal Envoy instances.

Type: "strict_dns", // TODO: Change to SDS
LbType: DefaultLbType,
ConnectTimeoutMs: DefaultTimeoutMs,
Hosts: clusterHosts(namespace, backend),
Member:

Don't we have to set the cluster "features": "http2", since we would have Envoy backends? If this is for Istio only, then by default (and for the first cut) I think we should set features to "http2", since all HTTP communication happens Envoy-to-Envoy. If not specified, Envoy falls back to HTTP/1.1, nullifying a lot of the benefits we get from HTTP/2.

Member Author:

I think this is Istio-only for now. Will set "http2".

Contributor:

I'll try to refactor the current config generation to make it more reusable. It handles things like enabling http2.

Listeners: listeners,
ClusterManager: ClusterManager{
Clusters: clusters,
},
Member:

Please set "local_cluster_name": "ingress". The local cluster name is needed for a variety of features in Envoy (CDS, tracing, possibly RDS, etc.).

Member Author:

Reading here, it seems that local_cluster_name is only needed to enable zone-aware routing. That is probably out of scope for this PR.

@kyessenov (Contributor) left a comment

Looks like you are making good progress.
Regarding testing: I have set up a basic e2e infrastructure for in-cluster routing in test/integration. To test ingress, you probably want to deploy ingress proxies, add an ingress resource, and send a request directly to the ingress proxies, checking that it gets routed through both the ingress and sidecar proxies by looking for the request ID in the access logs.

"Envoy binary location")
ingressCommand.PersistentFlags().StringVarP(&flags.proxy.ConfigPath, "config_path", "e", "/etc/envoy",
"Envoy config root location")
rootCmd.AddCommand(ingressCommand)
Contributor:

If you do proxyCmd.AddCommand(ingressCommand), then you inherit all flags from the proxy command and make ingress a subcommand.

Member Author:

Yes, but should ingress be a subcommand of proxy? In essence, these are two different modes of operation (in-cluster service sidecar proxy vs. edge ingress proxy).

Most (all?) of the commonalities stem from the fact that both commands manage Envoy, so maybe both should be child commands of an "envoy" command?

Contributor:

Yeah, good point. I think we need a shared root, and then run "sidecar", "ingress", and "egress" as subcommands. Actually, the flag parameters need to be turned into a ProxyMeshConfig in the proto, but that's for later.




// AppendIngressHandler notifies about changes to ingress resources
func (c *Controller) AppendIngressHandler(f func(*v1beta1.Ingress, model.Event)) error {
c.ingresses.handler.append(func(obj interface{}, event model.Event) error {
f(obj.(*v1beta1.Ingress), event)
Contributor:

This is where you would need to convert k8s ingress to something platform-neutral.


return []Host{
{
URL: fmt.Sprintf("tcp://%s.%s.svc.cluster.local:%d",
backend.ServiceName, namespace, backend.ServicePort.IntValue()),
Contributor:

Can we use SDS here? The service key for the cluster includes the port name, so you would need to fetch the model.Service by hostname first, then find the port. That would work for both named and integer ports.
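For reference, the strict_dns host URL above follows the kube DNS naming scheme; a throwaway helper shows its shape (illustrative only; the review suggests replacing this with an SDS lookup keyed by service and port name):

```go
package main

import "fmt"

// clusterHost builds the upstream address for a strict_dns cluster from a
// service name, namespace, and port, mirroring the kube DNS naming scheme.
func clusterHost(service, namespace string, port int) string {
	return fmt.Sprintf("tcp://%s.%s.svc.cluster.local:%d", service, namespace, port)
}

func main() {
	fmt.Println(clusterHost("reviews", "default", 9080))
}
```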

clustersMap := make(map[string]Cluster)

// Function to add a rule for a particular host, path and backend.
// This will later be invoked for each ingress rule, as well as ingress default backend
Contributor:

We need to provide a service for a 404 backend. The service can be implemented by Envoy or default-http-backend from kubernetes.

Member Author:

This functionality is already exposed by k8s Ingresses, which let the user specify a default backend (service) for requests that don't match any ingress rule. This is implemented as an Envoy VirtualHost with a wildcard domain ("*"). If the user doesn't specify a default backend in their ingresses, then we don't produce such a wildcard VirtualHost, and fall back to Envoy's own 404.


}

// Iterate on all ingresses, and create corresponding rule
for _, ingress := range w.ingress.ListIngresses() {
Contributor:

Please make the order of rules and ingresses canonical. Otherwise, the watcher will keep reloading Envoy due to mere permutations in the config file.
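One way to make the order canonical is to sort by namespace and name before generating config, as sketched below (ingressMeta is a hypothetical stand-in for the watched Ingress metadata):

```go
package main

import (
	"fmt"
	"sort"
)

// ingressMeta is a minimal stand-in for the metadata of a watched Ingress.
type ingressMeta struct {
	Namespace string
	Name      string
}

// canonicalOrder sorts ingresses by namespace, then name, so that repeated
// config generation yields byte-identical output and Envoy is not reloaded
// for mere permutations of the same rules.
func canonicalOrder(ingresses []ingressMeta) {
	sort.Slice(ingresses, func(i, j int) bool {
		if ingresses[i].Namespace != ingresses[j].Namespace {
			return ingresses[i].Namespace < ingresses[j].Namespace
		}
		return ingresses[i].Name < ingresses[j].Name
	})
}

func main() {
	ing := []ingressMeta{{"prod", "web"}, {"default", "api"}, {"default", "admin"}}
	canonicalOrder(ing)
	fmt.Println(ing)
}
```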

@kyessenov (Contributor) commented:

More comments on the TODOs:

  • Construct Envoy clusters using SDS + pod endpoints.
    Yes, we should avoid using kube DNS since we have SDS. Don't worry about CDS for now; it's easy to refactor for that later.
  • Update ingress status with the proxy's "public" IP address (node IP?).
    The GCE LB ingress does this because it manages firewall rules and IP allocation. The NGINX ingress does not, AFAIK; you have to create a separate service of type LoadBalancer pointing to the NGINX deployment. Let's do what the NGINX ingress does.
  • TLS termination.
    Don't worry about it. We will do one big swoop to enable TLS termination.
  • Testing.
    Take a look at test/integration/driver.go. I think it has all the pieces to reuse.

@zcahana (Member Author) commented Feb 13, 2017

I've rewritten the ingress controller implementation so that Ingress API objects are converted to Istio model rules (currently the same config.RouteRule, but with a different kind, IngressRule), which are then transformed into Envoy configuration. There's still only a little reuse of existing Envoy configuration code; this can be refactored later when the package stabilizes and the common parts become clearer.

Note that due to the major changes that already went into master, I've rebased my branch and restructured the commits so that they're more reviewable. The commits from the first round are thus gone :).

@zcahana zcahana force-pushed the ingress_controller branch 2 times, most recently from 8175098 to 42ffc8b Compare February 14, 2017 10:01
@zcahana (Member Author) commented Feb 14, 2017

Jenkins' presubmit build quickly fails with the following error (link):

+ go get ./...
/var/jenkins_home/jobs/manager/jobs/presubmit/workspace/src/main@tmp/durable-3d3f8166/script.sh: 2: /var/jenkins_home/jobs/manager/jobs/presubmit/workspace/src/main@tmp/durable-3d3f8166/script.sh: go: not found
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE

Not sure what's causing this... any chance it's some recent changes to istio-testing repo? @sebastienvas

@sebastienvas (Contributor) commented:

Please rebase your PR.

@istio-testing: Jenkins job manager/presubmit started

@istio-testing: Jenkins job manager/presubmit failed

@istio-testing: Jenkins job manager/presubmit started

@istio-testing: Jenkins job manager/presubmit passed

@zcahana (Member Author) commented Feb 15, 2017

Thanks @sebastienvas, the build now passes.

@rshriram (Member) left a comment

LGTM for the first pass. Can you add a test script please? Don't bother with driver.go. If you could add this ingress spec to istio/istio#8 and verify that basic calls to bookinfo work (no need for automation; there are some bugs in the automation framework), then I am fine with this PR.

// to the proxy configuration generator. This can be improved by using
// a dedicated model object for IngressRule (instead of reusing RouteRule),
// which exposes the necessary target port field within the "Route" field.
Tags: map[string]string{
Member:

I don't think we do that anymore.

Member:

I mean referring to ports in the route rule.

Member Author:

This is necessary for the Envoy routes generated from ingresses: we need to route traffic to the cluster corresponding to the service:port defined by the user in the ingress.

Member Author:

Actually, looking further into this, cluster names include the port name, not the port number.
So what happens when there's no port name specified?

Contributor:

That can only happen if there is only one port, in which case the upstream cluster is just the service name ("a.svc....") or a service-tag combination ("a.svc...::env=prod").

Contributor:

Can you add a TODO item here? This needs to be in the next iteration of the routing rules: we've been assuming rules apply per port, but ingress needs an explicit port. Moreover, this is a port name, so it has implications for the route rule model.

Member Author:

This seems to be captured in #151.
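The cluster-naming scheme discussed in this thread can be sketched as follows (hypothetical helper; the real naming lives in the manager's Envoy config generation):

```go
package main

import "fmt"

// clusterKey sketches the naming scheme discussed above: the upstream
// cluster key is the service hostname, optionally suffixed with the port
// name (omitted when the service has a single unnamed port).
func clusterKey(hostname, portName string) string {
	if portName == "" {
		return hostname
	}
	return hostname + ":" + portName
}

func main() {
	fmt.Println(clusterKey("reviews.default.svc.cluster.local", "http"))
	fmt.Println(clusterKey("reviews.default.svc.cluster.local", ""))
}
```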

@rshriram (Member) left a comment

Feel free to merge when you deem it fit, as it doesn't conflict with existing functionality. I would rather have the code merged sooner than rebase it again and again.

@zcahana (Member Author) commented Feb 15, 2017

There's at least one bugfix on the way, and potentially another one. I'm currently running/debugging this locally.

@istio-testing: Jenkins job manager/presubmit passed

@kyessenov (Contributor) left a comment

LGTM. Feel free to merge when you feel we can have an example ingress working. We can revisit the route rule model, which seems to be causing issues for you.

@@ -298,6 +399,22 @@ func (c *Controller) List(kind, namespace string) (map[model.Key]proto.Message,
return out, errs
}

func (c *Controller) listIngresses(kind, namespace string) (map[model.Key]proto.Message, error) {
Contributor:

nit: not needed

Member Author:

What is it that's not needed here? The listIngresses method gets called from Controller.List(kind, namespace) according to the actual kind. In practice, this is used by Envoy's ingress watcher.

Contributor:

Oops, my bad; I was referring to the "kind" parameter.

Member Author:

Well, you're right that the "kind" parameter isn't really needed here. I left it in for uniformity with the other listX methods.



@zcahana zcahana changed the title [WIP] Envoy-based ingress controller Envoy-based ingress controller Feb 16, 2017
@zcahana (Member Author) commented Feb 16, 2017

With the latest round of commits, I've verified that basic ingress functionality works (i.e., the ingress controller forwards ingress traffic to the correct backend service according to the defined routes). In practice, the backend service's reverse proxy still blocks the request from hitting the actual service (due to #150), so this is not yet usable end-to-end, but it's probably solid enough for a first pass --> merging.

@zcahana zcahana merged commit 194066a into istio:master Feb 16, 2017
@zcahana zcahana deleted the ingress_controller branch February 16, 2017 15:07
@elevran commented Feb 23, 2017

Together with #177, this fixes #6.
