feat: Filter on istio gateways based on ingress annotations #1137
Conversation
Welcome @pastequo! It looks like this is your first PR to knative-sandbox/net-istio 🎉
Hi @pastequo. Thanks for your PR. I'm waiting for a knative-sandbox member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
hey @pastequo, thanks for this contribution!
I'm not sure I'm following exactly, could you elaborate a bit more on the difference between intersection and non-intersection? In both cases, it seems like it could only use whatever was defined in the configmap. You wouldn't want to have it reconciled by a gateway that knative didn't know about, right? Sorry if I'm misinterpreting things.
I may be misunderstanding, but my instinct would be to fall back to the current default behavior of applying every gateway that is listed in the configmap. That way it wouldn't really change anything for people not using this annotation/use case.
Maybe we could update the status on the Kingress with a condition explaining this? Though idk if this is too ingress-specific to make its way up into Knative core stuff?

Also, one of the things I was thinking about with this feature is that, if you use different gateways, I think you also need to consider what domains are being used. I think in @rblaine95's case, they wanted to use the same domain for both gateways, so it is less of a problem. But could there be a scenario where DNS is set up to map one domain to gateway A and another to gateway B? Then you'd need to make sure the Ksvc with the gateway A annotation has the right labels so that the domain it is given is correct. In this case, I'm thinking this might be a hard error to discover. Idk if there would be a way to make net-istio warn you that the domain on a ksvc isn't served by the gateway it is tied to, since that info kinda lies outside of Knative.
Thanks for your feedback!

Check user annotation

Non-intersection vs intersection was not very clear, let me rephrase it: the TLS part of the code required gateways to be defined in the `config-istio` ConfigMap.

Invalid value management

Let's enumerate the different possibilities and merge the "logging question" into it. A value in the annotations can be invalid for 2 reasons: it is not well-formatted, or it references a gateway that is not present in the ConfigMap.

I'm assuming the code should respond the same way if a value is invalid, whatever the reason. From my understanding, when reconciling an ingress, the code merges the "gateways" with the "local-gateways" and puts the result in the virtual service named …

All values are valid (well-formatted & present in …
I pushed a new commit that handles errors differently, in a more protective way. Regarding my previous post:

So the actual error is surfaced by the logs (the status of the ingress resource being generic).
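The "protective" behavior described above (intersect annotation values with the configured gateways, surface an error for unknown ones) can be sketched as follows. This is a hedged illustration only: `filterGateways` and its signature are invented for this sketch and are not the PR's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// filterGateways returns the subset of configured gateways named in the
// comma-separated annotation value. An empty annotation keeps every
// configured gateway (the current default behavior). A value naming a
// gateway absent from config-istio yields an error instead of being
// silently dropped, mirroring the "protective" handling discussed above.
func filterGateways(configured []string, annotation string) ([]string, error) {
	if annotation == "" {
		return configured, nil
	}
	allowed := make(map[string]bool, len(configured))
	for _, g := range configured {
		allowed[g] = true
	}
	var out []string
	for _, g := range strings.Split(annotation, ",") {
		g = strings.TrimSpace(g)
		if !allowed[g] {
			return nil, fmt.Errorf("gateway %q is not defined in config-istio", g)
		}
		out = append(out, g)
	}
	return out, nil
}

func main() {
	gws, err := filterGateways(
		[]string{"istio-system/public", "istio-system/private"},
		"istio-system/private",
	)
	fmt.Println(gws, err)
}
```

The key design point debated in the thread is the error path: the annotation can only narrow the operator-defined list, never widen it.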
/ok-to-test
Codecov Report

Attention: …

Additional details and impacted files

```diff
@@           Coverage Diff            @@
##             main    #1137      +/- ##
========================================
+ Coverage   81.66%   81.79%   +0.13%
========================================
  Files          19       19
  Lines        1680     1780     +100
========================================
+ Hits         1372     1456      +84
- Misses        220      234      +14
- Partials       88       90       +2
```

☔ View full report in Codecov by Sentry.
Thanks for the PR - I appreciate you tackling this use case.
I don't think annotations is the right mechanic since it gives too much control to users and I wonder if we should be cautious here. I don't know whether letting users run services that are 'unexpectedly' accessible in intranet networks is a risk. cc @nak3 @skonto for perspective.
Thus I think what I would consider a 'safer' mechanic is for the operator to perform this setup/configuration - potentially in our net-istio config map.
Though I'm mixed on what the operator mechanic could be - using a label selector would still allow users to control what gateways knative services attach to.
Maybe it's better for operators to map certain namespaces to gateways.
/cc @ReToCode 🙏
I would argue that, as of today, the default behavior is to expose the service on all gateways defined in the configmap. Therefore this PR can be viewed as a way to define where you don't want to expose the service: it filters what the admin defined rather than overwriting it. I understand your point about …

But letting users run services that are 'unexpectedly' accessible from the internet (and more generally from any unexpected network) is a bigger risk imho. I think the right approach here is not to split by "public/private" but to generalize to "different networks" and let the admin, through the ingress gateway definition, put any meaning on them. Then I think it's up to the service owner to know & specify from which networks their service should be accessible.

Regarding the namespaces approach: as a service owner, I am personally used to having the flexibility to define how to split my namespaces, which is convenient for RBAC, networking policies, etc. If I were asked to use a static list of namespaces, that would probably cause more problems. Your proposal didn't state it has to be static, but if the list is dynamic (for example, deploy on namespace …)

Just my 2 cents.
I think the gateway api sets a good precedent here (long term we'll want to consolidate around it). They make this an operator concern: the gateway resource defines which routes can attach to it. https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.AllowedRoutes

Thus the operator specifies which namespaces are allowed to use certain gateways, and then the user can specify the gateways that they want.
Ok, if I understand you correctly, there are 2 different functionalities to consider: …

The current status is that all gateways are always used. This PR achieves the 1st functionality without, I think, putting constraints on the 2nd. Is there a reason to achieve the 2 functionalities simultaneously? And even in the same PR?
Following up from our discussion in the Serving WG. We identified that this problem sorta fits into the `visibility` concept. Currently we associate …
I think we shouldn't expose more to the user than what's necessary - it's the wrong level of abstraction for the user. Hence I think a mechanic for the operator to define a new visibility in … A rough idea could be:

```yaml
gateway.knative-serving.knative-ingress-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
gateway-visibility.vpc.knative-serving.knative-ingress-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
gateway-visibility.vpc.knative-serving.knative-vpc-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"
```

Thus if the user specifies the annotation … Thus the template is:

```yaml
gateway-visibility.{visibility}.{gateway-ns}.{gateway-name}: "{target-service-for-probing}"
```
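Under this rough idea, a user would opt in through Knative's existing visibility label rather than a net-istio-specific annotation. A hedged sketch, assuming the operator defined a `vpc` visibility as in the config keys above:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-svc
  labels:
    # existing Knative visibility label; "vpc" is the
    # operator-defined visibility assumed for this sketch
    networking.knative.dev/visibility: vpc
```

This keeps the user-facing surface to the visibility concept only, which is the level of abstraction argued for above.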
I think you're right about me introducing an additional constraint. We can always add this later.

cc @nak3 @ReToCode @skonto for thoughts/ideas on alternative approaches?
/test latest-mesh
/hold

release is tomorrow - will unhold afterwards
Hi @pastequo, I just tried this to understand the implementation, but the status becomes …
I can't reproduce locally what you observed.
(On a side note, it wasn't easy to set up a local environment to replay e2e... It might be worth documenting how you proceed. I saw some weird things too that might require clean up. For example, the github workflows reference …)
Thank you. I understand that the …
Wouldn't it be more confusing? By reading …
/retest
Yes, and users will take a look at their LB (=Istio Gateway) and realize that the LB does not exist due to their wrong configuration, won't they?

By the way, I realized that the design has some problems: (copied from #1137 (comment))

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  gateway.ns1.gtw1: "ing1.istio-system.svc.cluster.local"
  gateway.ns2.gtw2: "ing2.istio-system.svc.cluster.local"
  ...
```

You assume that …
BTW, there is a subtlety between the istio ingress gateways and the istio gateways. The LBs (exposing the Istio ingress gateway, aka envoy) are ready and healthy. But none of them are used because there are no istio gateways (a gathering of configuration) for this ksvc. So it could be confusing for the user, because they shouldn't look at the LB nor at the istio ingress gateways.
I still think it is not the best idea to mix the current visibility with the new gateway selection (#1137 (comment)). Similarly, this goes in the same direction. We do have two types of gateways (external and cluster-local). We can have multiple gateways of each type, and with this new feature we want to limit the selection to a subset of those. But it should not be possible to expand the gateway selection (a cluster-local domain should not be allowed to target an externally visible gateway). With the current design, that is (or seems) possible.
Thanks for the reply @ReToCode. I can confirm that I took your remarks into account: the 2 notions (visibility & exposition) are not mixed. I will edit the design to state it clearly (done).
Funny enough @nak3's issue (#718) made me realize there's a lot more work to support this feature properly - so that's definitely something we'd want to address.

Secondly, I wonder if it would make sense to take inspiration from the config-domain changes that are likely to land - knative/serving#14543. Since it seems there's still a bit of contention on how to structure config-istio, what I like about that Serving PR is that it supports a new config format and the old one. Given that, it might be best to rethink what we want config-istio to look like instead of trying to shoehorn it into the existing format.

eg. we could have

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  internal-gateways:
    - service-namespace: istio-system
      service-name: knative-cluster-local
  external-gateways:
    - service-namespace: istio-system
      service-name: ing2
    - service-namespace: istio-system
      service-name: ing1
```

and then introduce the concept of selectors for services and even namespace selectors

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  internal-gateways: |
    - namespace: istio-system
      name: knative-cluster-local
  external-gateways: |
    - namespace: istio-system
      name: ing2
      namespaceSelector:
        kubernetes.io/metadata.name: blah
    - service-namespace: istio-system
      service-name: ing1
      selector:
        app: foo
```

We could even extend this to support matching gateways depending on domain so you don't have to repeat the namespace selector in two configs.

Thoughts on this approach @ReToCode & @nak3?

@pastequo would the above give you the flexibility you require? It might make sense to move this proposal to a shared google document rather than a PR - @pastequo can you open an issue regarding this in case you haven't done so?
@dprotaso I like that approach; it is closer to what we actually want to configure and does not mix with the visibility (as per #1137 (comment)). So you'd add this as an additional config to stay backward compatible, with the new structure taking precedence?

Side-note: probably we also want to rename the …
I don't have the history you have on this project, but it seemed to me that there wasn't a lot of work for that. All the code is wired to work with a list of gateways, except for the LB Status methods, which already accept an array.
Should it block this PR? Reworking the config structure seems independent to me. Though I understand this might be a good opportunity.
It seems so, but a few remarks:
I was referencing another user issue: #1124
Based on the Serving WG discussion (Nov 29th), I believe we're all in agreement that we'll close out this PR and turn this into a feature track doc.

The template is here. We should place the document here.

You'll get access by joining the google group - https://groups.google.com/g/knative-dev

If you have issues ping me on slack.
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Changes

- Add capability to restrict the list of istio gateways that will expose a knative service
- Add a new notion, `exposition`, to istio gateways
- Add annotation `istio.knative.dev/exposition` on knative Ingress
- If the annotation is not set, all gateways (without `exposition` value) are used
- `exposition` filters gateways that will be used for the given `visibility`

History

- Renamed annotation `networking.knative.dev/istio-exposition` -> `istio.knative.dev/exposition`

Example

New configuration example

Which leads to the following according to the annotation value:

(empty*)

* in the code, for backward compatibility, the list could not be empty -> https://github.com/knative-extensions/net-istio/blob/release-1.12/pkg/reconciler/ingress/ingress.go#L180-L185

Notes

- If an exposition references an unknown gateway, an error is returned. In the example below, `exposition.ns-unknown.gtw-unknown` is problematic
- The `visibility` filter …

/kind enhancement

Fixes #1124

Release Note

Docs
Initial Version

Changes

- Add annotations `serving.knative.dev/istio-gateways` & `serving.knative.dev/istio-local-gateways`

Example

2 ingress gateways have been created while installing istio, in the `istio-system` namespace:

- `istio-private-ingressgateway`: this ingress is exposed on a private network
- `istio-public-ingressgateway`: this ingress is exposed on the internet

The split public/private is purely informative; any gateway topology is supported.

Both gateways are defined in the net-istio `config-istio` ConfigMap.

While creating a knative service, the user can add an annotation to specify on which gateway the service should be exposed. Local gateways can also be filtered (not illustrated in this example).

… will result in the creation of the virtual service `my-private-service-ingress`:

… will result in the creation of the virtual service `my-public-service-ingress`:

or

… will result in the creation of the virtual service `my-service-ingress`:

Notes

/kind enhancement

Fixes #1124

Release Note

Docs
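The rendered manifests for the example above did not survive extraction. As a hedged illustration only, a Knative Service opting into the private gateway under the initial annotation scheme might have looked like this; the annotation value format (`<namespace>/<gateway>`) and service name are assumptions, not taken from the PR:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-private-service
  annotations:
    # assumed value format; the real format was in the elided example
    serving.knative.dev/istio-gateways: "istio-system/istio-private-ingressgateway"
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

Per the example's description, reconciling this Service would produce a virtual service (`my-private-service-ingress`) attached only to the private gateway.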