
Support using NetworkPolicy with Routes #13780

Open
evankanderson opened this issue Mar 9, 2023 · 6 comments
@evankanderson
Member

evankanderson commented Mar 9, 2023

The Problem

Today, Routes include a cluster-internal URL that is implemented by creating a headless Kubernetes Service with the same name as the Route, with Endpoints selected (subsetted) from the set of HTTP routers supplied by the networking plugin. Callers are expected (as with externally-visible routes) to include a Host: HTTP header indicating which Knative Route they are dialing; requests without the appropriate HTTP/1.1 header are dropped as not matching any routing rules.
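As a rough sketch of that layout (the names, namespace, pod IPs, and port below are illustrative assumptions, not values copied from Knative), every Route's cluster-internal Service ends up resolving to the same router pods on the same port:

```yaml
# Illustrative only: object names, IPs, and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: hello                # same name as the Route
  namespace: team-a
spec:
  clusterIP: None            # headless; DNS resolves directly to the Endpoints below
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: hello
  namespace: team-a
subsets:
  - addresses:
      - ip: 10.0.1.15        # pod IPs of the networking plugin's HTTP routers (e.g. Envoy)
      - ip: 10.0.1.16
    ports:
      - name: http
        port: 8080           # the same port for every Route in every namespace
```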

Unfortunately, this approach means that traffic to Knative Routes in different namespaces cannot be distinguished at an L4 level by most CNIs, and it means that callers MUST use a correct HTTP Host header to reach the Knative Route. The former (L4 traffic is identical at the CNI level) means that developers implementing Knative services cannot use Kubernetes NetworkPolicy to restrict access to their applications (unless using a service mesh or L7-aware CNI). This also violates the principle of least surprise for security and network policy admins, and takes away a simple and handy tool. The latter problem with Host headers mostly affects HTTP gateway implementations which may simply pass along the existing Host header unless explicitly programmed to rewrite it.
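For example (all names here are placeholders), a developer might try a policy like the following in their own namespace to limit which namespaces may call their Knative Service; because every request actually arrives from the networking plugin's router pods (or the activator) rather than from the original caller, the policy cannot achieve that intent:

```yaml
# Placeholder names; illustrates why a caller-based policy is ineffective today.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-team-b-may-call
  namespace: team-a
spec:
  podSelector: {}            # every pod backing team-a's Knative Services
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: team-b   # the intended caller
```

At L4 this policy either blocks the router pods (and with them every caller, including team-b) or allows them (and with them every caller); the original caller's identity simply isn't visible without L7 awareness.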

See this drawing for a summary of the current situation.

The Solution

This Feature Track document proposes a solution, which has been implemented in net-kourier.

knative-extensions/net-kourier#852 implements per-namespace port allocation on Envoy, plus changes to per-Route Kubernetes Service management: each namespace is allocated a unique port on the Envoy proxy, and each Kubernetes Service representing an internal Route gets independent Endpoints that use the namespace-specific port rather than a universal one. (This could also be implemented with route-specific rather than namespace-specific ports.)
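Continuing the sketch above (the port number is an assumption, not a value taken from the PR), the Endpoints for a Route in team-a would then point at the same Envoy pods, but on a port reserved for that namespace:

```yaml
# Illustrative only: the per-namespace port and the addresses are assumptions.
apiVersion: v1
kind: Endpoints
metadata:
  name: hello
  namespace: team-a
subsets:
  - addresses:
      - ip: 10.0.1.15        # same Envoy pods as before
      - ip: 10.0.1.16
    ports:
      - name: http
        port: 8081           # port allocated to the team-a namespace only
```

Because the destination port now identifies the namespace, an L4-only CNI can tell traffic bound for team-a's Routes apart from traffic to every other namespace's Routes.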

Scalability

While scalability may appear to be an issue, there should be at least 8k ports available on each Envoy instance. Exhausting those ports would imply on the order of 3 * 8k (roughly 24,000) Kubernetes Services given current Knative usage (one for the Route, plus two for each Revision), which is likely to hit other Kubernetes scaling limits before it hits the ports-per-Envoy limit.

@evankanderson evankanderson added the kind/feature Well-understood/specified features, ready for coding. label Mar 9, 2023
@ReToCode ReToCode added the triage/accepted Issues which should be fixed (post-triage) label Mar 10, 2023
@keshavcodex

/assign

@keshavcodex

Hey @evankanderson, can you help me with some resources to work on this issue?

@evankanderson
Member Author

Have you looked at the drawing and the feature track already?

Background Context

If so, the next steps are probably to get a Knative cluster available for testing and do some observations. With a large cluster, it should be possible to install multiple networking plugins and ingress implementations at the same time; if you're working off a smaller install (such as kn quickstart on a laptop), you may need to install only one networking plugin at a time.

Non-Knative Kubernetes Environment Validation

You may also want to make sure that you have a CNI implementation on your cluster which enforces NetworkPolicy -- the easiest way to do this is to set up an in-cluster network connection (e.g. client pod to server pod), then create a NetworkPolicy to block access and see whether it works. Once that is reproducible, you can also try a NetworkPolicy that only allows connections from a certain Namespace and test that as well.
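A minimal sketch of those two test policies (namespace and label names are placeholders; the kubernetes.io/metadata.name label is populated automatically on Kubernetes 1.21+):

```yaml
# Deny all ingress to pods in server-ns, then allow it only from client-ns.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: server-ns
spec:
  podSelector: {}            # selects every pod in server-ns
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is blocked
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-client-ns
  namespace: server-ns
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: client-ns   # only this namespace may connect
```

If the client pod can still reach the server pod after applying the first policy, the CNI is not enforcing NetworkPolicy and the rest of the tests won't tell you much.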

Understanding The Current Implementation

To get your bearings, you should try enabling the per-port isolation in net-kourier, and see how that changes the network flows. Try using NetworkPolicy to control those flows, and see what happens. Then try the same tests with net-contour or net-istio, and see whether NetworkPolicy is effective.
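Once per-port isolation is enabled, a policy along these lines is one way to exercise it (every name, label, and port here is an assumption about a typical Kourier install, not something prescribed by the PR):

```yaml
# Hypothetical: namespace, pod labels, and port are assumptions; adjust for your install.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-callers-of-team-a-routes
  namespace: kourier-system          # namespace running the Envoy gateway pods (assumed)
spec:
  podSelector:
    matchLabels:
      app: 3scale-kourier-gateway    # Envoy gateway pods (label is an assumption)
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: team-a   # only team-a workloads may connect
      ports:
        - protocol: TCP
          port: 8081                 # the port assumed to be allocated to team-a above
```

Note that a real policy would also need rules allowing external traffic and the other namespaces' ports, since once any policy selects the gateway pods, everything not explicitly allowed is dropped.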

Planning The Change

I'd start by looking at the net-istio plugin. Contour is currently working on implementing additional listeners on a single Envoy process, so it might make more sense to wait for the 1.25 release, which may implement https://github.com/projectcontour/contour/blob/main/design/multiple-listeners-design.md. (It looks like this may be Gateway API specific, so that may align with working on net-gateway-api rather than net-contour.)

@keshavcodex

keshavcodex commented Mar 24, 2023

> Have you looked at the drawing and the feature track already?

@evankanderson
Yes, I did, but I was only able to understand half of it.
I also ran into an issue while setting up Kourier, for which I have opened a thread in knative-serving, so maybe you or some other folks can help.

@viveksahu26

Hi @keshavcodex, hope you're doing well! Just wanted to check in and see if you happen to be working on this issue at the moment? Thank you so much!

@keshavcodex

Nothing much; I have only set up Calico so far.
