diff --git a/keps/sig-network/2104-reworking-kube-proxy-architecture/README.md b/keps/sig-network/2104-reworking-kube-proxy-architecture/README.md new file mode 100644 index 00000000000..d8110366de4 --- /dev/null +++ b/keps/sig-network/2104-reworking-kube-proxy-architecture/README.md @@ -0,0 +1,1370 @@ + +# KEP-2104: rework kube-proxy architecture + +# Index + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [How we calculate deltas: The DiffStore](#how-we-calculate-deltas-the-diffstore) + - [User Stories (Optional)](#user-stories-optional) + - [Story 1](#story-1) + - [Story 2](#story-2) + - [Story 3](#story-3) + - [Story 4](#story-4) + - [Story 5](#story-5) + - [Story 6](#story-6) + - [Story 7](#story-7) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [API](#api) + - [Server](#server) + - [Client](#client) + - [Backends](#backends) + - [Fullstate Logic](#fullstate-logic) + - [Filterreset logic](#filterreset-logic) + - [Test Plan](#test-plan) + - [Automation for the standard service proxy scenarios](#automation-for-the-standard-service-proxy-scenarios) + - [Manual verification of complex scenarios](#manual-verification-of-complex-scenarios) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

`kube-proxy` was originally designed to handle the translation of Service objects to OS-level
resources. Implementations included userspace, followed by iptables and ipvs. With the growth of the
Kubernetes project, more implementations came to life, for instance using eBPF, and often in relation
to other goals (Calico to manage the network overlay, Cilium to manage app-level security, MetalLB
to provide an external LB for bare-metal clusters, etc.).

Alongside this Cambrian explosion of third-party software, the Service object itself received new
concepts to improve the abstraction, for instance to express topology. Thus, third-party
implementations are expected to keep up and become more complex over time, even if their core doesn't
change (i.e., the eBPF translation layer itself is not affected).

This KEP is born from the conviction that more decoupling of the Service object from the actual
implementations is required, by introducing an intermediate, node-level abstraction provider. This
abstraction is expected to be the result of applying Kubernetes' `Service` semantics and business
logic to a simpler, more stable API.

## Motivation

There have been several presentations and issues regarding the motivation for a new kube-proxy implementation.

- *Scaling* the kube-proxy requires scaling the number of watches on the APIServer, and having customizable deployments around this benefits niche use cases for larger scale / small nodes.
- *Extending* the kube-proxy is difficult because it needs to be "copied", and code needs to be written against a non-explicit data model.
- *Testing* the kube-proxy is difficult because of the way it is coupled to the APIServer and the internal, in-memory data model of the kube-proxy codebase.
- *Coupling* of kube-proxy logic to a CNI provider implementation is difficult because of the inability to easily extend and test the kube-proxy.
- *Choosing* the data model for how service proxying is implemented (i.e. via events vs network state) isn't easy.

We thus propose a modular implementation of the kube-proxy which can be implemented in different ways, but which is backwards compatible with
the kube-proxy implementations which exist in-tree.

### Goals

Please note that the kube-proxy working group now has a [working implementation](https://github.com/kubernetes-sigs/kpng)
for most of the below goals. The implementation has CI, conformance testing,
and fine-grained sig-network tests passing for the most critical use cases, including
both Linux and Windows.

- Design a new architecture for service proxy implementations
  consisting of:

  - A "core" service proxy process that models the networking state space
    of Kubernetes, and thus all the non-backend-technology-specific aspects of service proxying
    (e.g., determining which endpoints should be available on a
    given node, given traffic policy, topology, and pod readiness
    constraints)

  - A set of "proxy backends" mapping to the current upstream proxy modes,
    which communicate with the "brain" to acquire its local networking state space
    and implement the technical details of writing backend routing rules
    from services to pods (e.g. iptables, ipvs, Windows kernel).

  - A gRPC API for optimal communication between the core logic
    daemon and the backend implementations, which can be run in memory
    or externalized (so that the brain can run with a one-to-many relationship
    with "proxy backends").

  - A set of golang packages for working with the gRPC API, which third parties
    can use to create their own proxy backend implementations out of tree.

- Provide an implementation of the core service proxy logic daemon.
  (This implementation will be used, unmodified, by all core and
  third-party proxy backend implementations.)

- Provide a golang library that consumes the gRPC API from the above
  daemon and exports the information to the proxy backend
  implementation in one of two forms:

  - a "full state" model, in which the library keeps track of the
    full state of the Kubernetes networking state space on this node, and
    sends the proxy implementation a new copy of the full state on every
    update. The library will also include a package called
    "diffstore" that is designed to make it very easy for
    implementations to generate incremental updates based on the
    full state information.

  - an "incremental" model, in which the library simply passes on
    the updates from the gRPC API so the proxy implementation can
    update its own model, which allows for behavior similar to the
    current upstream kube-proxy.

- Provide additional reusable library elements for some shared backend logic,
  such as conntrack cleaning, which might be called from different
  proxy implementations.

- Provide new implementations of the existing "standard" proxy
  implementations (iptables, ipvs, and Windows kernel), based on the
  new daemon and client library. At least one of these will use the
  "full state" model and at least one will use the "incremental" model.

- Deprecate and eventually remove the existing proxy implementations
  in `k8s.io/kubernetes`, in favor of the new implementations. Also
  remove the associated support packages in `k8s.io/kubernetes` that
  are only used by kube-proxy (e.g., `pkg/util/ipvs`,
  `pkg/util/conntrack`, `pkg/util/netsh`).

- Provide initial material that demonstrates how to run this decoupled proxy implementation on
  separate nodes (i.e.
with the "core service proxy brain" on *one* node, and a backend(s) on
  other nodes, where all of the K8s networking state space is sent remotely over gRPC).

### Non-Goals

- We won't necessarily provide bulletproof nftables, eBPF, or userspace backends with parity to the core Windows kernel, iptables, and IPVS implementations.
- We won't require KPNG to run in a mode where the KPNG "core logic brain" is a separate process from the KPNG "proxy backends". We state this as a non-goal because it has been a bit of a red-herring debate in the past: although KPNG supports this, it is not required.
- We won't require all proxiers to use the fullstate model, although we think it is ideal for new implementations because it's easier to read and understand.

## Proposal

We propose decoupling the KPNG core logic from the networking backends, with the option to run these components in completely separate processes, or together in the same process with shared memory. In cases where they are decoupled at the process level, one may, for example, use "localhost" as the gRPC API provider that will be accessible as usual via TCP (`127.0.0.1:12345`) and/or via a socket (`unix:///path/to/proxy.sock`). In cases where they run together in memory, the same communication will happen, but the gRPC calls will just be local.

The core process will:

- connect to the API server and watch resources, like the current proxy;
- then process them, applying Kubernetes-specific business logic like topology computation
  relative to the local host;
- finally, provide the result of this computation to clients via a watchable gRPC API.

We assume that some backends will benefit from running the "core" Kubernetes logic in a way that is aligned to releases of Kubernetes, while rapidly upgrading their backend logic out-of-band from this. This implementation allows that if one so chooses, although we expect most "stable" proxying implementations won't be upgrading at a more frequent clip than Kubernetes itself.

In this implementation, we either:

- send the *full state* of the Kubernetes networking state space to a client every time anything needs to change. Since this is done over gRPC or in memory, the bandwidth costs are low (anecdotally this has been measured, and it works for 1000s of services and pods - we can attach specific results to this KEP as needed). An example of this can be seen in the ebpf and nft proxies in the KPNG project (https://github.com/kubernetes-sigs/kpng/blob/master/backends/nft/nft.go). Some initial performance data is here: https://github.com/kubernetes-sigs/kpng/blob/master/doc/proposal.md.

```
// the entire state space of the Kubernetes networking model is embedded in this struct
type ServiceEndpoints struct {
	Service   *localnetv1.Service
	Endpoints []*localnetv1.Endpoint
}
```

- send the *incremental state* of the Kubernetes networking state space whenever an event occurs. We then allow backend clients to implement "SetService" and "SetEndpoint" methods, which allow them to use an API structure similar to that of the current upstream iptables kube-proxy. An example of this is how KPNG currently implements the iptables proxy (https://github.com/kubernetes-sigs/kpng/blob/master/backends/iptables/sink.go).

```
func (s *Backend) SetService(svc *localnetv1.Service) {
	for _, impl := range IptablesImpl {
		// since iptables is a commonly used kubernetes proxying implementation
		// we kept the serviceChanges cache and just wrapped it under SetService
		impl.serviceChanges.Update(svc)
	}
}
```

In the full-state mode, we send the full state to a backend "client", so that the backend won't have to do
diff-processing or maintain a full cache of proxy data. This should provide simpler backend implementations and
reliable results while still being quite optimal, since many kernel network-level objects are
updated via atomic replace APIs. It also protects against slow readers, since no stream has to
be buffered.

Since the node-local state computed by the new proxy will be simpler and node-specific, it will
only change when the result for the current node actually changes. Since there's less data in
the local state, change frequency is reduced compared to cluster state. Testing on actual clusters
showed a reduction of change-event frequency by 2 orders of magnitude.

#### How we calculate deltas: The DiffStore

One fundamentally important part of building a service proxy in Kubernetes is calculating "diffs". For example, if at time 1 we have

```
Service A -> Pod A1 , Pod A2
Service B -> Pod B1
```
and at time 2 we have
```
Service A -> Pod A1 , Pod A2
Service B -> Pod B1, Pod B2
```

we need to add *one* new networking rule: the fact that Service B can now be load-balanced to Pod B2. Any other networking rules already exist and need not be processed (this is more true for some backends than others, e.g. IPVS or the Windows kernel, which don't require rewriting all rules every time there's a change).

KPNG provides a "DiffStore" library, which allows arbitrary, generic Go objects to be diffed in memory by a backend. This can be viewed at https://github.com/kubernetes-sigs/kpng/tree/master/client/diffstore. The overall usage of this store is relatively intuitive: write to it continuously, and only the "differences" register when looking at the diffs. The `Reset()` function starts the second "wave" in a series of writes, such that a subsequent look at the diff will reveal the differences between the first and second series of writes. Note that the `Get` call here will create a key if it is empty.

```
func ExampleStore() {
	store := NewBufferStore[string]()
	{
		// first wave of writes: Get creates the "a" key and we write into its buffer
		fmt.Fprint(store.Get("a"), "hello a")
		store.Done()
		store.printDiff()
	}
	{
		// Reset starts the second wave; the next printDiff shows only what changed since the first
		store.Reset()
		fmt.Fprint(store.Get("a"), "hello a")
		store.Done()
		store.printDiff()
	}
}
```
The full unit test for the diffstore, which is used to cache and update the network state space on the backend side, can be found in the `diffstore/diffstore_test.go` file of the package linked above.


### User Stories (Optional)

#### Story 1

As a networking technology startup, I want to make my own kube-proxy implementation but don't want to maintain the logic of talking to the APIServer, caching its data, or calculating an abbreviated/proxy-focused representation of the Kubernetes networking state space. I'd like a wholesale framework I can simply plug my logic into.

#### Story 2

As a Kubernetes maintainer, I don't want to have to understand the internals of a networking backend in order to simulate or write core updates to the logic of the kube-proxy locally.

#### Story 3

As a Kubernetes maintainer, I'd like to add new proxies to kubernetes-sigs repositories which aren't in-tree, but are community-maintained and developed/licensed according to CNCF standards.

#### Story 4

As an end user, I'd like to be able to easily test a Kubernetes backend's networking logic without plugging it into a real Kubernetes cluster, or maybe even use it to write networking rules that aren't directly provided by the Kubernetes API.

#### Story 5

As a developer, I'd like to implement a proxy backend without being dependent on the K8s API, and without creating any load on the Kubernetes API - either in edge networking scenarios or in high-scale scenarios.

#### Story 6

As a user of Kubernetes at large scale, I want more ways to offload APIServer strain than simply reducing the number of Endpoints allowed in an EndpointSlice.

#### Story 7

As a developer, I'd like to write a kube-proxy implementation in another language like C or Rust, which doesn't require an active connection to the Kubernetes APIServer.

### Notes/Constraints/Caveats (Optional)

- Sending the full state could be resource-consuming on big clusters, but it should still be O(1) relative to
  the actual kernel definitions (the complexity of what the node has to handle cannot be reduced
  without losing functionality or correctness).

### Risks and Mitigations

- There's a performance risk at large scale. We've proposed an issue, https://github.com/kubernetes-sigs/kpng/issues/325, for a community-wide, open scale-testing session on a large cluster that we can run manually to inspect in real time and see any major deltas.

- There may be undocumented "magic" functionality in the kube-proxy that we don't know about and which we would lose when doing this.

Mitigations are falling back to the in-tree proxy, or gradually porting logic over piece by piece if we find holes. We don't think there are many of these because there are 100s of networking tests, many of which test specific items like UDP proxying, avoiding blackholes, service updating, scaling of pods, local routing logic for things like service topologies, and so on.

- Story 5, while implementable from a development standpoint to make it easy to hack on new backends, hasn't been broadly tested in a production
  context and might need tooling like mTLS and so on in order to be production-ready for clouds and other user-facing environments.

## Design Details

A [draft implementation] exists and some [performance testing] has been done.

[draft implementation]: https://github.com/kubernetes-sigs/kpng/
[performance testing]: https://github.com/kubernetes-sigs/kpng/blob/master/doc/proposal.md

### API

The watchable API will be long-polling, taking "last known state" info and returning a stream of
objects.

The proposed definition can be found here: https://github.com/kubernetes-sigs/kpng/blob/master/api/localnetv1/services.proto

The main types composing the gRPC API are:
```
message Service
message IPFilter
message ServiceIPs
message Endpoint
message IPSet
message Port
message ClientIPAffinity
message ServiceInfo
message EndpointInfo
message EndpointConditions
message NodeInfo
message Node
```

### Server

The KPNG server is responsible for watching the Kubernetes API for
changes to Service and Endpoint objects and translating them for listening
clients via the aforementioned API.
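
To make the exchange concrete, here is a minimal, illustrative sketch of a raw consumer of that API. It is not part of the KEP's reference code: the address reuses the client's default `--api` flag value shown later, the node name is hypothetical, and the generated stub and RPC names (`NewEndpointsClient`, `Watch`) are assumptions; the authoritative definitions live in the `services.proto` file linked above.

```golang
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"

	"github.com/mcluseau/kube-proxy2/pkg/api/localnetv1"
)

func main() {
	// Dial the KPNG server on the default local API address (see the client's --api flag below);
	// a unix:// socket target works the same way.
	conn, err := grpc.Dial("127.0.0.1:12090", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Assumed generated stub and RPC names (NewEndpointsClient, Watch); see services.proto.
	watch, err := localnetv1.NewEndpointsClient(conn).Watch(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	for {
		// Ask for the state relevant to this node (hypothetical node name)...
		if err := watch.Send(&localnetv1.WatchReq{NodeName: "node-a"}); err != nil {
			log.Fatal(err)
		}

		// ...then read OpItems until the server marks the end of this revision with a Sync op.
		for {
			op, err := watch.Recv()
			if err != nil {
				log.Fatal(err)
			}

			if _, isSync := op.Op.(*localnetv1.OpItem_Sync); isSync {
				break // full state for this revision received; apply it, then poll again
			}

			log.Printf("state op: %v", op)
		}
	}
}
```

The InstanceID/Rev mechanics that make this exchange resumable are described next.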
+ +When the proxy server starts, it will generate a random InstanceID, and have Rev at 0. So, a client +(re)connecting will get the new state either after a proxy restart or when an actual change occurs. +The proxy will never send a partial state, only full states. This means it waits to have all its +Kubernetes watchers sync'ed before going to Rev 1. + +The first OpItem in the stream will be the state info required for the next polling call, and any +subsequent item will be an actual state object. The stream is closed when the full state has been +sent. + + +### Client + +The client library abstracts those details away and interprets the kpng api events following +each state change. It includes a default Run function, sets up default flags, parses them and runs +the client, allowing very simple clients like this: + +```golang +package main + +import ( + "fmt" + "os" + "time" + + "github.com/mcluseau/kube-proxy2/pkg/api/localnetv1" + "github.com/mcluseau/kube-proxy2/pkg/client" +) + +func main() { + client.Run(printState) +} + +func printState(items []*localnetv1.ServiceEndpoints) { + fmt.Fprintln(os.Stdout, "#", time.Now()) + for _, item := range items { + fmt.Fprintln(os.Stdout, item) + } +} +``` + +The currently proposed interface for the lower-level client is as follows: + +```golang +package client // import "github.com/mcluseau/kube-proxy2/pkg/client" + +type EndpointsClient struct { + // Target is the gRPC dial target + Target string + + // InstanceID and Rev are the latest known state (used to resume a watch) + InstanceID uint64 + Rev uint64 + + // ErrorDelay is the delay before retrying after an error. + ErrorDelay time.Duration + + // Has unexported fields. +} + + +// DefaultFlags registers this client's values to the standard flags. +func (epc *EndpointsClient) DefaultFlags(flags FlagSet) { + flags.StringVar(&epc.Target, "api", "127.0.0.1:12090", "API to reach (can use multi:///1.0.0.1:1234,1.0.0.2:1234)") + + flags.DurationVar(&epc.ErrorDelay, "error-delay", 1*time.Second, "duration to wait before retrying after errors") + + flags.IntVar(&epc.MaxMsgSize, "max-msg-size", 4<<20, "max gRPC message size") + + epc.TLS.Bind(flags, "") +} + +// Next sends the next diff to the sink, waiting for a new revision as needed. +// It's designed to never fail, unless canceled. +func (epc *EndpointsClient) Next() (canceled bool) { + if epc.watch == nil { + epc.dial() + } + +retry: + if epc.ctx.Err() != nil { + canceled = true + return + } + + // say we're ready + nodeName, err := epc.Sink.WaitRequest() + if err != nil { // errors are considered as cancel + canceled = true + return + } + + err = epc.watch.Send(&localnetv1.WatchReq{ + NodeName: nodeName, + }) + if err != nil { + epc.postError() + goto retry + } + + for { + op, err := epc.watch.Recv() + + if err != nil { + // klog.Error("watch recv failed: ", err) + epc.postError() + goto retry + } + + // pass the op to the sync + epc.Sink.Send(op) + + // break on sync + switch v := op.Op; v.(type) { + case *localnetv1.OpItem_Sync: + return + } + } +} + +// Cancel will cancel this client, quickly closing any call to Next. +func (epc *EndpointsClient) Cancel() { + epc.cancel() +} + +// CancelOnSignals make the default termination signals to cancel this client. +func (epc *EndpointsClient) CancelOnSignals() { + epc.CancelOn(os.Interrupt, os.Kill, syscall.SIGTERM) +} + +// CancelOn make the given signals to cancel this client. 
+func (epc *EndpointsClient) CancelOn(signals ...os.Signal) { + go func() { + c := make(chan os.Signal, 1) + signal.Notify(c, signals...) + + sig := <-c + klog.Info("got signal ", sig, ", stopping") + epc.Cancel() + + sig = <-c + klog.Info("got signal ", sig, " again, forcing exit") + os.Exit(1) + }() +} + +func (epc *EndpointsClient) Context() context.Context { + return epc.ctx +} + +func (epc *EndpointsClient) DialContext(ctx context.Context) (conn *grpc.ClientConn, err error) { + klog.Info("connecting to ", epc.Target) + + opts := append( + make([]grpc.DialOption, 0), + grpc.WithMaxMsgSize(epc.MaxMsgSize), + ) + + tlsCfg := epc.TLS.Config() + if tlsCfg == nil { + opts = append(opts, grpc.WithInsecure()) + } else { + opts = append(opts, grpc.WithTransportCredentials(credentials.NewTLS(tlsCfg))) + } + + return grpc.DialContext(epc.ctx, epc.Target, opts...) +} + +func (epc *EndpointsClient) Dial() (conn *grpc.ClientConn, err error) { + if ctxErr := epc.ctx.Err(); ctxErr == context.Canceled { + err = ctxErr + return + } + + return epc.DialContext(epc.ctx) +} +``` + +### Backends + +The backends make use of the provided client infrastructure to actually program +networking rules into the kernel datapath. In KPNG the backends interact with the +client library by first implementing the `backendcmd.Cmd` interface + +```golang +type Cmd interface { + BindFlags(*pflag.FlagSet) + Sink() localsink.Sink +} +``` + +This interface allows each backend to register their own specific set of CLI +flags, and to define what type of sink they would like to use. Usually +the interface is implemented in a file called `register.go` and returned +via an `init()` function within each backend. + +```golang +type Backend struct { + localsink.Config +} + +func init() { + backendcmd.Register("to-iptables", func() backendcmd.Cmd { return &Backend{} }) +} + +func (s *Backend) BindFlags(flags *pflag.FlagSet) { +} + +func (s *Backend) Sink() localsink.Sink { + return filterreset.New(pipe.New(decoder.New(s), decoder.New(conntrack.NewSink()))) +} +``` + +As shown above, a backend's methods define all the functionality (at a high +level) it needs to function, however they all must implement the `Sink()`, +which returns a `localsink.Sink` interface, and `BindFlags()` methods. + +```golang +type Sink interface { + // Setup is called once, when the job starts + Setup() + + // WaitRequest waits for the next diff request, returning the requested node name. If an error is returned, it will cancel the job. + WaitRequest() (nodeName string, err error) + + // Reset the state of the Sink (ie: when the client is disconnected and reconnects) + Reset() + + localnetv1.OpSink +} +``` + +The sink interface is implemented by two different packages, the `filterreset` +package, which provides methods to give incremental change data to the backends, or +the `fullstate` package, which simply passes the full-state of the current Services +and Endpoints to the backend. + +In the scope of kubernetes it may be easier to think of the `fullstate` library +as the package used by implementations who wish to follow level driven controller +constructs, and the `filterreset` library as the package used by implementations +who wish to follow event driven controller constructs. + +#### Fullstate Logic + +![Alt text](kpng-fullstate-syncer.png?raw=true) + +The fullstate library implements the `sink` interface via +a custom `Sink` struct: + +```golang +// EndpointsClient is a simple client to kube-proxy's Endpoints API. 
+type Sink struct { + Config *localsink.Config + Callback Callback + SetupFunc Setup + + data *btree.BTree +} +``` + +The POC EBPF implementation is a good example of utilizing the fullstate package +to interact with the KPNG client. Specifically the backend implements three main methods +`Sink()` to actually create the fullstate sink, and the fullstate +sink's `Callback` and `Setup` functions. The setup function is called once upon KPNG +client startup, while the callback function is called anytime the state of +kubernetes services and endpoints changes. + +```golang +func (s *backend) Setup() { + ebc = ebpfSetup() + klog.Infof("Loading ebpf maps and program %+v", ebc) +} + +func (b *backend) Sink() localsink.Sink { + sink := fullstate.New(&b.cfg) + + sink.Callback = fullstatepipe.New(fullstatepipe.ParallelSendSequenceClose, + ebc.Callback, + ).Callback + + sink.SetupFunc = b.Setup + + return sink +} +``` + +The `Sink()` method shown above creates a new fullstate sinker via the fullstatepipe +package. The fullstatepipe can be configured to send events to the backend in +three ways: + +```golang +const ( + // Sequence calls to each pipe stage in sequence. Implies storing the state in a buffer. + Sequence = iota + // Parallel calls each pipe stage in parallel. No buffering required, but + // the stages are not really stages anymore. + Parallel + // ParallelSendSequenceClose calls each pipe entry in parallel but closes + // the channel of a stage only after the previous has finished. No + // buffering required but still a meaningful sequencing, especially when + // using the diffstore. + ParallelSendSequenceClose +) +``` + +The `ebc.Callback`(ebpf controller Callback) function resembles the following: + +```golang +func (ebc *ebpfController) Callback(ch <-chan *client.ServiceEndpoints) { + // Reset the diffstore before syncing + ebc.svcMap.Reset(lightdiffstore.ItemDeleted) + + // Populate internal cache based on incoming fullstate information + for serviceEndpoints := range ch { + klog.V(5).Infof("Iterating fullstate channel, got: %+v", serviceEndpoints) + + ... + // Abbrev. BUSINESS LOGIC + ... + + // Reconcile what we have in ebc.svcInfo to internal cache and ebpf maps + // The diffstore will let us know if anything changed or was deleted. + if len(ebc.svcMap.Updated()) != 0 || len(ebc.svcMap.Deleted()) != 0 { + ebc.Sync() + } +} +``` + +And is what is responsible for programing the actual datapath rules to handle +service proxying. + +#### Filterreset logic + +![Alt text](kpng-filterreset-syncer.png?raw=true) + + +The `filterreset` library implements the `sink` interface as follows: + +```golang + +type Sink struct { + sink localsink.Sink + filtering bool + memory map[string]memItem + seen map[string]bool +} +``` + +This definition allows the implementations to construct their own custom sinks +(see the `sink` field). A great example of +utilizing the `filterreset` library can be found in the iptables implementation. + +```golang +func (s *Backend) Sink() localsink.Sink { + return filterreset.New(pipe.New(decoder.New(s), decoder.New(conntrack.NewSink()))) +} +``` + +Here the bakend initializes a new filterreset sink which receives events from the +shared client via four main methods `SetService`, `DeletService`, `SetEndpoint`, +`DeleteEndpoint`. 

```golang
func (s *Backend) SetService(svc *localnetv1.Service) {
	for _, impl := range IptablesImpl {
		impl.serviceChanges.Update(svc)
	}
}

func (s *Backend) DeleteService(namespace, name string) {
	for _, impl := range IptablesImpl {
		impl.serviceChanges.Delete(namespace, name)
	}
}

func (s *Backend) SetEndpoint(namespace, serviceName, key string, endpoint *localnetv1.Endpoint) {
	for _, impl := range IptablesImpl {
		impl.endpointsChanges.EndpointUpdate(namespace, serviceName, key, endpoint)
	}
}

func (s *Backend) DeleteEndpoint(namespace, serviceName, key string) {
	for _, impl := range IptablesImpl {
		impl.endpointsChanges.EndpointUpdate(namespace, serviceName, key, nil)
	}
}
```

These methods are ultimately where the iptables rules are programmed by
the backend. The main use case for such a sink design was to more easily integrate
with existing kube-proxy backends (iptables, ipvs, etc.) which already relied
on such methods.

### Test Plan

#### Automation for the standard service proxy scenarios

Upstream Kubernetes has a large set of 100s of tests which leverage service proxies, on different clouds, running
in Prow. By PRing this work into Kubernetes, we'll get these tests for free.

For each of our "completed" backends (iptables, ipvs, nft) KPNG currently runs:

- all sig-network tests which involve service proxying
- all Conformance tests

We of course must ensure we pass all the scalability tests which run in the default Prow CI,
and we must manually verify KPNG on all standard clouds. This is especially
important since cloud kube-proxy configurations may leverage command-line options/configurations
which aren't needed in our CI/kind clusters.

#### Manual verification of complex scenarios

We assert that some level of manual performance testing should be done, since this is a
significant architectural change, but we will iterate on the details of that later on.

[x] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.

##### Prerequisite testing updates

##### Unit tests

- ``: `` - ``

##### Integration tests

- :

##### e2e tests

- :

### Graduation Criteria

### Upgrade / Downgrade Strategy

### Version Skew Strategy

## Production Readiness Review Questionnaire

### Feature Enablement and Rollback

###### How can this feature be enabled / disabled in a live cluster?

- [ ] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name:
  - Components depending on the feature gate:
- [ ] Other
  - Describe the mechanism:
  - Will enabling / disabling the feature require downtime of the control
    plane?
  - Will enabling / disabling the feature require downtime or reprovisioning
    of a node? (Do not assume `Dynamic Kubelet Config` feature is enabled).

###### Does enabling the feature change any default behavior?

###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?

###### What happens if we reenable the feature if it was previously rolled back?

###### Are there any tests for feature enablement/disablement?

### Rollout, Upgrade and Rollback Planning

###### How can a rollout or rollback fail? Can it impact already running workloads?

###### What specific metrics should inform a rollback?
+ + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + +*Having an all in one library to help people make kube proxys which are packaged outside of K/K* + +This solution would make it easier to build proxies without actually packaging an end to end solution. It would however have the setback of *only* being a library, and also, it would force third parties to entirely implement caching solutions. Finally, it would provide no usability at all to the fully generic "my backend is not in golang" use case, i.e. if someone made a C or java or python backend, they can use KPNG's "brain" as a separate process. All-in-all, a client library that eases the burden of making proxies is an alternative to this approach but, it would throw away alot of the functionality, modularity, and the "marketplace" aspect of separate backends evolving rapidly with regard to a stable core KPNG "brain". + +*Not re writing the kube-proxy and let it live on in pkg/ with new backends emerging over time* + +Because of configuration challenges, community growth challenges in the existing kube-proxy, we'd assert that the modular and easy to evolve/maintain model here is superior to a monolith in-tree. 
We cite particularly the eBPF and Windows kube-proxy user stories as "home run" scenarios for KPNG: Windows benefits from its own approaches and shouldn't clutter the core K8s codebase, and similarly, eBPF is a bit of a niche approach which shouldn't live in the Kubernetes core codebase.

## Infrastructure Needed (Optional)

diff --git a/keps/sig-network/2104-reworking-kube-proxy-architecture/kep.yaml b/keps/sig-network/2104-reworking-kube-proxy-architecture/kep.yaml
new file mode 100644
index 00000000000..64cfa5be0c5
--- /dev/null
+++ b/keps/sig-network/2104-reworking-kube-proxy-architecture/kep.yaml
@@ -0,0 +1,57 @@
title: rework kube-proxy architecture
kep-number: 2104
authors:
  - ""
owning-sig: sig-network
participating-sigs:
  - sig-network
  - sig-windows
status: provisional|implementable|implemented|deferred|rejected|withdrawn|replaced
creation-date: 2020-10-10
reviewers:
  - "@thockin"
  - "@danwinship"
approvers:
  - "@thockin"
  - "@danwinship"

##### WARNING !!! ######
# prr-approvers has been moved to its own location
# You should create your own in keps/prod-readiness
# Please make a copy of keps/prod-readiness/template/nnnn.yaml
# to keps/prod-readiness/sig-xxxxx/00000.yaml (replace with kep number)
#prr-approvers:

see-also:
  - "/keps/sig-aaa/1234-we-heard-you-like-keps"
  - "/keps/sig-bbb/2345-everyone-gets-a-kep"
replaces:
  - "/keps/sig-ccc/3456-replaced-kep"

# The target maturity stage in the current dev cycle for this KEP.
stage: alpha|beta|stable

# The most recent milestone for which work toward delivery of this KEP has been
# done. This can be the current (upcoming) milestone, if it is being actively
# worked on.
latest-milestone: "v1.19"

# The milestone at which this feature was, or is targeted to be, at each stage.
milestone:
  alpha: "v1.19"
  beta: "v1.20"
  stable: "v1.22"

# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: MyFeature
    components:
      - kube-apiserver
      - kube-controller-manager
disable-supported: true

# The following PRR answers are required at beta release
metrics:
  - my_feature_metric

diff --git a/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-filterreset-syncer.png b/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-filterreset-syncer.png
new file mode 100644
index 00000000000..6039bc2fb32
Binary files /dev/null and b/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-filterreset-syncer.png differ
diff --git a/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-fullstate-syncer.png b/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-fullstate-syncer.png
new file mode 100644
index 00000000000..2ed06887461
Binary files /dev/null and b/keps/sig-network/2104-reworking-kube-proxy-architecture/kpng-fullstate-syncer.png differ