
SFC steering options with SRv6 #1667

Open
rastislavs opened this issue Sep 10, 2019 · 8 comments

@rastislavs
Collaborator

For the SRv6 implementation of Service Function Chaining (SFC) in Contiv, we have several options for steering traffic into service chains (a rough VPP CLI sketch follows the list):

  • L2 steering – steers all traffic that comes to an interface
  • L3 steering – steers traffic that matches destination IP / subnet
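
For illustration, the difference between the two modes could look roughly like this in VPP's SR CLI (the SIDs, prefix, and interface name below are made up, and the exact CLI syntax may differ between VPP versions):

```
# an SRv6 policy with its own binding SID (BSID) and a segment list
sr policy add bsid c1::999:1 next c2::100 next c3::100 encap

# L2 steering: all traffic received on the given interface enters the policy
sr steering add l2 GigabitEthernet0/8/0 via bsid c1::999:1

# L3 steering: only traffic destined to the given prefix enters the policy
sr steering add l3 2001:db8:85a3::/64 via bsid c1::999:1
```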

This issue was created to start a discussion on the options we have:

1. Using L2 steering, we can implement service chains similarly to the existing l2xconnect SFC renderer implementation (see SFC examples), where the chain always starts on an interface, either in a pod or on an external DPDK interface. The only difference between the SRv6 and l2xconnect rendering would be the underlying technology used to forward frames in the chain. The SRv6 rendering would still bring some benefits:

  • we would get rid of VXLAN tunnels between the nodes and VXLAN encapsulation
  • with SRv6, we are able to load-balance traffic between multiple chain instances in case the pods in the chain have multiple replicas – this is not possible with l2xconnect

On the other hand, the l2xconnect renderer may be a bit faster performance-wise, since l2xconnect is the fastest way of forwarding packets on VPP.

2. L3 steering is quite different, since it does not allow us to define an interface as the start of the chain. With L3 steering, only packets that match a destination IP / subnet can go into the chain. The destination IP / subnet can be:

  • 2.1 IP address of the pod in which the chain ends
  • 2.2 Virtual IP address (similar to k8s Services) or a subnet
  • 2.3 IP address / subnet external to the k8s cluster (e.g. customer/service provider network)

Option 2.1 brings some challenges, e.g. destination IP address conflicts in the routing table. Also, I'm not sure about the use case for this, since the behavior would depend on the dynamic IPAM allocation of the pod, which is not very usable in real-world scenarios.

Option 2.2 would mean allocating a virtual IP address or subnet for each service chain. Any traffic destined to this IP / subnet (from a pod or from an external interface) would be steered into the service chain. The virtual IP / subnet could be:

  • a) statically defined as part of the CRD definition of the service chain; it would be up to the user to choose an IP/subnet that does not conflict with anything else
  • b) automatically allocated by Contiv, from a pre-defined subnet range
  • c) the service chain definition could reference a k8s service, where the steering IP address would be a cluster IP allocated to that service by k8s. In this case, even DNS records for the given service could be used to point the traffic into the chain

Option 2.3 is in fact the same as 2.2 a), with the difference that the statically defined IP address/subnet is not virtual, but has its purpose outside the scope of the k8s cluster. This can be very helpful for creating service chains interconnected with e.g. legacy networking infra or other cloud-native infrastructure.

@fgschwan
Collaborator

Just want to say that L2 and L3 steering do not overlap in use cases, and to cover all cases we need both. When a chain ends using End.DX2, L2 steering must be used, because End.DX2 expects an Ethernet header in the packet. Similarly, End.DX4/End.DX6 expect the packet to NOT have an Ethernet header, so L3 steering must be used.
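
For reference, a rough sketch of the two kinds of chain endings in VPP localsid terms (SIDs, interface names and next hops are made up; exact parameters may differ between VPP versions):

```
# End.DX2: decapsulate and L2 cross-connect, so the inner frame must carry an Ethernet header
sr localsid address c3::d2 behavior end.dx2 GigabitEthernet0/8/0

# End.DX6 / End.DX4: decapsulate and L3 cross-connect, so the inner packet is plain IPv6/IPv4
sr localsid address c3::d6 behavior end.dx6 GigabitEthernet0/8/0 2001:db8::2
sr localsid address c3::d4 behavior end.dx4 GigabitEthernet0/8/0 10.10.1.2
```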

Regarding 2.2 c)

c) the service chain definition could reference a k8s service, where the steering IP address would be a cluster IP allocated to that service by k8s.

I understood this as: we have 1 service and 1 SFC chain that share the same virtual IP. The service and the SFC chain don't share anything except the cluster IP address, and they are not connected serially. We would run into IP collisions again, or at least at some point a packet would not know where to go (to the service or to the chain?).
@rastislavszabo Did you mean this option in a different way?

@rastislavs
Collaborator Author

rastislavs commented Sep 10, 2019

L2 and L3 steering do not overlap in use cases

agree, L2 and L3 steering are completely different use-cases

Regarding 2.2 c)

The service and the SFC chain don't share anything except the cluster IP address, and they are not connected serially

yes

We would run into IP collisions again

The k8s service should have no endpoints in this case (no pods linked to that service). We can document that and may, for instance, decide not to render the SFC (and log an error) in case there is an endpoint for such a service.

--
BTW my preferred option is 2.2.a + 2.3 for now

@fgschwan
Collaborator

The k8s service should have no endpoints in this case (no pods linked to that service). We can document that and may, for instance, decide not to render the SFC (and log an error) in case there is an endpoint for such a service.

So you meant a dummy service with no endpoints that provides the IP-address-related functionality (cluster IP provider, cluster IP assignment/deletion, DNS, ...). Basically, reuse the already working service configuration functionality for SFC chain purposes.
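
For illustration, a selector-less Service roughly like the sketch below would get a cluster IP and a DNS record but no endpoints, which is what the "dummy service" idea boils down to (the name and port are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sfc-chain      # hypothetical name, referenced from the SFC definition
spec:
  # no selector => k8s creates no Endpoints for this Service,
  # but still allocates a cluster IP and a DNS record,
  # which the SFC renderer could use as the L3 steering address
  ports:
    - port: 80
```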

@rastislavs
Collaborator Author

So you meant a dummy service with no endpoints that provides the IP-address-related functionality (cluster IP provider, cluster IP assignment/deletion, DNS, ...). Basically, reuse the already working service configuration functionality for SFC chain purposes.

exactly

@ahsalam

ahsalam commented Sep 16, 2019

Thanks Rasto for clearly explaining the issue.
I believe options 2.2.a and 2.2.b are probably the cleanest solutions.
The virtual IP will serve as a binding SID, which we can reference using L2/L3 steering based on the use case, as mentioned by Filip.

@fgschwan
Collaborator

The virtual IP will serve as a binding SID, which we can reference using L2/L3 steering based on the use case, as mentioned by Filip.

@ahabdels
The virtual IP is not the SRv6 policy binding SID. The policy gets its own binding SID that is generated similarly to other SRv6 policies (we can make it the same IP address if we really want to, I just need to change the code of the BSID getter). The virtual IP address is used for steering traffic into the policy (steering <=> if dest == virtualIP, then use the policy with BSID xyz) and it is only needed for L3 steering (L2 steering doesn't need IP addresses).
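
In CLI terms the distinction could look roughly like this (addresses are made up; syntax may vary between VPP versions): the policy keeps its own BSID, while the virtual IP shows up only as the match in the steering rule:

```
# the policy's BSID (c1::999:1) is an SRv6 construct, not the virtual IP
sr policy add bsid c1::999:1 next c2::100 next c3::100 encap

# the virtual IP (2001:db8:cafe::1) is just the L3 steering match for this policy
sr steering add l3 2001:db8:cafe::1/128 via bsid c1::999:1
```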

@rastislavs
Collaborator Author

Let's assume we use the CRD approach and define the steering criteria as a destination IP/subnet (2.2.a).

For that we need to modify the SFC data model: servicefunctionchain.proto

What would be the preferred way to modify it?

a) add a new field to the ServiceFunctionChain message, e.g. steering_ip or steering_subnet, or something similar, and provide

b) define steering as a special type of ServiceFunction (like Pod and ExternalInterface), e.g. L3Steering

Or are there any other suggestions?

@fgschwan
Collaborator

Or are there any other suggestions?

You can also make it part of the first ServiceFunction. I mention this only for completeness of the choices, but it is not my preferred option. I would prefer option a), i.e. to have it as a top-level field. This setting is IMHO related to the whole chain. You could e.g. use it as the next hop when going into an inner (= not the first or last pod/interface in the chain) service, so the service can use this IP address/subnet for traffic identification or other purposes. The current SRv6 implementation doesn't use it; it just pushes packets into the inner services and expects the services to handle every packet from the incoming interface.
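
For concreteness, a rough sketch of what option a) could look like in servicefunctionchain.proto (the field name and number are placeholders, not an agreed design):

```proto
message ServiceFunctionChain {
    // ... existing fields (name, chain of service functions, ...) ...

    // option a): destination IP address or subnet used for L3 steering
    // into the chain, e.g. "2001:db8:cafe::/64" (hypothetical field)
    string steering_ip = 10;
}
```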
