
Support capability to incrementally migrate legacy services to a service mesh #2172

Closed
4 tasks
addozhang opened this issue Dec 9, 2020 · 10 comments

@addozhang
Contributor

addozhang commented Dec 9, 2020

Please describe the Improvement and/or Feature Request

This is a supplementary description for #2012 .

As described before, we are migrating a Spring Cloud system running on Kubernetes to the mesh, and we would like the migration to be as smooth as possible, for example starting with only part of the services (by adding the injection annotation to them) while having all namespaces monitored by osm.

Currently, services communicate with each other via ip:port, and the HOST header carries the same ip:port. We can upgrade our SDK to inject the real service name (foo) into the header, but we cannot make this SDK upgrade mandatory for the development teams.
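For illustration, a minimal Go sketch of what such an SDK change might look like: the request is still sent to the discovered ip:port, but the Host header is overridden with the logical service name. The helper name and values here are hypothetical, not part of our actual SDK.

```go
package sketch

import "net/http"

// newServiceRequest builds a request to a discovered instance (ip:port from
// Eureka-style discovery) while overriding the Host header with the logical
// service name, e.g. "foo". Hypothetical SDK helper; values are illustrative.
func newServiceRequest(instanceAddr, serviceName, path string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, "http://"+instanceAddr+path, nil)
	if err != nil {
		return nil, err
	}
	// net/http sends req.Host as the Host header instead of the URL's host.
	req.Host = serviceName
	return req, nil
}
```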

I think this is a very common situation when migrating an existing system.

Overall, there will be many different cases during the migration:

# Client SDK Client Sidecar Server Sidecar Traffic Control Support
1 legacy x x x
2 legacy x x
3 legacy x x
4 legacy x
5 new x x x
6 new x x
7 new x
8 new

To make all of these cases work, we need some features:

Related issues:
#2986

Scope (please mark with X where applicable)

  • New Functionality [ ]
  • Install [ ]
  • SMI Traffic Access Policy [ ]
  • SMI Traffic Specs Policy [ ]
  • SMI Traffic Split Policy [ ]
  • Permissive Traffic Policy [ ]
  • Ingress [ ]
  • Egress [ ]
  • Envoy Control Plane [X]
  • CLI Tool [ ]
  • Metrics [ ]
  • Certificate Management [ ]
  • Sidecar Injection [ ]
  • Logging [ ]
  • Debugging [ ]
  • Tests [ ]
  • CI System [ ]
  • Project Release [ ]

Possible use cases

@shashankram
Member

To make all of these cases work, we need some features:

This will be supported after #2096 is implemented

This can be achieved using #1001

  • one more filter chain in the outbound listener to match the pod CIDR as an IP range (cases 3, 4)

We do not plan to support multiple outbound listeners at this moment. We are using filter chains per IP range. It seems as though you would like an additional filter_chain matching pod IPs.

  • destination port as a filter chain match in the inbound listener, to support cases 2 and 6

Destination port based filter chain matching is already supported.
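For readers less familiar with Envoy's listener model, here is a rough sketch of the two kinds of filter chain match being discussed, using go-control-plane v3 types. The CIDR and port values are assumed for illustration; this is not OSM source code.

```go
package sketch

import (
	xds_core "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	xds_listener "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// outboundPodCIDRMatch builds a FilterChainMatch that would catch outbound
// traffic addressed directly to pod IPs. Hypothetical helper, not OSM code;
// 10.128.0.0/12 is an assumed cluster pod CIDR.
func outboundPodCIDRMatch() *xds_listener.FilterChainMatch {
	return &xds_listener.FilterChainMatch{
		PrefixRanges: []*xds_core.CidrRange{{
			AddressPrefix: "10.128.0.0",
			PrefixLen:     wrapperspb.UInt32(12),
		}},
	}
}

// inboundDestinationPortMatch matches inbound traffic by its destination port.
// Hypothetical helper; 8080 is an assumed service port.
func inboundDestinationPortMatch() *xds_listener.FilterChainMatch {
	return &xds_listener.FilterChainMatch{
		DestinationPort: wrapperspb.UInt32(8080),
	}
}
```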

@addozhang
Contributor Author

To make all of these cases work, we need some features:

This will be supported after #2096 is implemented

Yes, I saw your reply in another issue.

This can be achieved using #1001

ok

  • one more filter chain in the outbound listener to match the pod CIDR as an IP range (cases 3, 4)

We do not plan to support multiple outbound listeners at this moment. We are using filter chains per IP range. It seems as though you would like an additional filter_chain matching pod IPs.

I meant adding one more filter chain, not another outbound listener.
You're right, one more filter chain with the pod CIDR (such as 10.128.0.0/12) would handle this case. But I have not found a mechanism to compute 10.128.0.0/12 yet.
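One possible way to obtain it, assuming the cluster publishes per-node pod CIDRs on the Node objects (not every CNI does), would be to read Node.Spec.PodCIDR / PodCIDRs via client-go. A rough sketch, not existing osm-controller code:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podCIDRs lists the per-node pod CIDRs from the Node spec. Hypothetical
// helper under the assumption above; not existing osm-controller code.
func podCIDRs(ctx context.Context, client kubernetes.Interface) ([]string, error) {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var cidrs []string
	for _, node := range nodes.Items {
		if len(node.Spec.PodCIDRs) > 0 {
			cidrs = append(cidrs, node.Spec.PodCIDRs...)
			continue
		}
		if node.Spec.PodCIDR != "" {
			cidrs = append(cidrs, node.Spec.PodCIDR)
		}
	}
	return cidrs, nil
}
```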

  • destination port as a filter chain match in the inbound listener, to support cases 2 and 6

Destination port based filter chain matching is already supported.

Yes, I found it in a commit from a few days ago. Great job.

@shashankram
Member

@addozhang for the service specific filter chain, refer to:

func (lb *listenerBuilder) getOutboundHTTPFilterChainMatchForService(dstSvc service.MeshService) (*xds_listener.FilterChainMatch, error) {

I am wondering why you need the pod CIDR. OSM requires the destination to always be a service fronting the pods. What is the use case for accessing pods directly without using a k8s Service?

@addozhang
Contributor Author

@shashankram currently, our system runs Spring Cloud on OpenShift. Service discovery is Netflix Eureka, which maintains service instances as ip and port; in our case, the ip is the pod's IP.

So we need a filter chain with the pod CIDR to match pod IP traffic and handle it in the mesh.

I think this is a common case for microservices implemented with Spring Cloud.

@shashankram
Member

@shashankram currently, our system runs Spring Cloud on OpenShift. Service discovery is Netflix Eureka, which maintains service instances as ip and port; in our case, the ip is the pod's IP.

So we need a filter chain with the pod CIDR to match pod IP traffic and handle it in the mesh.

I think this is a common case for microservices implemented with Spring Cloud.

I see, so the client app is using Netflix Eureka for service discovery and making API requests directly to the pod ip:port instead of the Kubernetes service name?

@addozhang
Contributor Author

@shashankram currently, our system runs Spring Cloud on OpenShift. Service discovery is Netflix Eureka, which maintains service instances as ip and port; in our case, the ip is the pod's IP.
So we need a filter chain with the pod CIDR to match pod IP traffic and handle it in the mesh.
I think this is a common case for microservices implemented with Spring Cloud.

I see, so the client app is using Netflix Eureka for service discovery and making API requests directly to the pod ip:port instead of the Kubernetes service name?

Yes, correct. This is why I am following the latest code to find a solution. I think we can also gradually abandon Netflix Eureka by delegating discovery to osm-controller and Envoy. Once all services are migrated to the mesh, we will drop Eureka by upgrading our SDK.

But before that, we need the migration to go smoothly, and we really hope osm can help us. :) As far as I know, a lot of companies are facing the same issue, because we shared the same technology roadmap: monolith, microservices, Kubernetes, and now mesh.

@shashankram
Member

shashankram commented Dec 10, 2020

@shashankram currently, our system runs Spring Cloud on OpenShift. Service discovery is Netflix Eureka, which maintains service instances as ip and port; in our case, the ip is the pod's IP.
So we need a filter chain with the pod CIDR to match pod IP traffic and handle it in the mesh.
I think this is a common case for microservices implemented with Spring Cloud.

I see, so the client app is using Netflix Eureka for service discovery and making API requests directly to the pod ip:port instead of the Kubernetes service name?

Yes, correct. This is why I am following the latest code to find a solution. I think we can also gradually abandon Netflix Eureka by delegating discovery to osm-controller and Envoy. Once all services are migrated to the mesh, we will drop Eureka by upgrading our SDK.

But before that, we need the migration to go smoothly, and we really hope osm can help us. :) As far as I know, a lot of companies are facing the same issue, because we shared the same technology roadmap: monolith, microservices, Kubernetes, and now mesh.

I understand your use case for pod IP based routing better now. We will discuss this as a part of our next milestone.

Note that OSM's current design requires destinations to be services. To support IP-based destinations, we need to think through the design and architecture before making changes to the code.

@addozhang
Contributor Author

@shashankram Thanks.

@shashankram shashankram changed the title Smooth migration to mesh Support capability to incrementally migrate legacy services to a service mesh Dec 10, 2020
@shashankram shashankram removed their assignment Dec 10, 2020
@draychev draychev added this to Planned & Scoped in OSM Roadmap via automation Mar 5, 2021
@draychev draychev added this to the v0.9.0 milestone Mar 5, 2021
@draychev draychev added the size/XL 20 days (4 weeks) label Mar 5, 2021
@draychev draychev modified the milestones: v0.9.0, v0.10.0 May 5, 2021
@draychev draychev moved this from Planned & Scoped to Research & Scoping in OSM Roadmap Jun 2, 2021
@draychev draychev moved this from Research & Scoping to Planned & Scoped in OSM Roadmap Jun 2, 2021
@draychev draychev removed this from the v0.10.0 milestone Jun 22, 2021
@github-actions

This issue will be closed due to a long period of inactivity. If you would like this issue to remain open then please comment or update.

@github-actions github-actions bot added the stale label Feb 26, 2022
@github-actions

github-actions bot commented Mar 5, 2022

Issue closed due to inactivity.

@github-actions github-actions bot closed this as completed Mar 5, 2022