Limitations around TCP Services make Istio pretty unusable on larger multi-tenant clusters #9784

Closed
Stono opened this issue Nov 7, 2018 · 43 comments
Labels
area/networking, lifecycle/staleproof

Comments

@Stono
Contributor

Stono commented Nov 7, 2018

Hi,
I've talked about this a few times now in different forums but never really collated it all in one place. We really need to find a solution, otherwise we may well have to pull Istio out altogether.

In summary, these limitations make Istio pretty unusable for us. They're all based on the fact that TCP services are effectively a cluster-wide concern because they're listening on 0.0.0.0:port:

  • A TCP headless service cannot have the same port as any HTTP service on the entire cluster
  • A TCP headless service cannot have the same port as another TCP service on the entire cluster

To put this into a practical example so you can see just how painful it is: we have two namespaces that both have ZooKeeper in them. One is for Kafka, and one is for Solr.

Neither of these namespaces uses istio/istio-proxy (because they need to talk to the nodes directly by FQDN hostname or IP, which we know Istio doesn't do). However, applications in other namespaces, which are on the mesh, need to be able to talk to both of these.

So you have a situation where an application in the mesh needs to be able to talk to two different sets of Zookeepers on a cluster (perfectly reasonable, different zookeeper clusters for different concerns).

  • [ Application A | envoy ] -> needs to talk to [ zk-0.zookeeper-headless.kafka, zk-1.zookeeper-headless.kafka etc ]
  • [ Application A | envoy ] -> also needs to talk to [ zk-0.zookeeper-headless.solr, zk-1.zookeeper-headless.solr etc ]

Just to throw some more into the mix, the solr-headless service, which Application A needs to talk to, happens to be on port 8080.

All of the above is actually impossible (as far as I can see) with Istio, as you can see from the discovery logs:

    "ProxyStatus": {
        "pilot_conflict_outbound_listener_http_over_current_tcp": {
            "0.0.0.0:8080": {
                "proxy": "sourcing-preferences-68f97fb5d8-wkttd.sourcing-preferences",
                "message": "Listener=0.0.0.0:8080 AcceptedHTTP=istio-pilot.istio-system.svc.cluster.local RejectedTCP=solr-headless.search-solr.svc.cluster.local HTTPServices=1"
            }
        },
        "pilot_conflict_outbound_listener_tcp_over_current_tcp": {
            "0.0.0.0:2181": {
                "proxy": "sourcing-preferences-68f97fb5d8-wkttd.sourcing-preferences",
                "message": "Listener=0.0.0.0:2181 AcceptedTCP=zookeeper-headless.search-solr.svc.cluster.local RejectedTCP=zookeeper-headless.kafka.svc.cluster.local TCPServices=1"
            },
        },
    },

The only solution I can see here is to have all headless services on different TCP ports, which is completely unmanageable at any sort of scale (we have hundreds of applications on a single cluster, each managed by a different team; they should not need to coordinate to make sure ports don't conflict with HTTP or TCP ports defined more broadly on the mesh).

Semi-related issues:

And probably more...

@Stono
Contributor Author

Stono commented Nov 7, 2018

Just to elaborate even further on TCP services and the multi-tenancy issues: despite the conflict above, in this example Application A is attempting to connect to zookeeper-0.zookeeper-headless.search-solr, but as you can see from the logs, Envoy is treating it as zookeeper-headless.kafka, as that "cluster" is mesh-wide on 2181.

Therefore we're in a situation where the connection to search-solr is working by nothing more than a side effect of the kafka service, and people deploying in the kafka namespace can affect connections to the search-solr instances (let's say they change their DestinationRule to enable mTLS?).

[2018-11-07 11:52:10.193][22][debug][filter] src/envoy/tcp/mixer/filter.cc:98] [C130] Called tcp filter onNewConnection: remote 10.198.14.12:33668, local 10.198.13.31:2181
[2018-11-07 11:52:10.193][22][debug][filter] external/envoy/source/common/tcp_proxy/tcp_proxy.cc:305] [C130] Creating connection to cluster outbound|2181||zookeeper-headless.kafka.svc.cluster.local
[2018-11-07 11:52:10.193][22][debug][upstream] external/envoy/source/common/upstream/original_dst_cluster.cc:86] Created host 10.198.13.31:2181.
[2018-11-07 11:52:10.193][22][debug][connection] external/envoy/source/common/network/connection_impl.cc:572] [C131] connecting to 10.198.13.31:2181
[2018-11-07 11:52:10.193][17][debug][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:952] membership update for TLS cluster outbound|2181||zookeeper-headless.kafka.svc.cluster.local
[2018-11-07 11:52:10.193][17][debug][upstream] external/envoy/source/common/upstream/original_dst_cluster.cc:41] Adding host 10.198.13.31:2181.
[2018-11-07 11:52:10.193][22][debug][connection] external/envoy/source/common/network/connection_impl.cc:581] [C131] connection in progress
[2018-11-07 11:52:10.193][22][debug][main] external/envoy/source/server/connection_handler_impl.cc:218] [C130] new connection
[2018-11-07 11:52:10.193][22][debug][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:952] membership update for TLS cluster outbound|2181||zookeeper-headless.kafka.svc.cluster.local
[2018-11-07 11:52:10.194][22][debug][connection] external/envoy/source/common/network/connection_impl.cc:466] [C131] connected
[2018-11-07 11:52:10.194][22][debug][filter] src/envoy/tcp/mixer/filter.cc:105] Called tcp filter completeCheck: OK

@costinm
Contributor

costinm commented Nov 7, 2018

The root cause of the limitation is that for headless services, the connection is initiated to the IP of the endpoint. We can't create one listener for each endpoint - some TCP services may have a large number of replicas, and besides, the endpoints change very often. For normal services, we listen on and intercept the cluster IP - which is stable - and Istio EDS finds the endpoint IP.

To solve this for TCP, the plan is to use the 'on-demand' configuration: Envoy will detect requests going to IPs without a listener, and make a call to Pilot to get the config dynamically. Unfortunately this will NOT solve the problem when an HTTP and a TCP service share the same port, because HTTP listens on 0.0.0.0:port.
We could switch HTTP back to VIP:port - which would also reduce the size of RDS - but the first step is to get on-demand going. The work has started, but it'll likely take several weeks to get anything, so the most optimistic guess is 1.2.

There is a separate effort - which may land in 1.1 (or 1.1.x) - where we isolate the namespaces. When this is enabled, apps in a namespace A will only automatically get configs for other services and endpoints in the same namespace. To access a service in any other namespace you would need to add an explicit configuration (https://docs.google.com/document/d/1x8LI3T7SHW-yDrrt3ryEr6iSs9ewMCt4kXtJpIaSSlI/edit?ts=5be2c288#heading=h.m6yvqjh71gxi), similar to egress.
#9361 is the initial PoC using annotations - the main goal is to increase scalability to ~5000+ services, but also to avoid any cross-namespace issues.
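
For illustration, that kind of explicit cross-namespace configuration would, expressed with the Sidecar API that later landed, look roughly like the sketch below (the namespace and hostnames are illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: sourcing-preferences     # illustrative consumer namespace
spec:
  egress:
  - hosts:
    - "./*"                                                # everything in the local namespace
    - "istio-system/*"                                     # control plane and telemetry
    - "kafka/zookeeper-headless.kafka.svc.cluster.local"   # explicit import from another namespace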

For now, separate ports are the only option with 1.0.x. Namespace isolation appears to be a small change and may be backported - but the current target is 1.1.x.

@Stono
Contributor Author

Stono commented Nov 7, 2018

Hey @costinm - thanks for the detailed response.

I'm not sure if here or the doc is the right place to discuss it further. The proposal seems great for scaling reasons, but if I'm reading the doc right, I'm not sure scoping will actually solve the issue above if you're limited to just "public" and "private".

Take the following example, where we have an app that needs to talk to ZooKeeper in both the Kafka and Solr namespaces; in order to achieve that, both services would need to be "public":

Namespace "MyAppThatTalksToKafkaAndSolr"

Namespace "Kafka"

  • TCP Service ZooKeeper(scope=public)

Namespace "Solr"

  • TCP Service ZooKeeper(scope=public)

Surely at that point both zookeeper.kafka and zookeeper.solr services will once again globally conflict?

@JasonRD

JasonRD commented Dec 7, 2018

I have a similar scenario, but my 'kafka' and 'solr' are not in the k8s cluster. Looking forward to the solution.

@GavinLin168

I have a similar scenario; any solution or suggestion for the issue? Thanks!

@morvencao
Member

/cc

1 similar comment
@nishanth-pinnapareddy

/cc

@vngzs

vngzs commented May 20, 2019

/cc

More of this in the wild: a recent post on Reddit from a confused user.

@containerpope

Are there any updates on this?

@Stono
Contributor Author

Stono commented Jun 4, 2019

@mkjoerg I think #13970 will help with this. You'll lose some metrics etc. around the connection, but who cares, if it works.

@hzxuzhonghu
Member

Sidecars can be used to limit egress services.

@iandyh
Contributor

iandyh commented Jun 13, 2019

@hzxuzhonghu If I read it right, using Sidecar will require applications to explicitly list all the services they want to connect to so that this issue can be avoided?

@hzxuzhonghu
Member

Yes, you need to specify all the services you want to connect to; we can still set namespace-scoped services.

@iandyh
Contributor

iandyh commented Jun 13, 2019

@hzxuzhonghu This is making Istio even more complicated to use...

@iandyh
Contributor

iandyh commented Jun 21, 2019

@hzxuzhonghu Is there any reason why Istio chose to group all the clusters by port? Currently finding the route is port -> RDS (cluster) -> endpoint.

Why not directly match the cluster, like the inbound listeners?

@hzxuzhonghu
Member

This is determined by Envoy. Personally, I think for outbound there may be many clusters on the same port, but for inbound there can be only one, so outbound traffic is more complicated. I am not sure if this is right, ha.

@costinm
Contributor

costinm commented Jul 25, 2019

As I mentioned earlier in the thread, the fundamental problem is how to route to 'stateful sets'. We need to intercept the traffic somehow. The client app gets the IP of a specific instance and initiates a connection. In Envoy we need to know which service the IP belongs to, so we can apply routes/policy/telemetry. Distributing all the IPs of a stateful set and creating a listener for each is too expensive (mem usage, scale). Doing an on-demand lookup is an option, but the implementation is tricky and moving slowly; we are starting with on-demand RDS, which won't be ready in 1.3 and is now focused on scaling to a large number of vhosts.

For HTTP we have some more options - but in the TCP case the only option I currently know of is using the port. The port does not have to be unique per mesh - but we must have a way for a particular Envoy to map a port to a service name. The Sidecar API allows some options on what to import, so if you have namespaceA using port 8000 for serviceA, and namespaceB using port 8000 for serviceB, a client could import either A or B and will get the corresponding service. The 'whitebox' mode for Sidecar also allows more customization: you can choose the local ports using Sidecar.egress.port.

HTTP and VIPs are easy and don't depend as much on the port. If anyone has a viable proposal for handling stateful sets without using the port, it would be very valuable to get it done - but until we find a solution, all we can do is improve usability in selecting which ports are visible (isolation) and allow more flexibility in mapping to local port numbers.
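
A rough sketch of that kind of mapping, assuming a client namespace that only imports a lowercased stand-in for serviceA and uses the 'whitebox' style (Sidecar.egress.port with captureMode NONE) so the application dials the chosen local port directly; all names and numbers are illustrative:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: client-ns                 # illustrative client namespace
spec:
  egress:
  - port:
      number: 8000                     # local port the application connects to
      protocol: TCP
      name: tcp-service-a
    bind: 127.0.0.1                    # whitebox mode: listener bound locally, no iptables capture
    captureMode: NONE
    hosts:
    - "namespace-a/service-a.namespace-a.svc.cluster.local"
  - hosts:
    - "./*"                            # everything else in the local namespace, default capture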

@hzxuzhonghu
Member

Distributing all the IPs of a stateful set and creating a listener for each is too expensive ( mem usage, scale )

This doesn't seem like a problem. As for StatefulSets, the common replica count is about 3, 5, or 7. This figure cannot be too big, as the cost of keeping them consistent is very high.

@Demonian

Demonian commented Aug 1, 2019

I think this issue should be prioritized, because it makes usage of headless services and StatefulSets very unscalable and error-prone.

Here is my scenario, which is pretty much the same as @Stono mentioned in this thread:
I have two services: one is a headless Service with TCP port 7003 (api-api), which is used by a StatefulSet, and the second is a normal Service with HTTP port 7003 (sandbox):
Services configs

apiVersion: v1
kind: Service
metadata:
  name: sandbox
  namespace: sandbox
  labels:
    service: sandbox
spec:
  ports:
  - name: http-sandbox
    port: 7003
    targetPort: 7003
  
  - name: https-sandbox-health
    port: 7002
    targetPort: 7002
  selector:
    service: sandbox

---
apiVersion: v1
kind: Service
metadata:
  name: api-api
  namespace: sandbox
  labels:
    app: api
    type: api
spec:
  clusterIP: None
  ports:
  - name: rest-ext
    port: 7003
    targetPort: 7003
  - name: rest-int
    port: 7002
    targetPort: 7002
  - name: rpc
    port: 8444
    targetPort: 8444
  selector:
    app: api
    type: api

Pilot errors
I can see many such errors on Pilot with this config:

"pilot_conflict_outbound_listener_tcp_over_current_http": {
            "0.0.0.0:7003": {
                "proxy": "api-gw-784886849b-psh7q.sandbox",
                "message": "Listener=0.0.0.0:7003 AcceptedTCP=api-api..svc.cluster.local RejectedHTTP=sandbox.sandbox.svc.cluster.local TCPServices=1"
            }
        }

Conclusion
So I have figured out what is happening: both the TCP 7003 port and the HTTP 7003 port are interpreted as a 0.0.0.0:7003 Envoy listener, which creates a conflict. But the main problem here is that all traffic management configuration will not work for sandbox, because the route for it will be missing and the traffic will be treated as raw TCP.

Workaround
The only real workaround is to use a unique port for every headless service across the cluster.

Maybe we should think about at least a more scalable workaround for this.

cc @cezarsa @hzxuzhonghu @Stono

@hzxuzhonghu
Member

hzxuzhonghu commented Aug 2, 2019

@Demonian Your problem is that Istio does not support using the same port for both HTTP and TCP.

Edit: The cause is the headless service with a TCP port.

@Demonian

Demonian commented Aug 2, 2019

@hzxuzhonghu It doesn't support this only when there is a headless service with TCP, because a normal service with TCP will be mapped to ip:port listeners.

@hzxuzhonghu
Member

Yes, you are right: for a headless service with multiple instances, the TCP port will conflict.

@hzxuzhonghu hzxuzhonghu added this to the 1.3 milestone Aug 2, 2019
@iandyh
Contributor

iandyh commented Aug 5, 2019

Our current approach is to add the networking.istio.io/exportTo: "." annotation to all headless services. This will prevent Istio proxies in other namespaces from fetching these configurations.
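
For illustration, on a headless Service that annotation looks roughly like this sketch (the service name, namespace, and port are made up):

apiVersion: v1
kind: Service
metadata:
  name: zookeeper-headless             # illustrative headless service
  namespace: kafka
  annotations:
    networking.istio.io/exportTo: "."  # only visible to proxies in this namespace
spec:
  clusterIP: None
  ports:
  - name: tcp-client
    port: 2181
    targetPort: 2181
  selector:
    app: zookeeper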

@Demonian

Demonian commented Aug 5, 2019

Yes, but this means that every StatefulSet should be in a separate namespace, which is not super convenient in our case.

@costinm
Contributor

costinm commented Aug 28, 2019

I think I commented already, but to be clear:

  • Headless services using HTTP should work fine. We generate 0.0.0.0:port for HTTP and route on the host header.

  • Headless TCP services using isolation - exportTo or Sidecar.egress - should also work fine.

  • egress (or exportTo) for 2 headless services on the same port in a single app does not currently work. There are 2 solutions I know of - on-demand LDS (where Envoy asks for config based on the IP:port), or generating a listener with all endpoints of the 2 conflicting services. Shriram had a PR attempting to do the latter - but it had some problems. It can be changed to generate such config only for the apps that are in this specific situation (having to import 2 headless services on the same port), while leaving all other apps as they are now. The main concern with the second solution is scalability and memory use - however, if it is targeted at only the narrow set of apps that are in this situation, I think it can be viable.

@rshriram
Member

rshriram commented Aug 28, 2019

From the original motivation of the issue

A TCP headless service cannot have the same port as any HTTP service on the entire cluster

this has been fixed with the protocol sniffing functionality introduced by @yxue in the current 1.3 release.

A TCP headless service cannot have the same port as another TCP service on the entire cluster

This is still an issue, and #16242 attempted to take an approach that should solve it for the most common case (99% of usage), where you typically have fewer than 5-7 pods for a headless service like etcd/elasticsearch/postgres/cassandra etc. Such headless services do not use autoscaling and they are most often StatefulSets. However, there was a lot of opposition to that PR, as it was merely buying time by postponing the occurrence of the problem until one had more than 8 pods for a headless service. This is the only concern that I feel is valid.

Concerns about scalability and memory use are overinflated. It is no different from a scenario where the user creates 7-8 service entries for the headless service manually, with the pod IPs [this is very bad UX, btw]. And unless we hit a situation where an end user has 100 headless services on a cluster where each headless service has 5 pods, I would not be worried about scalability.
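
(For context, the manual workaround referred to here could be sketched as a single ServiceEntry carrying the pod IPs as static endpoints; the hostname, port, and addresses below are purely illustrative.)

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: zookeeper-pods                 # illustrative
  namespace: kafka
spec:
  hosts:
  - zookeeper-headless.kafka.svc.cluster.local
  location: MESH_INTERNAL
  resolution: STATIC
  ports:
  - number: 2181
    name: tcp-client
    protocol: TCP
  endpoints:
  - address: 10.0.0.11                 # illustrative pod IPs, one entry per pod
  - address: 10.0.0.12
  - address: 10.0.0.13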

That said, I for sure do not want to make this a custom solution only when there are two imports on the same port, or for the so-called "narrow set of apps that have this issue". Asking end users to learn Sidecar is not an answer. The system should just work out of the box. Sidecar is an optimization and not a panacea to fix problems. Secondly, there is nothing preventing end users from launching two headless services in the same namespace, where each one could be offering something on the same port (like a JMX metrics thingy, for example).

@andraxylia
Contributor

We discussed this issue in the networking WG meeting and there are some scale implications. @howardjohn will pick up a PR from @rshriram and do some testing so we can understand up to what scale we can enable listeners per IP:port.

@Stono and others who thumbed up the issue: meanwhile, it would be good to understand what your scale is: how many services / how many headless services, how many pods per headless service, do you use auto-scaling, and do you use or intend to use the Sidecar API?

@Stono
Contributor Author

Stono commented Sep 3, 2019

Hola,
We largely gave up on headless services with Istio not long after raising this issue. We now use a combination of excludePorts annotations on the sidecars (to, say, exclude 6379 for Redis clusters) and statefulset.kubernetes.io/pod-name: kafka-0 selectors on Services to create semi-static service IPs for each Kafka pod in the StatefulSet.
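
For illustration, the per-pod Service trick looks roughly like this sketch (name, namespace, and port are made up; statefulset.kubernetes.io/pod-name is the label the StatefulSet controller stamps on each pod):

apiVersion: v1
kind: Service
metadata:
  name: kafka-0                        # one Service per StatefulSet pod
  namespace: kafka
spec:
  ports:
  - name: tcp-broker
    port: 9092
    targetPort: 9092
  selector:
    app: kafka
    statefulset.kubernetes.io/pod-name: kafka-0   # selects exactly the kafka-0 pod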

We also make heavy use of Sidecar; effectively we default to namespace-level isolation and expand out, whereas when I originally wrote this issue we didn't have that, and it was a cluster-wide problem (e.g. separate teams could deploy into separate namespaces but break each other's workloads).

In terms of scale, we have 6 Kafka brokers right now, and will probably scale to around 10 based on current estimates.

No AutoScaling.

@adamglt
Contributor

adamglt commented Sep 3, 2019

Hey - we're also using stateful sets pretty extensively.
For Kafka, there's ~5-10 clusters, each with 3-7 brokers.

Our main use is various Akka clusters (our own apps).
In dev there's anywhere from 50-200 SSets, each with ~3-11 instances.
Prod is smaller - so about 30% of that.

Our internal platform allocates unique SSet ports to avoid collisions - but we can't always do that with 3rd party stuff (e.g. Kafka).

We do use autoscaling in our own apps, and no Sidecar API usage yet.

@howardjohn howardjohn modified the milestones: 1.3, 1.4 Sep 13, 2019
@howardjohn
Member

#16845 landed in master, which may address some of these issues. What we still need to do is add testing for this, as well as measure the performance impact.

@howardjohn
Member

howardjohn commented Oct 1, 2019

CPU usage: https://snapshot.raintank.io/dashboard/snapshot/osN1j81pNaAMf9HAtnK6Er8cFhDLDubb?panelId=6&fullscreen&orgId=2
Before is with the new behavior, after is with it disabled.

The test case is very much a worst case -- a headless service scaling between 0 and 10 replicas every 3 minutes.

From a CPU profile it seems like a lot of time is spent on EnvoyFilter stuff, since I have a few; I will retry with those off.

Edit: CPU drops to ~2.2 CPU without the EnvoyFilter, vs 3.2 CPU with the EnvoyFilter. So the change uses roughly 60% more CPU in some bad cases.

@howardjohn
Member

This is believed to be fixed in 1.4, but I haven't had a chance to verify all of the scenarios in the original issue. If someone has tried these, please let us know how it functions.

We do have some testing for this as well, but it can't beat the diversity of Kafka/ZooKeeper/etc.

@howardjohn
Member

I am going to go with the optimistic approach that this is fixed in 1.4 and hope no one proves me wrong. If you do see issues, though, please let us know.

@iandyh
Contributor

iandyh commented Jan 15, 2020

@howardjohn Hello. I did a deep dive into the configurations in 1.4.x. It looks like, by adding listeners on the ClusterIP, it resolves the conflicts between non-headless services and headless services.

However, it did not resolve cluster-wide conflicts between TCP headless services. I guess this is pretty much impossible to solve considering the factors mentioned in the comments above. The only way I can think of is to limit the visibility (using networking.istio.io/exportTo=.).

I am more than happy to be corrected if I am wrong. Thanks in advance.

@AceHack

AceHack commented Jan 28, 2020

Is this problem fixed or not? We plan to deploy tens of ZooKeeper, Solr, and Kafka clusters to the same Kubernetes cluster with Istio installed; is this going to break?

@Fryuni

Fryuni commented Feb 20, 2020

We are the opposite of @AceHack: we already have ZooKeeper and Kafka, and this issue makes us fearful of adding Istio.

@duderino duderino added the lifecycle/staleproof label Feb 20, 2020
@duderino
Contributor

duderino commented Feb 20, 2020

@Fryuni and @AceHack I had a chat with @howardjohn, who fixed this. We think this really is fixed, but we haven't tested with ZooKeeper or Kafka and don't have the bandwidth. So there's a testing gap there, but if you try it and find any issues, report them here and we'll fix them.

@cameronbraid

cameronbraid commented Apr 17, 2020

I may have run into this issue in Istio 1.5.1.

I have two Redis deployments in two different namespaces, A and B.

The sidecar proxy of an app in namespace B, which should be connecting to Redis in namespace B, is reporting outgoing connections to the Redis service in namespace A.

Extract from the logs of the app in namespace B (namespace A is external-auth-server):

2020-04-17T06:40:22.676754Z	info	Envoy proxy is ready
[2020-04-17T06:40:24.641Z] "- - -" 0 UF,URX "-" "-" 0 0 495 - "-" "-" "-" "-" "10.244.2.77:6379" outbound|6379||eas-redis-ha.external-auth-server.svc.cluster.local - 10.244.2.77:6379 10.244.0.98:57396 - -
[2020-04-17T06:40:24.624Z] "- - -" 0 UF,URX "-" "-" 0 0 512 - "-" "-" "-" "-" "10.244.2.77:6379" outbound|6379||eas-redis-ha.external-auth-server.svc.cluster.local - 10.244.2.77:6379 10.244.0.98:57378 - -

eas-redis-ha.external-auth-server service is a headless service

@hzxuzhonghu
Member

@cameronbraid Could you show your services' YAML and what you expect?

@cameronbraid

The app is running in namespace drivenow-staging-z, using the following service to access redis-sentinel:

redis-sentinel.drivenow-staging-z

kind: Service
apiVersion: v1
metadata:
  name: redis-sentinel
  namespace: drivenow-staging-z
  selfLink: /api/v1/namespaces/drivenow-staging-z/services/redis-sentinel
  uid: 1393616a-c50d-4cda-a61a-df35912b3c7d
  resourceVersion: '891946'
  creationTimestamp: '2020-04-14T06:22:01Z'
  labels:
    app: redis-sentinel
    kapp.k14s.io/app: '1586845308752692340'
    kapp.k14s.io/association: v1.4e1f354e6e6f08008a9929bbd34748a9
  annotations:
    kapp.k14s.io/disable-label-scoping: ''
    kapp.k14s.io/identity: v1;drivenow-staging-z//Service/redis-sentinel;v1
    kapp.k14s.io/original: >-
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kapp.k14s.io/disable-label-scoping":""},"labels":{"app":"redis-sentinel","kapp.k14s.io/app":"1586845308752692340","kapp.k14s.io/association":"v1.4e1f354e6e6f08008a9929bbd34748a9"},"name":"redis-sentinel","namespace":"drivenow-staging-z"},"spec":{"ports":[{"name":"sentinel","port":26379,"protocol":"TCP","targetPort":26379}],"selector":{"redis-sentinel":"true"},"sessionAffinity":"None","type":"ClusterIP"}}
    kapp.k14s.io/original-diff: |
      []
    kapp.k14s.io/original-diff-full: ''
    kapp.k14s.io/original-diff-md5: 58e0494c51d30eb3494f7c9198986bb9
spec:
  ports:
    - name: sentinel
      protocol: TCP
      port: 26379
      targetPort: 26379
  selector:
    redis-sentinel: 'true'
  clusterIP: 10.107.227.226
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

eas-redis-ha.external-auth-server

kind: Service
apiVersion: v1
metadata:
  name: eas-redis-ha
  namespace: external-auth-server
  selfLink: /api/v1/namespaces/external-auth-server/services/eas-redis-ha
  uid: 2c18f1da-61c0-48b7-a74d-abe715301d1f
  resourceVersion: '894949'
  creationTimestamp: '2020-04-17T06:37:34Z'
  labels:
    app: redis-ha
    chart: redis-ha-4.4.4
    heritage: Helm
    kapp.k14s.io/app: '1586845361505416342'
    kapp.k14s.io/association: v1.c4c0053a454532702feaf6ffdcb5890b
    release: eas
  annotations:
    kapp.k14s.io/disable-label-scoping: ''
    kapp.k14s.io/identity: v1;external-auth-server//Service/eas-redis-ha;v1
    kapp.k14s.io/original: >-
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kapp.k14s.io/disable-label-scoping":""},"labels":{"app":"redis-ha","chart":"redis-ha-4.4.4","heritage":"Helm","kapp.k14s.io/app":"1586845361505416342","kapp.k14s.io/association":"v1.c4c0053a454532702feaf6ffdcb5890b","release":"eas"},"name":"eas-redis-ha","namespace":"external-auth-server"},"spec":{"clusterIP":"None","ports":[{"name":"server","port":6379,"protocol":"TCP","targetPort":"redis"},{"name":"sentinel","port":26379,"protocol":"TCP","targetPort":"sentinel"},{"name":"exporter-port","port":9121,"protocol":"TCP","targetPort":"exporter-port"}],"selector":{"app":"redis-ha","release":"eas"},"type":"ClusterIP"}}
    kapp.k14s.io/original-diff: |
      - type: test
        path: /spec/sessionAffinity
        value: None
      - type: remove
        path: /spec/sessionAffinity
    kapp.k14s.io/original-diff-full: ''
    kapp.k14s.io/original-diff-md5: 871014bcda665dc62cb90cb1e2783c76
spec:
  ports:
    - name: server
      protocol: TCP
      port: 6379
      targetPort: redis
    - name: sentinel
      protocol: TCP
      port: 26379
      targetPort: sentinel
    - name: exporter-port
      protocol: TCP
      port: 9121
      targetPort: exporter-port
  selector:
    app: redis-ha
    release: eas
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
