DestinationRule workload selector doesn't work for inbound clusters #41235

Closed · lmarszal opened this issue Sep 30, 2022 · 5 comments

lmarszal commented Sep 30, 2022

### Bug Description

I tried to apply the following manifests to the cluster (an empty minikube cluster with a default Istio installation using the minimal profile):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: lmtest
  name: lmtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lmtest
  template:
    metadata:
      labels:
        app: lmtest
      name: lmtest
    spec:
      serviceAccountName: lmtest
      containers:
      - image: nginx
        name: lmtest
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
          - name: http-app
            containerPort: 80
            protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: lmtest
  name: lmtest
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: lmtest
  name: lmtest
spec:
  selector:
    app: lmtest
  ports:
  - name: http-app-80
    port: 80
    protocol: TCP
    targetPort: 80
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  labels:
    app: lmtest
  name: lmtest
spec:
  host: lmtest.default.svc.cluster.local
  workloadSelector:
    matchLabels:
      app: lmtest
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10
        http2MaxRequests: 10
```

When I check the inbound cluster configuration, I see that the connection pool settings are not applied:

```
$ istioctl proxy-config cluster $(kubectl get po -l app=lmtest -o name) --direction inbound -o yaml
- circuitBreakers:
    thresholds:
    - maxConnections: 4294967295
      maxPendingRequests: 4294967295
      maxRequests: 4294967295
      maxRetries: 4294967295
      trackRemaining: true
  cleanupInterval: 60s
  commonLbConfig: {}
  connectTimeout: 10s
  lbPolicy: CLUSTER_PROVIDED
  metadata:
    filterMetadata:
      istio:
        services:
        - host: lmtest.default.svc.cluster.local
          name: lmtest
          namespace: default
  name: inbound|80||
  type: ORIGINAL_DST
  upstreamBindConfig:
    sourceAddress:
      address: 127.0.0.6
      portValue: 0
```

(Note that `filterMetadata.istio` has no `config:` entry here, i.e. the DestinationRule was not applied to the inbound cluster; compare with the output below.) When I remove the workloadSelector from the DestinationRule, everything works as expected:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  labels:
    app: lmtest
  name: lmtest
spec:
  host: lmtest.default.svc.cluster.local
  # workloadSelector:
  #   matchLabels:
  #     app: lmtest
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10
        http2MaxRequests: 10
```

```
$ istioctl proxy-config cluster $(kubectl get po -l app=lmtest -o name) --direction inbound -o yaml
- circuitBreakers:
    thresholds:
    - maxConnections: 4294967295
      maxPendingRequests: 10
      maxRequests: 10
      maxRetries: 4294967295
      trackRemaining: true
  cleanupInterval: 60s
  commonLbConfig: {}
  connectTimeout: 10s
  lbPolicy: CLUSTER_PROVIDED
  metadata:
    filterMetadata:
      istio:
        config: /apis/networking.istio.io/v1alpha3/namespaces/default/destination-rule/lmtest
        services:
        - host: lmtest.default.svc.cluster.local
          name: lmtest
          namespace: default
  name: inbound|80||
  type: ORIGINAL_DST
  upstreamBindConfig:
    sourceAddress:
      address: 127.0.0.6
      portValue: 0
```

Frankly, I have always been a little confused by the semantics where the same DestinationRule applies to both the source and the destination, but I believe there are good reasons for that. What I am looking for is a way to configure the source and the destination differently: circuit breaking on the traffic source, and the destination configured just for overload prevention.
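
As an aside, the source side is expressible today: per the maintainer comment below, workloadSelector does take effect for outbound clusters. A minimal sketch, assuming a hypothetical client workload labeled `app: lmtest-client` that calls this service:

```yaml
# Hedged sketch: scoped by workloadSelector to the client workload's
# sidecars, so (per the maintainer comment below) it only affects their
# outbound clusters for lmtest.default.svc.cluster.local.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: lmtest-client-cb  # hypothetical name
spec:
  host: lmtest.default.svc.cluster.local
  workloadSelector:
    matchLabels:
      app: lmtest-client  # hypothetical client label
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10
        http2MaxRequests: 10
```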

### Version

```
$ istioctl version
client version: 1.15.1
control plane version: 1.15.1
data plane version: 1.15.1 (2 proxies)
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.0
```

### Additional Information

[bug-report.tar.gz](https://github.com/istio/istio/files/9684991/bug-report.tar.gz)

howardjohn (Member) commented:

fyi @kfaseela

kfaseela self-assigned this Oct 1, 2022

kfaseela (Member) commented Oct 6, 2022

Hello @lmarszal: Please see https://github.com/istio/istio/blob/master/pilot/pkg/model/sidecar.go#L577. workloadSelector is honored only for outbound config; for inbound config it was decided that the Sidecar API should be the preferred way forward. I can probably enhance the workloadSelector documentation to clarify this behaviour, which was missed.
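
A quick way to see the distinction, assuming the same test pod: inspect the outbound cluster for the service instead of the inbound one. Since the workloadSelector (`app: lmtest`) also matches the lmtest pod itself, the selector-scoped settings should appear on that pod's outbound cluster:

```
$ istioctl proxy-config cluster $(kubectl get po -l app=lmtest -o name) \
    --direction outbound --fqdn lmtest.default.svc.cluster.local -o yaml
```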

hzxuzhonghu (Member) commented:

> and for inbound config it was decided that the Sidecar API should be the preferred way forward.

Which field of the Sidecar API?

lmarszal (Author) commented:

> Hello @lmarszal: Please see https://github.com/istio/istio/blob/master/pilot/pkg/model/sidecar.go#L577. workloadSelector is honored only for outbound config; for inbound config it was decided that the Sidecar API should be the preferred way forward. I can probably enhance the workloadSelector documentation to clarify this behaviour, which was missed.

Thanks for the clarification @kfaseela. So it looks like this is a missing feature (being able to configure circuit breaking on an inbound cluster) rather than a bug.

kfaseela (Member) commented:

@hzxuzhonghu @lmarszal This was actually the preferred way forward for inbound config: istio/api#1754
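
For reference, a sketch of what that direction looks like on the Sidecar resource. The inboundConnectionPool field assumes the API shape that landed in later Istio releases (it is not available in 1.15), so treat this as illustrative:

```yaml
# Hedged sketch, assuming Sidecar.inboundConnectionPool from later Istio
# releases: inbound-side connection pool settings, scoped to the workloads
# matched by the selector below.
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: lmtest
  namespace: default
spec:
  workloadSelector:
    labels:
      app: lmtest
  inboundConnectionPool:
    http:
      http1MaxPendingRequests: 10
      http2MaxRequests: 10
```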
