Hi. I have Milvus cluster nodes in k8s and coordinators on VMs, and the coordinators can't connect to the nodes and the proxy. My Ingress, for example for the proxy:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: milvus
  namespace: milvus-production
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/grpc-max-size: 100m
    nginx.ingress.kubernetes.io/proxy-body-size: 4m
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: milvus
      port:
        number: 443
  tls:
    - hosts:
        - milvus-proxy.example.net
      secretName: tls-secret-net
  rules:
    - host: milvus-proxy.example.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: milvus
                port:
                  number: 443
status:
  loadBalancer:
    ingress:
      - ip: 10.241.190.13
      - ip: 10.241.190.14
      - ip: 10.241.190.15
```
The proxy Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: milvus
  namespace: milvus-production
  labels:
    component: proxy
spec:
  ports:
    - name: milvus
      protocol: TCP
      port: 19530
      targetPort: milvus
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
    - name: milvus2
      protocol: TCP
      port: 443
      targetPort: milvus2
  selector:
    app.kubernetes.io/instance: milvus
    app.kubernetes.io/name: milvus
    component: proxy
  clusterIP: 10.221.7.117
  clusterIPs:
    - 10.221.7.117
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
status:
  loadBalancer: {}
```
Milvus config in k8s (default.yaml):

```yaml
proxy:
  address: milvus-proxy.example.net
  ip: milvus-proxy.example.net
  port: 19530
  internalPort: 443
```
When rootcoord tries to connect to the proxy, the logs show:

```json
{"level":"WARN","time":"2025/03/07 11:15:00.998 +00:00","caller":"retry/retry.go:130","message":"retry func failed","retried":4,"error":"empty grpc client: failed to connect milvus-proxy.example.net:443, reason: context deadline exceeded: connection error: desc = \"error reading server preface: http2: frame too large\""}
```
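For context (this is background, not from the issue itself): "error reading server preface: http2: frame too large" is the typical symptom of a plaintext gRPC client talking to a TLS endpoint. The TLS handshake reply starts with a record header whose bytes, when parsed as an HTTP/2 frame header, decode to an absurdly large frame length. A minimal sketch of why the message reads that way (the byte values are hypothetical ServerHello leading bytes):

```python
# First bytes of a TLS reply: content type 0x16 (handshake), version 0x0303
# (TLS 1.2), then the record length and handshake type. A plaintext gRPC
# client expecting the HTTP/2 server preface parses these raw bytes as an
# HTTP/2 frame header instead (RFC 9113 §4.1: 24-bit length, 8-bit type).
tls_reply = bytes([0x16, 0x03, 0x03, 0x00, 0x7a, 0x02])  # hypothetical example

length = int.from_bytes(tls_reply[0:3], "big")  # first 3 bytes read as frame length
frame_type = tls_reply[3]                        # 4th byte read as frame type

print(length)      # 1442563 — far above the default 16384-byte max frame size
print(frame_type)  # 0 — misread as a DATA frame rather than SETTINGS
```

So a connection where one side speaks plaintext gRPC and the other terminates TLS (e.g. coordinators dialing the TLS-terminating ingress on 443 without TLS configured on the client side) would produce exactly this log line.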
`curl -v https://mulvus-proxy.example.net` returns the nginx default 404 page:

```html
<title>404 Not Found</title>
404 Not Found
nginx
```
@haorenfsa any suggestion on this?
The ingress needs a larger size limit. Try changing

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: 4m
```

to

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: 100m
```
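Applied to the Ingress manifest from the issue, the suggested change would make the annotations block look like this (metadata copied from the original manifest; only the body-size value changes):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/grpc-max-size: 100m
    nginx.ingress.kubernetes.io/proxy-body-size: 100m  # raised from 4m
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
```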
Could it be the same issue as mentioned in this comment from the ingress-nginx repo?