
gRPC client response http1.x server #2497

Closed
kent-williams opened this issue May 11, 2018 · 26 comments

@kent-williams


BUG REPORT:

NGINX Ingress controller version:
0.14.0

Kubernetes version (use kubectl version):
1.10.2

Environment:

  • Cloud provider or hardware configuration:
    Baremetal via juju
  • OS (e.g. from /etc/os-release):
    ubuntu 16.04
  • Kernel (e.g. uname -a):
    4.4.0-116-generic
  • Install tools:
  • Others:

What happened:
I continue to get the following gRPC client response:
StatusCode.UNAVAILABLE, Trying to connect an http1.x server

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Followed the gRPC example:

Created TLS Secret for Ingress.

app.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grpc-app
  labels:
    k8s-app: grpc-app
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grpc-app
    spec:
      containers:
      - name: grpc-app
        image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
        ports:
        - containerPort: 50051
          name: grpc

svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: grpc-service
  namespace: default
spec:
  selector:
    k8s-app: grpc-app
  ports:
  - port: 50051
    targetPort: 50051
    name: grpc

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    inginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/grpc-backend: "true"
  name: grpc-ingress
  namespace: default
spec:
  rules:
  - host: grpc.192.168.22.68.xip.io
    http:
      paths:
      - backend:
          serviceName: grpc-service
          servicePort: grpc
  tls:
  - secretName: grpc.192.168.22.68.xip.io
    hosts:
      - grpc.192.168.22.68.xip.io
$ kubectl describe ing
Name:             grpc-ingress
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  grpc.192.168.22.68.xip.io terminates grpc.192.168.22.68.xip.io
Rules:
  Host                       Path  Backends
  ----                       ----  --------
  grpc.192.168.22.68.xip.io  
                                grpc-service:grpc (<none>)
Annotations:
  inginx.ingress.kubernetes.io/ssl-redirect:         true
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"inginx.ingress.kubernetes.io/ssl-redirect":"true","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/grpc-backend":"true"},"name":"grpc-ingress","namespace":"default"},"spec":{"rules":[{"host":"grpc.192.168.22.68.xip.io","http":{"paths":[{"backend":{"serviceName":"grpc-service","servicePort":"grpc"}}]}}],"tls":[{"hosts":["grpc.192.168.22.68.xip.io"],"secretName":"grpc.192.168.22.68.xip.io"}]}}

  kubernetes.io/ingress.class:               nginx
  nginx.ingress.kubernetes.io/grpc-backend:  true
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  CREATE  59m                nginx-ingress-controller  Ingress default/grpc-ingress
  Normal  UPDATE  50m (x2 over 59m)  nginx-ingress-controller  Ingress default/grpc-ingress

NGINX Ingress controller logs, including startup and the incoming gRPC requests at the bottom:

NGINX Ingress controller
  Release:    0.14.0
  Build:      git-734361d
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0511 22:08:18.875038       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0511 22:08:18.875256       7 main.go:181] Creating API client for https://10.152.183.1:443
I0511 22:08:18.890125       7 main.go:225] Running in Kubernetes Cluster version v1.10 (v1.10.2) - git (clean) commit 81753b10df112992bf51bbc2c2f85208aad78335 - platform linux/amd64
I0511 22:08:18.891935       7 main.go:84] validated default/default-http-backend as the default backend
I0511 22:08:19.037886       7 stat_collector.go:77] starting new nginx stats collector for Ingress controller running in namespace  (class nginx)
I0511 22:08:19.037895       7 stat_collector.go:78] collector extracting information from port 18080
I0511 22:08:19.047089       7 nginx.go:278] starting Ingress controller
I0511 22:08:20.149419       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grpc-ingress", UID:"0a1ab91e-555f-11e8-bd35-2c4d544790d3", APIVersion:"extensions", ResourceVersion:"15198", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/grpc-ingress
I0511 22:08:20.150772       7 backend_ssl.go:67] adding secret default/grpc.192.168.22.68.xip.io to the local store
I0511 22:08:20.247504       7 nginx.go:299] starting NGINX process...
I0511 22:08:20.247546       7 leaderelection.go:175] attempting to acquire leader lease  default/ingress-controller-leader-nginx...
I0511 22:08:20.250691       7 controller.go:168] backend reload required
I0511 22:08:20.250729       7 stat_collector.go:34] changing prometheus collector from  to default
I0511 22:08:20.253700       7 status.go:196] new leader elected: nginx-ingress-kubernetes-worker-controller-h4b8v
I0511 22:08:20.360726       7 backend_ssl.go:173] updating local copy of ssl certificate default/grpc.192.168.22.68.xip.io with missing intermediate CA certs
I0511 22:08:20.382251       7 controller.go:177] ingress backend successfully reloaded...
I0511 22:08:23.583845       7 controller.go:168] backend reload required
I0511 22:08:23.722125       7 controller.go:177] ingress backend successfully reloaded...
I0511 22:09:02.297156       7 leaderelection.go:184] successfully acquired lease default/ingress-controller-leader-nginx
I0511 22:09:02.297226       7 status.go:196] new leader elected: nginx-ingress-kubernetes-worker-controller-hksss
I0511 22:10:02.309287       7 status.go:356] updating Ingress default/grpc-ingress status to [{ }]
I0511 22:10:02.312302       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grpc-ingress", UID:"0a1ab91e-555f-11e8-bd35-2c4d544790d3", APIVersion:"extensions", ResourceVersion:"15367", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grpc-ingress
192.168.128.95 - [192.168.128.95] - - [11/May/2018:22:10:55 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - -
192.168.128.95 - [192.168.128.95] - - [11/May/2018:22:10:56 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.000 [] - - - -
192.168.128.95 - [192.168.128.95] - - [11/May/2018:22:10:57 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.000 [] - - - -
192.168.128.95 - [192.168.128.95] - - [11/May/2018:22:10:58 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - -

Anything else we need to know:

@pcj
Contributor

pcj commented May 12, 2018

Hi Kent, looking at this. At first glance, I'd recommend the following:

  1. As a sanity check confirm the pod is working:
$ kubectl port-forward fortune-teller-app-7467bf49f8-grwsr 50051
Forwarding from 127.0.0.1:50051 -> 50051
$ grpcurl -plaintext localhost:50051 build.stack.fortune.FortuneTeller/Predict
{
  "message": "Even the clearest and most perfect circumstantial evidence is likely to be at\nfault, after all, and therefore ought to be received with great caution.  Take\nthe case of any pencil, sharpened by any woman; if you have witnesses, you will\nfind she did it with a knife; but if you take simply the aspect of the pencil,\nyou will say that she did it with her teeth.\n\t\t-- Mark Twain, \"Pudd'nhead Wilson's Calendar\""
}

Also, increase the logging on the nginx controller. Here's mine:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
        k8s-app: nginx-ingress-lb
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '10254'
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/aledbf/nginx-ingress-controller:0.348
          args:
             - /nginx-ingress-controller
             - --default-backend-service=default/default-http-backend
             - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
             - --v=3
          env:
             - name: POD_NAME
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
             - name: POD_NAMESPACE
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443

(You may also compare the outcome using this image (quay.io/aledbf/nginx-ingress-controller:0.348) against 0.14, though I doubt that would be the cause.)

@pcj
Contributor

pcj commented May 12, 2018

Also @aledbf I can't see grpc-fortune-teller via browsing https://quay.io/organization/kubernetes-ingress-controller or:

$ docker login quay.io
Username (pcj): 
Password: 
Login Succeeded

$ docker pull quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
Error response from daemon: unauthorized: access to the requested resource is not authorized

@kent-williams
Author

Hi @pcj, many thanks for the help!

I actually wasn't able to grab the image, as you just pointed out. I'm just using a gRPC test image of my own that has a single response like yours. It works just fine using NodePort and no Ingress.

This is the full output from the client:

$ python3 information_client.py grpc.192.168.22.68.xip.io
Traceback (most recent call last):
  File "information_client.py", line 18, in <module>
    run(sys.argv)
  File "information_client.py", line 12, in run
    response = stub.RequestID(getid_pb2.IDRequest(name='you'))
  File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 487, in __call__
    return _end_unary_response_blocking(state, call, False, deadline)
  File "/usr/local/lib/python3.5/dist-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Trying to connect an http1.x server)>

The port-forward command output is below. Is it supposed to hang? I had to ctrl-c to exit.

$ kubectl port-forward grpc-app-d94776447-jqs9l 50051
Forwarding from 127.0.0.1:50051 -> 50051
Forwarding from [::1]:50051 -> 50051

I made a gist here with more verbose controller output from startup through several request attempts.

@aledbf
Member

aledbf commented May 12, 2018

docker pull quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1

Fixed

@kent-williams
Author

@pcj, is anything popping out to you here? Can I provide anything else?

@pcj
Contributor

pcj commented May 15, 2018

Looks like you may have misspelled the annotation inginx.ingress.kubernetes.io/ssl-redirect (it should be nginx.ingress.kubernetes.io/ssl-redirect).

@kent-williams
Author

@pcj Thanks for catching that! Unfortunately it doesn't seem to have fixed the issue.

I continue to get StatusCode.UNAVAILABLE, Trying to connect an http1.x server with the gRPC client.

Do you get an address listed on your ingress?

$ kubectl get ingress
NAME           HOSTS                       ADDRESS   PORTS     AGE
grpc-ingress   grpc.192.168.22.68.xip.io             80, 443   4m
$ kubectl describe ing
Name:             grpc-ingress
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  grpc.192.168.22.68.xip.io terminates grpc.192.168.22.68.xip.io
Rules:
  Host                       Path  Backends
  ----                       ----  --------
  grpc.192.168.22.68.xip.io  
                                grpc-service:grpc (<none>)
Annotations:
  kubernetes.io/ingress.class:               nginx
  nginx.ingress.kubernetes.io/grpc-backend:  true
  nginx.ingress.kubernetes.io/ssl-redirect:  true
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  55s   nginx-ingress-controller  Ingress default/grpc-ingress
  Normal  UPDATE  46s   nginx-ingress-controller  Ingress default/grpc-ingress

nginx ingress controller log tail

I0516 17:33:02.363291       7 controller.go:177] ingress backend successfully reloaded...
I0516 17:33:11.890376       7 queue.go:70] queuing item sync status
I0516 17:33:11.890429       7 queue.go:111] syncing sync status
I0516 17:33:11.902455       7 status.go:356] updating Ingress default/grpc-ingress status to [{ }]
I0516 17:33:11.905124       7 store.go:504] updating annotations information for ingress default/grpc-ingress
I0516 17:33:11.905218       7 main.go:109] No default affinity was found for Ingress grpc-ingress
I0516 17:33:11.905236       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"grpc-ingress", UID:"2f091a32-592f-11e8-bd35-2c4d544790d3", APIVersion:"extensions", ResourceVersion:"528138", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/grpc-ingress
I0516 17:33:11.905429       7 store.go:518] updating references to secrets for ingress default/grpc-ingress
I0516 17:33:11.905459       7 backend_ssl.go:43] starting syncing of secret default/grpc.192.168.22.68.xip.io
I0516 17:33:11.905606       7 ssl.go:58] Creating temp file /ingress-controller/ssl/default-grpc.192.168.22.68.xip.io.pem068738403 for Keypair: default-grpc.192.168.22.68.xip.io.pem
I0516 17:33:11.906740       7 ssl.go:112] parsing ssl certificate extensions
I0516 17:33:11.907016       7 backend_ssl.go:105] found 'tls.crt' and 'tls.key', configuring default/grpc.192.168.22.68.xip.io as a TLS Secret (CN: [grpc.192.168.22.68.xip.io])
I0516 17:33:11.907100       7 nginx.go:335] Event UPDATE received - object &Ingress{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:grpc-ingress,GenerateName:,Namespace:default,SelfLink:/apis/extensions/v1beta1/namespaces/default/ingresses/grpc-ingress,UID:2f091a32-592f-11e8-bd35-2c4d544790d3,ResourceVersion:528138,Generation:1,CreationTimestamp:2018-05-16 17:33:02 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubernetes.io/ingress.class: nginx,nginx.ingress.kubernetes.io/grpc-backend: true,nginx.ingress.kubernetes.io/ssl-redirect: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:IngressSpec{Backend:nil,TLS:[{[grpc.192.168.22.68.xip.io] grpc.192.168.22.68.xip.io}],Rules:[{grpc.192.168.22.68.xip.io {HTTPIngressRuleValue{Paths:[{ {grpc-service {1 0 grpc}}}],}}}],},Status:IngressStatus{LoadBalancer:k8s_io_api_core_v1.LoadBalancerStatus{Ingress:[{ }],},},}
I0516 17:33:11.907280       7 queue.go:70] queuing item &Ingress{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:grpc-ingress,GenerateName:,Namespace:default,SelfLink:/apis/extensions/v1beta1/namespaces/default/ingresses/grpc-ingress,UID:2f091a32-592f-11e8-bd35-2c4d544790d3,ResourceVersion:528138,Generation:1,CreationTimestamp:2018-05-16 17:33:02 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubernetes.io/ingress.class: nginx,nginx.ingress.kubernetes.io/grpc-backend: true,nginx.ingress.kubernetes.io/ssl-redirect: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:IngressSpec{Backend:nil,TLS:[{[grpc.192.168.22.68.xip.io] grpc.192.168.22.68.xip.io}],Rules:[{grpc.192.168.22.68.xip.io {HTTPIngressRuleValue{Paths:[{ {grpc-service {1 0 grpc}}}],}}}],},Status:IngressStatus{LoadBalancer:k8s_io_api_core_v1.LoadBalancerStatus{Ingress:[{ }],},},}
I0516 17:33:11.907381       7 queue.go:111] syncing default/grpc-ingress
I0516 17:33:11.907429       7 endpoints.go:79] getting endpoints for service default/default-http-backend and port &ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:80,NodePort:0,}
I0516 17:33:11.907456       7 endpoints.go:125] endpoints found: [{10.1.42.13 80 0 0 &ObjectReference{Kind:Pod,Namespace:default,Name:default-http-backend-86nww,UID:60c976ee-5555-11e8-bd35-2c4d544790d3,APIVersion:,ResourceVersion:4921,FieldPath:,}}]
I0516 17:33:11.907493       7 controller.go:664] creating upstream default-grpc-service-grpc
I0516 17:33:11.907515       7 controller.go:764] obtaining port information for service default/grpc-service
I0516 17:33:11.907533       7 endpoints.go:79] getting endpoints for service default/grpc-service and port &ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:50051,NodePort:0,}
I0516 17:33:11.907551       7 endpoints.go:125] endpoints found: [{10.1.42.18 50051 0 0 &ObjectReference{Kind:Pod,Namespace:default,Name:grpc-app-d94776447-jqs9l,UID:040a59a4-555f-11e8-bd35-2c4d544790d3,APIVersion:,ResourceVersion:10505,FieldPath:,}}]
I0516 17:33:11.907634       7 controller.go:396] secret  does not contain 'ca.crt', mutual authentication not enabled - ingress rule default/grpc-ingress.
I0516 17:33:11.907652       7 controller.go:426] replacing ingress rule default/grpc-ingress location / upstream default-grpc-service-grpc (upstream-default-backend)
I0516 17:33:11.907679       7 controller.go:207] obtaining information about stream services of type TCP located in configmap 
I0516 17:33:11.907695       7 controller.go:207] obtaining information about stream services of type UDP located in configmap 
I0516 17:33:11.907722       7 controller.go:161] skipping backend reload (no changes detected)
192.168.128.95 - [192.168.128.95] - - [16/May/2018:17:34:11 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - -

@rocketraman

rocketraman commented Jun 12, 2018

I have the same issue. The nginx.conf in the ingress is configured with grpc_pass as expected.

I am connecting directly to the ingress controller to avoid any issues with platform load balancers that lack HTTP/2 support:

$ kubectl port-forward -n ingress-nginx ingress-nginx-internal-65788c69f8-82fmx 8181:80
Forwarding from 127.0.0.1:8181 -> 80
Forwarding from [::1]:8181 -> 80
Handling connection for 8181

On the Node-based client (it also fails with a Java client, though with a different client error):

GrpcTestService@localhost:8181> 
Error:  { Error: 14 UNAVAILABLE: Trying to connect an http1.x server

And in the nginx ingress pod logs at debug level 5:

2018/06/12 19:27:34 [info] 37#37: *228 client sent invalid request while reading client request line, client: 10.2.0.65, server: _, request: "PRI * HTTP/2.0"

@rocketraman

I guess it's failing because nginx does not support both HTTP and HTTP/2 on port 80... See #2444.
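
To make that concrete, here is a minimal sketch of the failure mode, assuming a Python client like the traceback above and the port-forward from my earlier comment (the address and timeout are only illustrative):

import grpc

# Plaintext gRPC uses HTTP/2 with "prior knowledge": the first bytes the client
# sends are the connection preface "PRI * HTTP/2.0\r\n...". The controller's plain
# HTTP listener only speaks HTTP/1.1, so nginx answers with an HTTP/1.x 400 (the
# '"PRI * HTTP/2.0" 400' access-log lines above) and the client reports
# StatusCode.UNAVAILABLE, Trying to connect an http1.x server.
channel = grpc.insecure_channel("localhost:8181")  # port-forwarded ingress port 80

try:
    # Waiting for the channel to become ready surfaces the failure without
    # needing any generated stubs.
    grpc.channel_ready_future(channel).result(timeout=5)
except grpc.FutureTimeoutError:
    print("channel never became ready: nginx replied with an HTTP/1.x response")

A TLS client pointed at the HTTPS listener (where nginx can negotiate HTTP/2 via ALPN) would avoid this, but it requires terminating TLS at the ingress.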

@kent-williams
Author

@rocketraman Were you able to get your setup working on another port then?

@rocketraman

@kent-williams The only easy way I see to do that with ingress-nginx is to create another ingress controller with a different class. I haven't tried that yet, because my workaround of using NodePort directly is working fine for now, and I'm hoping for a better built-in solution that doesn't require managing another ingress.

@kent-williams
Author

@pcj @aledbf I still have yet to get this working. I can confirm that using port-forward to the pod works just fine. I have noticed that request attempts from Chrome seem to get routed to the appropriate backend pod, but requests from the gRPC client do not. Below are log outputs from the ingress controller that I sent the requests to. Obviously a GET request is expected to fail, but it is clearly reaching the pod, while the gRPC client requests are not.

nginx ingress controller log from grpc client request

192.168.128.95 - [192.168.128.95] - - [02/Jul/2018:22:34:34 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - - dd89b731ec9071e33b58b1b2cf35aedc  

nginx ingress controller log from chrome request

2018/07/02 22:35:57 [error] 15690#15690: *63 upstream rejected request with error 2 while reading response header from upstream, client: 192.168.128.95, server: 192.168.22.71.nip.io, request: "GET / HTTP/2.0", upstream: "grpc://10.1.82.23:50051", host: "192.168.22.71.nip.io"
192.168.128.95 - [192.168.128.95] - - [02/Jul/2018:22:35:57 +0000] "GET / HTTP/2.0" 502 576 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36" 364 0.003 [default-gpd-grpc] 10.1.82.23:50051 0 0.004 502 4ca5bd42ca4282f93f6ea669c49b6ee6

Any suggestions for uncovering more information on the routing would be much appreciated!

@mayankjuneja

@kent-williams I am facing similar issues while running the example. Were you able to get this working? Thanks!

@kent-williams
Author

@mayankjuneja I have not, unfortunately. I'm hoping that someone can shed some more light on this less-than-helpful log output from the nginx controller for gRPC connection attempts.

192.168.128.95 - [192.168.128.95] - - [10/Jul/2018:16:58:40 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - - edb5e516a8b6678de6e6bb838d90d1e1

@kent-williams
Author

@aledbf
I am using TLS termination at the Ingress.
Is there any way to obtain more useful debug information than the following from the controller at verbose level 5?

192.168.128.95 - [192.168.128.95] - - [16/Jul/2018:22:25:40 +0000] "PRI * HTTP/2.0" 400 174 "-" "-" 0 0.001 [] - - - - 38d352a5a1770858827b67b1f90228eb
2018/07/16 22:25:40 [debug] 7830#7830: *13 free: 00007FCC1F4BE000, unused: 0
2018/07/16 22:25:40 [debug] 7830#7830: *13 free: 00007FCC1F6D9000, unused: 3289
2018/07/16 22:25:40 [debug] 7830#7830: *13 close http connection: 4
2018/07/16 22:25:40 [debug] 7830#7830: *13 event timer del: 4: 1466584561
2018/07/16 22:25:40 [debug] 7830#7830: *13 reusable connection: 0
2018/07/16 22:25:40 [debug] 7830#7830: *13 free: 00007FCC1F433C00
2018/07/16 22:25:40 [debug] 7830#7830: *13 free: 00007FCC1F45CC00, unused: 136

@kent-williams
Author

I have this working now.

I was not properly opening a TLS channel in the gRPC client.
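
For anyone else hitting this, a minimal sketch of the change, assuming a Python client like the traceback earlier in this issue (the port and the use of system root CAs are assumptions; my actual client is linked below):

import grpc

target = "grpc.192.168.22.68.xip.io:443"

# Before (no proper TLS channel), the controller just logged the
# '"PRI * HTTP/2.0" 400' lines shown above. Roughly:
# channel = grpc.insecure_channel(target)

# What works: a TLS channel, so the ingress terminates TLS, negotiates HTTP/2
# via ALPN, and grpc_passes the call to the backend pod.
credentials = grpc.ssl_channel_credentials()  # system root CAs; see the note on the self-signed cert below
channel = grpc.secure_channel(target, credentials)

# The stub call itself is unchanged, e.g.:
# response = stub.RequestID(getid_pb2.IDRequest(name='you'))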

It would have been helpful to know from the ingress controller side that the requests were not being routed to the backend because of this.

@mayankjuneja

@kent-williams Are you using grpcurl? Do you mind sharing the client code and what changes you made? Thanks!

@kent-williams
Author

kent-williams commented Jul 18, 2018

@mayankjuneja I was using it to test an initial connection.

You should be able to get through with grpcurl's -insecure option at port 443. This was necessary for me since I am using a self-signed cert for the gRPC ingress secret.
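
For a Python client, the rough equivalent is to trust the self-signed certificate explicitly; a sketch (the tls.crt path is just an assumption for whatever cert went into the ingress TLS secret):

import grpc

# Trust the self-signed certificate from the ingress TLS secret by handing it to
# the client as its root CA.
with open("tls.crt", "rb") as f:  # assumed path to the self-signed certificate
    root_cert = f.read()

credentials = grpc.ssl_channel_credentials(root_certificates=root_cert)
channel = grpc.secure_channel("grpc.192.168.22.68.xip.io:443", credentials)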

I will link my test grpc client/server tomorrow!

@kent-williams
Author

@mayankjuneja my test client/server: grpc-python-kubernetes

Let me know if I can help!

@odino

odino commented Aug 27, 2018

Hey @kent-williams, any idea how we can disable TLS auth? I have everything running just like you did, but I want to try disabling it.

@kent-williams
Author

kent-williams commented Aug 27, 2018

@odino, I'm not sure it's possible. @aledbf has stated several times in other issues that TLS is required for gRPC.

@kruczjak

kruczjak commented Aug 27, 2018

@odino if you really want to use a gRPC backend without TLS, you can use e.g. the nghttpx proxy. It works perfectly for me :) (I'm using it only for gRPC on an internal, private network). There is even an ingress implementation for it: https://github.com/zlabjp/nghttpx-ingress-lb

@tapanhalani

tapanhalani commented Sep 30, 2019

Hi. I am facing this issue on my local Kubernetes cluster, which has a grpc-server service listening on port 50053 and an ingress object with the nginx.ingress.kubernetes.io/backend-protocol: "GRPC" annotation. I am using nginx-ingress-controller with the configuration use-http2: "true". When I try connecting from a gRPC client written in Go to localhost:443, I get the response rpc error: code = Unavailable desc = transport is closing, with ingress logs as follows:

192.168.65.3 - [192.168.65.3] - - [30/Sep/2019:13:43:43 +0000] "PRI * HTTP/2.0" 400 163 "-" "-" 0 0.004 [] [] - - - - ad5422a7b6d023b9257b770d4a3edcee

@jonasrmichel

For what it's worth, I encountered this same error and ultimately realized it was the result of providing a custom --annotations-prefix, while still using the default (nginx.ingress.kubernetes.io) in my Ingress objects.

@caolele

caolele commented Mar 13, 2020

If anyone is trying to make Kubernetes TensorFlow Serving over gRPC work using Helm charts, here is my working example. Hope it helps.

@pen-pal

pen-pal commented Sep 20, 2023

I have this working now.

I was not properly opening a TLS channel in the gRPC client.

It would have been helpful to know from the ingress controller side that the requests were not being routed to the backend because of this.

Can you share the configuration for how you achieved this?

I am facing a similar issue with a 400 status code.
