Add example using grpc and http2 #18

Closed
bowei opened this issue Oct 11, 2017 · 73 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bowei
Member

bowei commented Oct 11, 2017

From @aledbf on December 1, 2016 22:39

Copied from original issue: kubernetes/ingress-nginx#39

@bowei bowei added enhancement help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Oct 11, 2017
@bowei
Member Author

bowei commented Oct 11, 2017

From @aledbf on December 30, 2016 3:24

Something like this should work:

https://github.com/caiofilipini/grpc-weather/

  • create deployment
kubectl run \
  --image=caiofilipini/grpc-weather:master grpc-weather \
  --port=9000 \
  --env="OPEN_WEATHER_MAP_API_KEY=<token from http://openweathermap.org/api>" \
  --env="WEATHER_UNDERGROUND_API_KEY=<token from http://openweathermap.org/api>"
  • expose service
kubectl expose deployment grpc-weather --port=9000 --name=grpc-weather
  • create ingress
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-weather
spec:
  rules:
  - host: grpc-weather
    http:
      paths:
      - backend:
          serviceName: grpc-weather
          servicePort: 9000
        path: /
" | kubectl create -f -
  • use docker image to run the client
docker run \
  --rm \
  -it \
  --name weather_service \
  --entrypoint bash \
  --net=host \
  caiofilipini/grpc-weather:master
  • execute
echo "<grpc-weather svc IP> grpc-weather" >>/etc/hosts
curl -v <grpc-weather svc IP>:80 -H 'Host: grpc-weather'
make build-client
./weather_client/client --s grpc-weather --p 80 Santiago

@bowei
Member Author

bowei commented Oct 11, 2017

From @krancour on April 3, 2017 16:21

@aledbf I haven't tried, but does this work? I recently tried using a different upstream gRPC service and requests kept failing with "connection reset by peer".

@bowei
Member Author

bowei commented Oct 11, 2017

From @aledbf on April 3, 2017 16:25

@krancour It works if you use the ssl-passthrough annotation, i.e. use nginx only for the TLS hello (SNI-based routing) and then terminate TLS in the pod (this is because nginx does not support HTTP/2 to the upstream).
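For reference, a minimal sketch of such an Ingress (hypothetical host/service names; the annotation prefix follows the older ingress.kubernetes.io/ style used elsewhere in this thread, while newer ingress-nginx releases use nginx.ingress.kubernetes.io/ and need the controller started with --enable-ssl-passthrough). Note the backend service must itself terminate TLS, since the connection is passed through untouched:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-weather
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: grpc-weather.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grpc-weather
          servicePort: 9000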

@bowei
Member Author

bowei commented Oct 11, 2017

From @krancour on April 3, 2017 16:27

@aledbf ahhh... thanks! I will try that.

@bowei
Member Author

bowei commented Oct 11, 2017

From @philipithomas on June 28, 2017 21:58

Can a gRPC server listen on port 80? More specifically, how can ssl-passthrough be configured for port 80?

@bowei
Member Author

bowei commented Oct 11, 2017

From @aledbf on June 28, 2017 21:59

@philipithomas I just answered this in your issue :)

@bowei
Member Author

bowei commented Oct 11, 2017

From @drigz on August 17, 2017 9:08

For others following this trail, the other issue is #923.

@bowei
Member Author

bowei commented Oct 11, 2017

From @nlamirault on September 6, 2017 16:06

I would like to use an nginx ingress controller to expose a grpc-gateway service. The gRPC services are on 8080 and the REST gateway on 9090.
I created these Ingresses:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "true"
  name: diablo-http
  namespace: nimbus
spec:
  rules:
  - host: diablo.caas.net
    http:
      paths:
      - path: /
        backend:
          serviceName: diablo
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-passthrough: "true"
  name: diablo-grpc
  namespace: nimbus
spec:
  rules:
  - host: diablo-rpc.caas.net
    http:
      paths:
      - path: /
        backend:
          serviceName: diablo
          servicePort: 8080

An HTTP request to diablo.caas.net works fine, but the CLI, which uses the gRPC backend, does not work against diablo-rpc.caas.net.
Any idea how I can do that?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 9, 2018
@dbeaulieu

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 10, 2018
@shankj3

shankj3 commented Mar 8, 2018

Any luck getting this to work, @bowei ? I am having the same issue.

@munjalpatel

munjalpatel commented Mar 14, 2018

I am also interested in exposing internal gRPC-based services to external clients through the GKE ingress controller with SSL.

Looks like some work to support HTTP/2 is already in progress here :)
#146
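For context, the mechanism that eventually shipped (see the "HTTP/2 for Load Balancing with Ingress" doc referenced later in this thread) is a per-port annotation on the Service. A minimal sketch with hypothetical names, noting the backend pods must still serve TLS because the LB negotiates HTTP/2 to the backend via ALPN:

apiVersion: v1
kind: Service
metadata:
  name: grpc-service                         # hypothetical
  annotations:
    cloud.google.com/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  selector:
    app: grpc-server                         # hypothetical
  ports:
  - name: grpc                               # must match the key in the annotation
    port: 443
    targetPort: 8443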

@nicksardo nicksardo added kind/documentation Categorizes issue or PR as related to documentation. and removed enhancement labels May 4, 2018
@nicksardo
Contributor

cc @agau4779

@bgetsug

bgetsug commented Sep 2, 2018

Is there anyone who can speak to the priority of completing this or anyone who can provide a working example since #146 has been merged?

I am struggling to configure HTTPS (HTTP/2) ingress on GKE in front of the Cloud Endpoints ESP for a gRPC service. To clarify, I'm trying to achieve the following architecture:

GCP HTTPS LB (includes HTTP/2 ?)
         |
Cloud Endpoints ESP listening for HTTP/2, deployed to GKE, exposed as a NodePort Service
         |
gRPC server deployed to GKE

Anything seem wrong or unreasonable with that?

I have referred to the following existing documentation/samples:

At most, I am able to get ESP working with my gRPC service and can connect via the pod IPs on the private network. The health check created by the ingress does not seem to be working. Is there any way to configure it for HTTP/2 or TCP instead of HTTP?

I would also welcome any alternative suggestions including manually creating an LB or anything I might orchestrate with Terraform. Thank you.

@bgetsug

bgetsug commented Sep 4, 2018

Well after much knob twiddling I was able to access a gRPC+Cloud Endpoints (ESP) service through an HTTPS LB. I ended up configuring everything manually via the web console, but will probably write a script so I can at least replicate the process with gcloud.

A few notes that may help others:

  • https://cloud.google.com/load-balancing/docs/https/#components

    To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
    1. Configure an HTTPS load balancer.
    2. Enable HTTP/2 as the protocol from the load balancer to the backends.
    The load balancer negotiates HTTP/2 with clients as part of the SSL handshake by using the ALPN TLS extension.

  • https://cloud.google.com/load-balancing/docs/backend-service#HTTP2-limitations

    HTTP/2 to backends is not supported for Kubernetes Engine

    Does that mean it doesn't work, or just not "supported"?

  • Configuring the LB backend to use ESP's HTTP/2 port did not work. I had to enable the SSL port, mount the certs (same as the LB frontend) to the container (Deployment), and expose it via the Service. So the backend is configured for the HTTP/2 protocol, but points to ESP's SSL port.

  • I enabled ESP's /healthz endpoint and used that for the backend health check, with the same protocol and port as the backend.

Unfortunately, I was never able to make an Ingress behave due to the inability to customize backend health checks. For that, it sounds like we'll need resolution on #42.

@bowei
Member Author

bowei commented Sep 4, 2018

@agau4779 -- did we ever update the OSS docs?

@manjotpahwa

Not yet, in progress.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 8, 2018
@bgetsug

bgetsug commented Dec 9, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2018
@clehene

clehene commented Dec 17, 2018

@bgetsug I was able to override the LB health check to point to the ESP http_port, and the probe then sees it as healthy. However, it gets overridden (I guess by GLBC) after a while, reverting to the main service port (HTTP/2).

I assume GLBC sets the probe to Ingress/spec/backend/servicePort and, hence, any other probe will not really work.

I was not able to figure out how the health probe is set by GLBC. After shaving yaks trying to build GLBC (which uses non-standard vendoring paths) against GoLand's stubbornness in accepting anything but vendor/pkg|src, I got everything linked, but the tests still won't run...

I wonder if I could at least figure out whether setting a different probe (e.g. TCP) would work.

Also see https://benguild.com/2018/11/11/quickstart-golang-kubernetes-grpc-tls-lets-encrypt/
Some options would be to inject the health check in gRPC/netty, or to mangle the nginx config in some way so it answers the check over HTTP/2.

Now the second problem:

While the probe works, it looks like the backend over HTTP/2 won't:

io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP status code 502
invalid content-type: text/html; charset=UTF-8
headers: Metadata(:status=502,content-type=text/html; charset=UTF-8,referrer-policy=no-referrer,content-length=332,date=Mon, 17 Dec 2018 09:58:11 GMT,alt-svc=clear)
DATA-----------------------------

<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>

DATA-----------------------------
	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
...

Regarding

Configuring the LB backend to use ESP's HTTP/2 port did not work. I had to enable the SSL port, mount the certs (same as the LB frontend) to the container (Deployment), and expose it via the Service. So the backend is configured for the HTTP/2 protocol, but points to ESP's SSL port.

Perhaps that's because gRPC downgrades HTTP/2 to h2c?
Doesn't this defeat the purpose, though? I was assuming TLS termination in the GLB. Otherwise, wouldn't a TCP load balancer on 443 with an HTTP/2-enabled backend work?

Also note that while HTTP/2 is not supported, the docs mention it (although until a few hours ago the link returned a 404).

I'm referring to this doc https://cloud.google.com/load-balancing/docs/https/
Particularly

If you want to configure an HTTP(S) load balancer using HTTP/2 with Google Kubernetes Engine Ingress, see HTTP/2 for Load Balancing with Ingress.

https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 <- this was broken

The remaining problem is that the HTTP/2 backend will likely not downgrade to h2c as gRPC does.
https://cloud.google.com/load-balancing/docs/backend-service#troubleshoot-HTTP2

@rramkumar1
Contributor

@gnarea That's an interesting question but I don't have a great answer for you there. I don't see a way to do this without tweaking your app.

@bowei
Member Author

bowei commented Sep 17, 2020

It seems like unencrypted HTTP/2 is not as common (https://http2.github.io/faq/#does-http2-require-encryption), so that is probably why support for it on the backend is not possible right now.

Generating the self-signed cert is straightforward and can be done with an init container running a script, but you really have to watch out for cert expiry.

#!/bin/bash
# Generates a self-signed cert/key pair valid for one year; without -nodes/-subj,
# openssl will prompt for a passphrase and subject interactively.
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
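A rough sketch of wiring that into a Deployment (names, image, and paths are assumptions): the init container writes the key and cert into an emptyDir volume that the gRPC server (or TLS sidecar) then reads. The -addext line adds a SubjectAlternativeName, which, as discussed further down this thread, the LB appears to need; it requires OpenSSL 1.1.1+. As noted above, the cert expires after a year, so the pod has to be recreated before then.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server                # hypothetical
spec:
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      initContainers:
      - name: gen-cert
        image: alpine/openssl      # any image with openssl and sh works
        command: ["sh", "-c"]
        args:
        - >
          openssl req -x509 -newkey rsa:4096 -nodes -days 365
          -keyout /certs/key.pem -out /certs/cert.pem
          -subj "/CN=$POD_IP"
          -addext "subjectAltName=IP:$POD_IP"
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: certs
          mountPath: /certs
      containers:
      - name: grpc-server
        image: example/grpc-server # hypothetical image; serves TLS using /certs/cert.pem and key.pem
        ports:
        - name: grpc
          containerPort: 8443
        volumeMounts:
        - name: certs
          mountPath: /certs
          readOnly: true
      volumes:
      - name: certs
        emptyDir: {}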

@gnarea

gnarea commented Sep 17, 2020

Thanks @rramkumar1 and @bowei!

Putting a TLS reverse proxy in front of the gRPC server fixed that problem, but it's uncovered another problem: The load balancer is failing to connect to the proxy and I suspect it's because the LB is refusing the self-issued certificate (not necessarily because it's self-issued, but for a different reason). Here's what's happening:

  1. I make a gRPC request to the LB over the Internet. Behind the scenes, this is an HTTP/2 POST request to https://cogrpc-test.relaycorp.tech/relaynet.cogrpc.CargoRelay/DeliverCargo. The result is a 502 response.
  2. In the LB logs, I can see that the reason for the failure is statusDetails: "failed_to_connect_to_backend":
     {
     "jsonPayload": {
         "statusDetails": "failed_to_connect_to_backend",
         "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
     },
     "httpRequest": {
         "requestMethod": "POST",
         "requestUrl": "https://cogrpc-test.relaycorp.tech/relaynet.cogrpc.CargoRelay/DeliverCargo",
         "requestSize": "526587",
         "status": 502,
         "responseSize": "447",
         "userAgent": "grpc-node/1.24.2 grpc-c/8.0.0 (linux; chttp2; ganges)",
         "remoteIp": "<redacted>",
         "serverIp": "10.12.2.9",
         "latency": "9.005462s"
     },
     "resource": {
         "type": "http_load_balancer",
         ...
         }
     },
     "timestamp": "2020-09-17T20:49:31.046721Z",
     "severity": "WARNING",
     "logName": "projects/public-gw/logs/requests",
     "receiveTimestamp": "2020-09-17T20:49:40.415675057Z",
     }
    
  3. In the TLS proxy logs, the request above maps to the following logs -- which suggests the TLS handshake failed:
     E 2020-09-17T20:49:31.344735524Z 20200917T204931.344037 [   40] 10.12.2.1:52170 :0 7:8 ssl handshake start
     E 2020-09-17T20:49:31.344918961Z 20200917T204931.344054 [   40] 10.12.2.1:52170 :0 7:8 ssl client handshake revents=1
     E 2020-09-17T20:49:31.345076960Z 20200917T204931.344089 [   40] 10.12.2.1:52170 :0 7:8 ssl client handshake err=SSL_ERROR_SYSCALL
     E 2020-09-17T20:49:31.345232699Z 20200917T204931.344097 [   40] {client} SSL socket error in handshake: No error information
    
  4. As expected, there are no corresponding logs on the gRPC server (with tracing enabled).
  5. I can confirm the TLS proxy is configured with a valid (yet untrusted) certificate and that HTTP/2 ALPN is supported. Here's what I get when I curl the proxy using port-forwarding in k8s (note that this isn't a valid gRPC request):
     $ curl -k -v https://localhost:8080
     *   Trying 127.0.0.1:8080...
     * TCP_NODELAY set
     * Connected to localhost (127.0.0.1) port 8080 (#0)
     * ALPN, offering h2
     * ALPN, offering http/1.1
     * successfully set certificate verify locations:
     *   CAfile: none
     CApath: /etc/ssl/certs
     * TLSv1.3 (OUT), TLS handshake, Client hello (1):
     * TLSv1.3 (IN), TLS handshake, Server hello (2):
     * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
     * TLSv1.3 (IN), TLS handshake, Certificate (11):
     * TLSv1.3 (IN), TLS handshake, CERT verify (15):
     * TLSv1.3 (IN), TLS handshake, Finished (20):
     * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
     * TLSv1.3 (OUT), TLS handshake, Finished (20):
     * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
     * ALPN, server accepted to use h2
     * Server certificate:
     *  subject: C=CH; ST=Zurich; L=Zurich; O=Snakeoil Inc; OU=IT Department; CN=10.12.2.9
     *  start date: Sep 17 21:38:37 2020 GMT
     *  expire date: Sep 17 21:38:37 2021 GMT
     *  issuer: C=CH; ST=Zurich; L=Zurich; O=Snakeoil Inc; OU=IT Department; CN=10.12.2.9
     *  SSL certificate verify result: self signed certificate (18), continuing anyway.
     * Using HTTP2, server supports multi-use
     * Connection state changed (HTTP/2 confirmed)
     * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
     * Using Stream ID: 1 (easy handle 0x55f78a0561d0)
     > GET / HTTP/2
     > Host: localhost:8080
     > User-Agent: curl/7.65.3
     > Accept: */*
     > 
     * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
     * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
     * old SSL session ID is stale, removing
     * Connection state changed (MAX_CONCURRENT_STREAMS == 3)!
     * HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
     * stopped the pause stream!
     * Connection #0 to host localhost left intact
     curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
    

How can I tell why the LB is aborting during the TLS handshake? Are there any logs I can check?

Is the LB expecting the backend certificate to have a specific Common Name or Subject Alternative Name? I'm currently using the IP of the pod (10.12.2.9 as shown in the logs above).

@rramkumar1
Contributor

@gnarea Is your backend trying to do mTLS? If so, it's not supported on GCLB.

Your errors look very similar to other errors I've seen where folks are trying to use mTLS.

@gnarea

gnarea commented Sep 18, 2020

@rramkumar1, no, no mutual TLS.

I've managed to solve this issue by ditching the TLS proxy and getting the gRPC server to do TLS with a self-issued certificate.

I think the problem is that the TLS proxy was generating a self-issued certificate with a CommonName (set to the IP address of the pod), but it wasn't setting the SubjectAlternativeName extension. This is important. So now I have some custom code to create the certificate and set the SAN:

// TypeScript code using the "selfsigned" NPM package
// (assumes: import * as selfsigned from 'selfsigned'; KeyCertPair comes from the gRPC library)
// I need to check if the "keyUsage" extension is required by the LB or the Go gRPC client used by the health checks
// ipAddress should be the IP of the pod

async function selfIssueCertificate(ipAddress: string): Promise<KeyCertPair> {
  const pems = selfsigned.generate([{ name: 'commonName', value: ipAddress }], {
    days: 365,
    extensions: [
      { name: 'basicConstraints', cA: true },
      {
        dataEncipherment: true,
        digitalSignature: true,
        keyCertSign: true,
        keyEncipherment: true,
        name: 'keyUsage',
        nonRepudiation: true,
      },
      {
        altNames: [
          {
            ip: ipAddress,
            type: 7, // IP Address
          },
        ],
        name: 'subjectAltName',
      },
    ],
  });
  return { cert_chain: Buffer.from(pems.cert), private_key: Buffer.from(pems.private) };
}

I can't believe this is finally working.

GCP folks, some feedback:

  • Please log the reason why the connection failed. failed_to_connect_to_backend is not enough at all.
  • Please create a Docker image that acts as a TLS proxy with all the appropriate configuration to work with your LB and an HTTP/2 backend.

@gnarea

gnarea commented Sep 18, 2020

Here's the final deployment in case it helps anyone:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gw-test-relaynet-internet-gateway-cogrpc
  labels:
    helm.sh/chart: relaynet-internet-gateway-0.1.0
    app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
    app.kubernetes.io/instance: gw-test
    app.kubernetes.io/version: "1.6.3"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
      app.kubernetes.io/instance: gw-test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: relaynet-internet-gateway-cogrpc
        app.kubernetes.io/instance: gw-test
    spec:
      containers:
        - name: cogrpc
          image: "<YOU-IMAGE>"
          imagePullPolicy: IfNotPresent
          env:
            - name: SERVER_IP_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - name: grpc
              containerPort: 8080
              protocol: TCP
        - name: cogrpc-health-check
          image: salrashid123/grpc_health_proxy:1.0.0
          imagePullPolicy: IfNotPresent
          command:
            - "/bin/grpc_health_proxy"
            - "-http-listen-addr"
            - "0.0.0.0:8082"
            - "-grpcaddr"
            - "127.0.0.1:8080"
            - "-service-name"
            - "relaynet.cogrpc.CargoRelay"
            - "-grpctls"
            - "-grpc-tls-no-verify"
            - "--logtostderr=1"
            - "-v"
            - "10"
          ports:
            - name: health-check
              containerPort: 8082
              protocol: TCP
          livenessProbe:
            httpGet:
              port: "health-check"
          readinessProbe:
            httpGet:
              port: "health-check"

@raboof

raboof commented Sep 18, 2020

Please create a Docker image that acts as a TLS proxy with all the appropriate configuration to work with your LB and an HTTP/2 backend

While this would be a useful component in the short term, I'd say the 'real' fix would be to allow using h2c (HTTP/2 without HTTPS) between the load balancer and the service 'directly'.

@gnarea

gnarea commented Sep 18, 2020

I totally agree with @raboof: GCP LBs should be able to handle TLS termination regardless of whether the backend uses HTTP 1 or 2.

Here's an issue to improve the logging in case anyone's interested: https://issuetracker.google.com/issues/168884858

@bowei
Member Author

bowei commented Sep 18, 2020

We will send this to the L7 proxy folks -- in the meantime, comment on the issue tracker -- it's a good way to show there is interest as well.

@salrashid123

@gnarea You can use Envoy to proxy the TLS intended for the gRPC service; here is an example of that I just updated, plus an easy-button way to generate the SNI cert from a subordinate cert.
However, if you use Envoy now, it's yet another container (making a grand total of +2 containers (Envoy + the grpc_healthcheck binary) just for gRPC LB, which is crazy complex...). You can, if you really, really want to, mash Lua to help with the health checks and still proxy (meaning you're now down to +1, but the complexity is shunted into the Envoy config, here). These are workarounds; there needs to be an easier way.
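For reference, a rough sketch of what such an Envoy sidecar config could look like: a TLS-terminating listener advertising h2 via ALPN that forwards to the local gRPC server over plaintext HTTP/2. The names, ports, and certificate paths here are assumptions, not taken from the example linked above.

static_resources:
  listeners:
  - name: grpc_tls
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: grpc_ingress
          codec_type: AUTO
          route_config:
            virtual_hosts:
            - name: local_grpc
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_grpc }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            alpn_protocols: ["h2"]           # GCLB negotiates HTTP/2 to the backend via ALPN
            tls_certificates:
            - certificate_chain: { filename: /certs/cert.pem }
              private_key: { filename: /certs/key.pem }
  clusters:
  - name: local_grpc
    type: STATIC
    connect_timeout: 5s
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}         # speak h2c to the gRPC server in the same pod
    load_assignment:
      cluster_name: local_grpc
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }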

@gnarea

gnarea commented Sep 21, 2020

these are workarounds. there needs to be an easier way

Indeed.

BTW, I've just created a separate issue for the gRPC health checks: https://issuetracker.google.com/issues/168994852

@gnarea

gnarea commented Sep 21, 2020

@bowei, is there an issue for the TLS termination on the L7 proxy? I'd like to star it and get updates.

@bowei
Member Author

bowei commented Sep 21, 2020

@gnarea I just noticed that this issue is pretty old and not specific to support for unencrypted HTTP/2 backends. It is probably better to open an issue for that topic so the activity can be tracked more easily.

gnarea added a commit to relaycorp/awala-gateway-internet that referenced this issue Sep 21, 2020
kodiakhq bot pushed a commit to relaycorp/awala-gateway-internet that referenced this issue Sep 21, 2020
* fix(CogRPC): Self-issue TLS certificate

As one of the few workarounds for kubernetes/ingress-gce#18

* Update URL to CogRPC server in functional test suite

* Fix COGRPC_ADDRESS in functional tests

* Document need to issue cert
@gnarea

gnarea commented Sep 22, 2020

Done @bowei: https://issuetracker.google.com/issues/169122105

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 20, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@override80

override80 commented Apr 9, 2021

For all of you coming here as of today, this is what I did to get a plaintext gRPC application working on GKE behind GCLB. Thanks @salrashid123 and @gnarea for providing insights. I used the docker-hitch sidecar from here instead of Envoy (much simpler to me), which generates TLS certificates on the fly, picking up the pod IP address at startup. I slightly modified it to accommodate HTTP/2 and ALPN with the correct ciphers supported by GCLB (here is the fork; it might be better to move the params to Dockerfile variables, but anyway...).

Here you can see redacted manifests for k8s:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: component
    app.kubernetes.io/instance: istance_name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: app_name
    app.kubernetes.io/part-of: part_name
    app.kubernetes.io/version: 083d4037
  name: app_name
  namespace: ns_name
spec:
  minReadySeconds: 5
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/instance: istance_name
      app.kubernetes.io/name: app_name
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: component
        app.kubernetes.io/instance: istance_name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: app_name
        app.kubernetes.io/part-of: part_name
        app.kubernetes.io/version: 083d4037
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - app_name
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - image: override80/hitch:1.0.0
        imagePullPolicy: IfNotPresent
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        name: hitch
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: gcr.io/cloud-solutions-images/grpc_health_proxy
        imagePullPolicy: IfNotPresent
        args:
        - --http-listen-addr=0.0.0.0:8080
        - --grpcaddr=localhost:8000
        - --service-name=app.Check
        - --logtostderr=1
        - -v=1
        name: hc-proxy
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: redacted.registry/app/server:e0dfbce1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        name: app
        ports:
        - containerPort: 8000
          name: grpc
          protocol: TCP
        - containerPort: 9091
          name: prom
          protocol: TCP
        resources:
          limits:
            cpu: 512m
            memory: 128Mi
          requests:
            cpu: 125m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app/.env
          name: configs
          readOnly: true
          subPath: .env
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-credentials
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: configs
        secret:
          defaultMode: 420
          secretName: configs
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: app-grpc-backendconfig
  namespace: ns_name
spec:
  healthCheck:
    port: 8080
    requestPath: /healthz
    type: HTTP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTP2"}'
    cloud.google.com/backend-config: '{"default": "app-grpc-backendconfig"}'
    cloud.google.com/neg: '{"ingress": true}'
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/component: component
    app.kubernetes.io/instance: istance_name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: app_name
    app.kubernetes.io/part-of: part_name
    app.kubernetes.io/version: 083d4037
  name: svc_name
  namespace: ns_name
spec:
  externalTrafficPolicy: Local
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: 8443
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: grpc
    port: 80
    protocol: TCP
    targetPort: grpc
  - name: prom
    port: 9091
    protocol: TCP
    targetPort: prom
  selector:
    app.kubernetes.io/instance: istance_name
    app.kubernetes.io/name: app_name
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    acme.cert-manager.io/http01-edit-in-place: "true"
    cert-manager.io/cluster-issuer: production
    external-dns.alpha.kubernetes.io/hostname: extarnal.dns.name
    external-dns.alpha.kubernetes.io/ttl: "200"
    kubernetes.io/ingress.class: gce
  labels:
    app.kubernetes.io/component: component
    app.kubernetes.io/instance: istance_name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: app_name
    app.kubernetes.io/part-of: part_name
    app.kubernetes.io/version: 083d4037
  name: app_name
  namespace: ns_name
spec:
  backend:
    serviceName: ingress
    servicePort: https
  tls:
  - hosts:
    - extarnal.dns.name
    secretName: external-dns-name-tls

Hope this helps guys! Thanks to everyone for your posts. This is a good resource as well.

gnarea added a commit to relaycorp/awala-gateway-internet that referenced this issue Jan 10, 2024
kodiakhq bot pushed a commit to relaycorp/awala-gateway-internet that referenced this issue Jan 11, 2024