Logs flooded with ssl handshake errors #8022
Comments
@esatterwhite Are you using Kong to proxy to an HTTPS upstream? In that case you might need to configure …
No, Kong is doing TLS termination and proxying plain HTTP to an upstream.
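For context, if Kong *were* proxying to an HTTPS upstream, the upstream protocol is typically declared with the `konghq.com/protocol` annotation on the backing Service. A minimal sketch (not needed in this setup, since the upstream speaks plain HTTP; the `lda` Service name is taken from the Ingress below, and the ports are illustrative):

```yaml
# Sketch only: tell Kong to speak TLS to this upstream Service.
apiVersion: v1
kind: Service
metadata:
  name: lda
  annotations:
    konghq.com/protocol: https   # upstream protocol hint for Kong
spec:
  ports:
  - port: 443
```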
@esatterwhite Could you share the CRDs you are using to configure Kong entities (Services, Routes, Plugins)?
The only plugin currently in use is the Prometheus plugin. Here is the Kong setup; let me know if you need anything else:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    konghq.com/https-redirect-status-code: "301"
    konghq.com/protocols: https
    kubernetes.io/ingress.class: kong
    kubernetes.io/tls-acme: "true"
    razee.io/branch: main
    razee.io/build-url: https://jenkins.use.int.logdna.net/job/answerbook/job/tooling-kong/job/main/32/
    razee.io/commit-sha: a4fef9f
    razee.io/git-repo: https://github.com/answerbook/tooling-kong.git
  creationTimestamp: "2021-10-20T21:59:50Z"
  generation: 1
  labels:
    app: ingress-kong
    deploy.razee.io/Reconcile: "false"
    razee/watch-resource: debug
    version: 2.6.0-alpine.20211108T191546Z
  name: ingress-kong
  namespace: default
  resourceVersion: "522379583"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/ingress-kong
  uid: 1f99a043-beb2-4831-8ea9-ef769a7c7a22
spec:
  rules:
  - host: api.use.stage.logdna.net
    http:
      paths:
      - backend:
          serviceName: lda
          servicePort: 80
        path: /v1
      - backend:
          serviceName: lda
          servicePort: 80
        path: /v2/export
  - host: app.use.stage.logdna.net
    http:
      paths:
      - backend:
          serviceName: ldw
          servicePort: 80
        path: /
  - host: tail.use.stage.logdna.net
    http:
      paths:
      - backend:
          serviceName: ldat
          servicePort: 80
        path: /
  tls:
  - hosts:
    - '*.use.stage.logdna.net'
    secretName: ingress-kong-secret
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    konghq.com/https-redirect-status-code: "301"
    konghq.com/protocols: https
    kubernetes.io/ingress.class: kong
    kubernetes.io/tls-acme: "true"
    razee.io/branch: main
    razee.io/build-url: https://jenkins.use.int.logdna.net/job/answerbook/job/tooling-kong/job/main/32/
    razee.io/commit-sha: a4fef9f
    razee.io/git-repo: https://github.com/answerbook/tooling-kong.git
  creationTimestamp: "2021-10-20T21:28:58Z"
  generation: 1
  labels:
    app: ingress-kong-fallback
    deploy.razee.io/Reconcile: "false"
    razee/watch-resource: debug
    version: 2.6.0-alpine.20211108T191546Z
  name: ingress-kong-fallback
  namespace: default
  resourceVersion: "522379559"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/ingress-kong-fallback
  uid: 570d3309-8b6a-419a-bf76-944776f33b52
spec:
  backend:
    serviceName: lda
    servicePort: 80
  tls:
  - hosts:
    - '*.use.stage.logdna.net'
    secretName: ingress-kong-secret
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  creationTimestamp: "2021-11-08T22:38:30Z"
  generation: 1
  labels:
    app: ingress-kong-fallback
    deploy.razee.io/Reconcile: "false"
    razee/watch-resource: debug
    version: 2.6.0-alpine.20211108T191546Z
  name: ingress-kong-secret
  namespace: default
  ownerReferences:
  - apiVersion: networking.k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Ingress
    name: ingress-kong-fallback
    uid: 570d3309-8b6a-419a-bf76-944776f33b52
  resourceVersion: "522467334"
  selfLink: /apis/cert-manager.io/v1/namespaces/default/certificates/ingress-kong-secret
  uid: f6f104a7-4886-42e0-8c97-5874e6e022d2
spec:
  dnsNames:
  - '*.use.stage.logdna.net'
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: ingress-kong-secret
  usages:
  - digital signature
  - key encipherment
status:
  conditions:
  - lastTransitionTime: "2021-11-08T22:38:31Z"
    message: Certificate is up to date and has not expired
    observedGeneration: 1
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2022-01-18T20:28:59Z"
  notBefore: "2021-10-20T20:29:00Z"
  renewalTime: "2021-12-19T20:28:59Z"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: "2021-10-20T23:24:53Z"
  generation: 6
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: kong
  resourceVersion: "522379696"
  selfLink: /apis/apps/v1/namespaces/kong/deployments/ingress-kong
  uid: f4af5447-4ed2-4765-b278-a0d60525b342
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-kong
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "8100"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: ingress-kong
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: workload-app
                operator: In
                values:
                - enabled
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-kong
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
        - name: KONG_PORT_MAPS
          value: 80:8000, 443:8443
        - name: KONG_ADMIN_LISTEN
          value: 127.0.0.1:8444 ssl
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_ERROR_DEFAULT_TYPE
          value: application/json
        - name: KONG_HEADERS
          value: server_tokens, latency_tokens
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_NGINX_HTTP_CLIENT_BODY_TEMP_PATH
          value: /nginx/buffers/client
        - name: KONG_NGINX_HTTP_PROXY_TEMP_PATH
          value: /nginx/buffers/proxy
        - name: KONG_ADMIN_ACCESS_LOG
          value: "off"
        - name: KONG_PROXY_ACCESS_LOG
          value: "off"
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_NGINX_PROXY_CLIENT_MAX_BODY_SIZE
          value: 25m
        - name: KONG_NGINX_PROXY_CLIENT_BODY_BUFFER_SIZE
          value: 256k
        - name: KONG_NGINX_PROXY_PROXY_IGNORE_CLIENT_ABORT
          value: "off"
        - name: KONG_UPSTREAM_KEEPALIVE_POOL_SIZE
          value: "100"
        - name: KONG_UPSTREAM_KEEPALIVE_MAX_REQUESTS
          value: "100"
        - name: KONG_UPSTREAM_KEEPALIVE_IDLE_TIMEOUT
          value: "60"
        image: us.gcr.io/logdna-k8s/kong:2.6.0-alpine.20211108T191546Z
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: kong-proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "3"
            memory: 2Gi
          requests:
            cpu: "2"
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /nginx/buffers
          name: nginx-tmp-buffers
      - env:
        - name: CONTROLLER_KONG_ADMIN_URL
          value: https://127.0.0.1:8444
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/kong-proxy
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: us.gcr.io/logdna-k8s/kong-ingress-controller:1.3-alpine.20211108T191546Z
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ingress-controller
        ports:
        - containerPort: 8080
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kong-sa
      serviceAccountName: kong-sa
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: nginx-tmp-buffers
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-10-20T23:25:17Z"
    lastUpdateTime: "2021-10-20T23:25:17Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-10-20T23:24:53Z"
    lastUpdateTime: "2021-11-08T20:07:11Z"
    message: ReplicaSet "ingress-kong-5c846dbd79" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 6
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```
@fffonion Do you need anything else specifically?
@esatterwhite The config doesn't show any unreasonable items, and as it's showing with …
@fffonion Not really. There is only a single one; the fallback is just there to funnel anything that we can't match. I'm mostly taking a wild stab in the dark, but that's my best guess here. It still doesn't entirely make sense, because I would expect TLS to be terminated before routing to the upstream.
It's complaining during the initial SSL handshake, from what it looks like. I don't know enough about Kong's internal SSL handling to know where that would be coming from, unless it is the SNI problem. What would fix that?
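If the SNI guess is worth chasing, one quick check (hypothetical commands, not from the thread) is to compare the certificate Kong presents with and without SNI: clients that omit SNI would be served the default certificate instead of the cert-manager one. The hostname is taken from the Ingress above, and `-noservername` requires OpenSSL 1.1.1+:

```shell
# With SNI: expect the cert-manager-issued certificate
openssl s_client -connect api.use.stage.logdna.net:443 \
  -servername api.use.stage.logdna.net </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# Without SNI: expect Kong's default self-signed certificate
openssl s_client -connect api.use.stage.logdna.net:443 \
  -noservername </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

If the two subjects differ, some portion of the handshake errors likely comes from clients (health checkers, scanners, old libraries) that don't send SNI.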
@esatterwhite It's relatively hard to pinpoint the issue at this time. Is it a production instance or a staging/dev instance?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Is there an existing issue for this?
Kong version (`$ kong version`): 2.6.0-alpine
Current Behavior
Kong is flooding the debug logs with SSL handshake errors.
I'm running the ingress controller with cert-manager as outlined in the docs, and this currently seems to work: Kong is serving HTTPS requests and terminating TLS just fine. It's not clear from these logs what Kong is complaining about, or what can be done to fix it.
Is this a problem with the default cert that ships with Kong?
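On the default-cert question: if non-SNI traffic hitting the built-in self-signed certificate turns out to be the source of the noise, one possible mitigation (a sketch, not from the thread; the mount paths are placeholders and the certificate Secret would need to be mounted into the pod) is to point Kong's `ssl_cert`/`ssl_cert_key` settings at a real certificate so the built-in default is never served:

```yaml
# Sketch: extra env vars for the kong-proxy container, overriding the
# default certificate Kong presents when no SNI match is found.
- name: KONG_SSL_CERT
  value: /etc/kong-certs/tls.crt   # hypothetical mount path
- name: KONG_SSL_CERT_KEY
  value: /etc/kong-certs/tls.key   # hypothetical mount path
```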
Expected Behavior
No response
Steps To Reproduce
No response
Anything else?
No response