
admin console url becomes erroneous (502 bad gateway) after some inactivity time #16850

Closed
2 tasks done
gaetanquentin opened this issue Feb 5, 2023 · 11 comments
Labels
area/core, area/dist/quarkus, area/operator (Keycloak.X Operator), kind/bug (Categorizes a PR related to a bug)

Comments

@gaetanquentin

gaetanquentin commented Feb 5, 2023

Before reporting an issue

  • I have searched existing issues
  • I have reproduced the issue with the latest release

Area

core

Describe the bug

Not sure about the area I indicated above ^^
It is about the "administration console".

After being logged in to the administration console and switching to my realm, if I refresh the page (F5) after a few minutes of inactivity, I get a "502 bad gateway".

If I go back to the administration console main link: same thing.

I had to delete this cookie in the browser for the page to come back OK: KEYCLOAK_LEGACY_IDENTITY

In the ingress nginx log, I can see this:

192.168.1.1 - - [05/Feb/2023:20:56:39 +0000] "GET /realms/master/protocol/openid-connect/auth?client_id=security-admin-console&redirect_uri=https%3A%2F%2Fkc.mysite.net%2Fadmin%2Fmaster%2Fconsole%2F&state=e2c141dd-5942-47d9-870e-fc5253c579a2&response_mode=fragment&response_type=code&scope=openid&nonce=f8100cc7-2535-450c-810d-ea97e2082339&prompt=none&code_challenge=gXdHqXCTwULscs_dC9iQHYiBzS3WKzV78gdgLOmwqV0&code_challenge_method=S256 HTTP/2.0" 502 552 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36" 300 0.008 [keycloak-kc-service-8443] [] 10.1.39.189:8443 0 0.008 502 db914cfb2f0a9a4e729ccfa54ccc5990

Version

kc: 20.0.3, kubernetes 1.26.0

Expected behavior

The URL keeps working, no matter how long the session has been idle.

Actual behavior

After logging into the admin console and switching to myrealm, the URL https://myurl.net/admin/master/console/#/myrealm becomes erroneous after some minutes of inactivity (I don't know exactly how long, more than 15 min I think), with a 502 bad gateway.

How to Reproduce?

kubernetes 1.26.0
ingress controller nginx

ingress rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    route.openshift.io/termination: passthrough
  creationTimestamp: "2023-02-02T18:25:07Z"
  generation: 2
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
  name: kc-ingress
  namespace: keycloak
  ownerReferences:
  - apiVersion: k8s.keycloak.org/v2alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Keycloak
    name: kc
    uid: 7943936b-bcf3-45c4-bf03-b122dbda756c
  resourceVersion: "5430795"
  uid: 41ac2160-f8b9-474d-915d-d8ad73552951
spec:
  defaultBackend:
    service:
      name: kc-service
      port:
        number: 8443
  rules:
  - host: kc.mysite.net
    http:
      paths:
      - backend:
          service:
            name: kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific
status:
  loadBalancer: {}

Anything else?

No response

@gaetanquentin gaetanquentin added the kind/bug and status/triage labels Feb 5, 2023
@ghost ghost added the area/admin/ui label Feb 5, 2023
@padraic-shafer

I had this problem also. The solution turned out to be increasing the buffer size used by nginx. It seems that reauthenticating an old session was causing a lot more data between nginx and the upstream server, causing nginx to choke with its default settings.

See Why do I get 502 when trying to authenticate.

Add these directives to the http block in nginx.conf (or the equivalent in your nginx ingress configuration):

proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;
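
For reference, a minimal sketch of where these directives would sit in a standalone nginx.conf; the host, certificate paths, and upstream below are placeholders (the backend address is taken from the log above), not a config from this issue:

http {
    # Larger buffers so the big cookies/headers sent while re-authenticating an old session fit
    proxy_buffer_size        128k;
    proxy_buffers            4 256k;
    proxy_busy_buffers_size  256k;

    upstream keycloak_backend {
        server 10.1.39.189:8443;              # placeholder backend address
    }

    server {
        listen 443 ssl;
        server_name kc.example.net;            # placeholder host
        ssl_certificate     /etc/nginx/tls.crt;   # placeholder cert paths
        ssl_certificate_key /etc/nginx/tls.key;

        location / {
            proxy_pass https://keycloak_backend;
        }
    }
}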

@ssilvert
Contributor

ssilvert commented Feb 6, 2023

@gaetanquentin Does that solution work for you? Do you mind if I close this issue?

@tyokyo320

I had this problem also. The solution turned out to be increasing the buffer size used by nginx. It seems that reauthenticating an old session was causing a lot more data between nginx and the upstream server, causing nginx to choke with its default settings.

See Why do I get 502 when trying to authenticate.

Add these directives to the http block in nginx.conf (or the equivalent in your nginx ingress configuration):

proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

Thank you so much! It's working for me!

nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
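
For context, a sketch of how that annotation would sit on the Ingress manifest posted above (only the relevant fields shown; whether an operator-managed Ingress preserves manually added annotations is not covered in this issue):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kc-ingress
  namespace: keycloak
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # Larger response-header buffer so the admin console re-authentication response fits
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
spec:
  rules:
  - host: kc.mysite.net
    http:
      paths:
      - backend:
          service:
            name: kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific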

@gaetanquentin
Author

@gaetanquentin Does that solution work for you? Do you mind if I close this issue?

It looks like it is working, with this global conf in the ConfigMap (microk8s)
nginx-load-balancer-microk8s-conf:

apiVersion: v1
data:
  proxy_buffer_size: 128k
  proxy_buffers: "4"
  proxy_busy_buffers_size: 256k
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-load-balancer-microk8s-conf","namespace":"ingress"}}
  creationTimestamp: "2023-01-16T00:02:33Z"
  name: nginx-load-balancer-microk8s-conf
  namespace: ingress
  resourceVersion: "7494247"
  uid: 6aff75f7-bf37-4b9f-9f4d-fd53c796b22e
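
As a side note, for the upstream ingress-nginx controller the ConfigMap keys are not the raw nginx directive names; going by the ingress-nginx ConfigMap documentation linked later in this thread, the equivalent entries would look roughly like this (whether the microk8s addon consumes them the same way is an assumption here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-microk8s-conf
  namespace: ingress
data:
  proxy-buffer-size: "128k"      # buffer for the response headers from Keycloak
  proxy-buffers-number: "4"      # number of per-connection upstream buffers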

@gaetanquentin
Author

gaetanquentin commented Feb 7, 2023

Finally, it is not resolved:
[screenshot]

I think editing the ConfigMap is not sufficient; the nginx controllers are not reloaded.
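
One way to check whether a ConfigMap change actually reached the controller is to inspect the rendered configuration inside the ingress controller pod; the pod name below is a placeholder (microk8s runs the controller in the ingress namespace):

kubectl -n ingress get pods
kubectl -n ingress exec <ingress-controller-pod> -- grep proxy_buffer /etc/nginx/nginx.conf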

@ssilvert
Contributor

ssilvert commented Feb 7, 2023

OK. I don't think this is a UI problem so I will change the area to core.

@gaetanquentin
Author

I had this problem also. The solution turned out to be increasing the buffer size used by nginx. It seems that reauthenticating an old session was causing a lot more data between nginx and the upstream server, causing nginx to choke with its default settings.

See Why do I get 502 when trying to authenticate.

Add these directives to the http block in nginx.conf (or the equivalent in your nginx ingress configuration):

proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

About 'proxy_buffers 4 256k;': in the ingress-nginx ConfigMap doc there is no proxy_buffers key, but [proxy-buffers-number] instead

(https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-buffers-number)

And what does '4 256k' mean? 4256k?

Regards,

@gaetanquentin
Author

OK, so with the global ConfigMap it doesn't work.
With the annotation:

nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"

it works fine.

Thank you @tyokyo320

@padraic-shafer

About 'proxy_buffers 4 256k;': in the ingress-nginx ConfigMap doc there is no proxy_buffers key, but [proxy-buffers-number] instead

(https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-buffers-number)

And what does '4 256k' mean? 4256k?

It seems like this particular nginx option might not be configurable through the k8s ingress ConfigMap? Here is the documentation from nginx for proxy_buffers number size.
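
For clarity on the question above: in the raw nginx directive the two values are separate parameters, the number of buffers and the size of each buffer, not a single value:

proxy_buffers 4 256k;   # up to 4 buffers of 256k each per connection, i.e. 1m in total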

@mabartos
Contributor

mabartos commented Mar 7, 2023

Related to #17167; more information here.

@ghost ghost added team/cloud-native labels Mar 7, 2023
@stianst stianst closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 8, 2023
@stianst
Contributor

stianst commented Mar 8, 2023

Closing as this is not a Keycloak issue.
