This repository has been archived by the owner on Dec 7, 2020. It is now read-only.

Gatekeeper cannot get OpenID configuration #576

Closed
abstractj opened this issue Apr 28, 2020 · 9 comments

@abstractj

What:

I'm trying to use Gatekeeper to protect an application. I'm getting the error shown below in the log output:

1.5804951742046025e+09 info starting the service {"prog": "keycloak-gatekeeper", "author": "Keycloak", "version": "7.0.0 (git+sha: f66e137, built: 03-09-2019)"}

1.58049517420485e+09 info attempting to retrieve configuration discovery url {"url": "https://login.citygate.io/auth/realms/master", "timeout": "30s"}

1.580495184205164e+09 warn failed to get provider configuration from discovery {"error": "Get https://login.citygate.io/auth/realms/master/.well-known/openid-configuration: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}

My Gatekeeper configuration looks like this:

discovery-url: https://login.citygate.io/auth/realms/master
client-id: docker-admin
client-secret:
listen: :80
enable-refresh-tokens: true
redirection-url: https://docker-admin.citygate.io
encryption-key:
upstream-url: http://app:5000
resources:
- uri: /*
  methods:
  - GET
  - POST
  roles:
  - admin

Reference:

@luketainton

Hi @abstractj,

Thanks for migrating this issue! I worked around this problem by manually creating an nsswitch.conf file and bind-mounting it into the container at /etc/nsswitch.conf; the file contents are below.

hosts: files dns

I assume this forces it to look at /etc/hosts first, which is the desired behaviour because I had used the extra_hosts option in docker-compose.yml to manually set the IP for my Keycloak server.
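In docker-compose terms, the workaround looks roughly like the sketch below. The service name, the local ./nsswitch.conf path, and the X.X.X.X placeholder are illustrative, not taken from my actual setup:

```yaml
# Sketch: bind-mount a local nsswitch.conf (containing "hosts: files dns")
# over /etc/nsswitch.conf, and pin the Keycloak hostname with extra_hosts.
gatekeeper:
  image: quay.io/louketo/louketo-proxy:latest
  volumes:
    - type: bind
      source: ./nsswitch.conf      # local file with the single line "hosts: files dns"
      target: /etc/nsswitch.conf
  extra_hosts:
    - "login.citygate.io:X.X.X.X"  # replace with your Keycloak server's IP
```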

Hope this helps!

@abstractj abstractj self-assigned this May 10, 2020
@abstractj
Author

Thanks for reporting back. For now I'm closing this issue, as it seems to be a Docker configuration issue rather than an issue in Louketo.

@ASzc maybe the introduction of nsswitch.conf is something that we can take into consideration for #609 to prevent issues like this. But I will leave that up to you.

@ASzc
Contributor

ASzc commented Jun 11, 2020

In #638, with UBI, there will be a /etc/nsswitch.conf file present in the image. Its hosts line is the following:

hosts:      files dns myhostname

I think that is acceptable as far as this issue goes, so no further action is required, unless we move to scratch or a similar base image in the future.

@TcaManager

Hi, I have a similar problem with the Bitnami image (https://github.com/bitnami/bitnami-docker-keycloak-gatekeeper/issues/11), so I'm trying Louketo too, but the same problem appears.
My docker-compose.yml file:

version: '3.7'

services:
  postgres:
    image: postgres
    volumes:
      - type: bind
        source: ../postgres_data
        target: /var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=Pa55w0rd
    ports:
      - 5432:5432
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: 'pgadmin4@pgadmin.org'
      PGADMIN_DEFAULT_PASSWORD: 'Pa55w0rd'
    volumes:
      - type: bind
        source: ../pgadmin_data
        target: /var/lib/pgadmin
    ports:
      - 5050:80
    restart: unless-stopped
    depends_on:
      - postgres
  ldap:
    image: i2hm/openldap:latest
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: postgres
      DB_SCHEMA: public
      DB_PASSWORD: Pa55w0rd
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: Pa55w0rd
    ports:
      - 8080:8080
    expose:
      - 8080
    depends_on:
      - postgres
  louketo-proxy:
    image: quay.io/louketo/louketo-proxy:latest
    command: >-
      --config /opt/louketo/config.yml
    ports:
      - 3000:3000
    volumes:
      - type: bind
        source: ../gatekeeper/config.yml
        target: /opt/louketo/config.yml
    depends_on:
      - keycloak
      - web
  web:
    build: .
    volumes:
      - type: bind
        source: ../gunicorn_log
        target: /var/log
    ports:
      - 5000:5000
    expose:
      - 5000
    depends_on:
      - postgres

My config file.

client-id: flask_example
client-secret: 7216c35c-0f0a-4dbf-adc8-beb0dc933f27
discovery-url: http://keycloak:8080/auth/realms/identite
enable-default-deny: true
listen: :3000
upstream-url: http://web:5000
resources:
- uri: /admin*
  methods:
  - GET
  roles:
  - client:test1
  - client:test2
  require-any-role: true
  groups:
  - admins
  - users
- uri: /backend*
  roles:
  - client:test1
- uri: /public/*
  white-listed: true
- uri: /favicon
  white-listed: true
- uri: /css/*
  white-listed: true
- uri: /img/*
  white-listed: true
headers:
  myheader1: value_1
  myheader2: value_2

I get this error.

louketo-proxy_1  | 2020-08-20T12:32:14.829Z	info	attempting to retrieve configuration discovery url	{"url": "http://keycloak:8080/auth/realms/identite", "timeout": "30s"}
louketo-proxy_1  | 2020-08-20T12:32:14.835Z	warn	failed to get provider configuration from discovery	{"error": "Get \"http://keycloak:8080/auth/realms/identite/.well-known/openid-configuration\": dial tcp: lookup keycloak on 127.0.0.11:53: no such host"}
louketo-proxy_1  | 2020-08-20T12:32:17.835Z	info	attempting to retrieve configuration discovery url	{"url": "http://keycloak:8080/auth/realms/identite", "timeout": "30s"}
louketo-proxy_1  | 2020-08-20T12:32:17.848Z	warn	failed to get provider configuration from discovery	{"error": "Get \"http://keycloak:8080/auth/realms/identite/.well-known/openid-configuration\": dial tcp: lookup keycloak on 127.0.0.11:53: no such host"}

Thanks for your help. Sorry for posting here and at bitnami too.

@luketainton

luketainton commented Aug 20, 2020

Hi @TcaManager,

Did you try manually adding the /etc/nsswitch.conf file?

Another idea is to add the extra_hosts directive to your config. You'll want something like this:

louketo-proxy:
    image: quay.io/louketo/louketo-proxy:latest
    command: >-
      --config /opt/louketo/config.yml
    ports:
      - 3000:3000
    volumes:
      - type: bind
        source: ../gatekeeper/config.yml
        target: /opt/louketo/config.yml
    depends_on:
      - keycloak
      - web
    extra_hosts:
      - "keycloak:X.X.X.X"

Swap keycloak for the actual hostname, and X.X.X.X for the internal or external IP address of the Keycloak server.

@TcaManager

Interesting, @luketainton, thanks for your help. What do you mean by internal/external IP address? Do I need the IP of one of the Docker interfaces from ifconfig? My host has many FQDNs and IPs in /etc/hosts, but I'm on Linux and I wonder which IP my container would use to reach the host. I see docker0 in ifconfig and many interfaces like br-9e188c5db690 (br for bridge?).

@luketainton

Is your Keycloak container accessible from other hosts? If so, it'll be the IP address of the host. Otherwise, try copying the depends_on section, renaming the copy to links, and then removing the extra_hosts section.

@TcaManager

You mean from other containers? Keycloak finds the postgres container, so I guess containers can see each other within the same docker-compose.yml file. From my host (I mean my workstation, which is the Docker host) I can access Keycloak on port 8080, and all other containers that have a ports section, except Louketo, which doesn't start. My workstation has a lot of IP addresses, and I'd like to know which one is reachable by containers, depending on which network_mode I use.

For example

julien@julien-HP-EliteBook-820-G3:~/WorkSpace/keycloak_poc/infra$ ifconfig
br-2e7e54d28b65: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.20.0.1  netmask 255.255.0.0  broadcast 172.20.255.255
        ether 02:42:9e:3c:be:e2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-4f643451a7a7: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.23.0.1  netmask 255.255.0.0  broadcast 172.23.255.255
        ether 02:42:c4:b9:a5:cb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-7d2178051e92: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.22.0.1  netmask 255.255.0.0  broadcast 172.22.255.255
        ether 02:42:47:e0:f3:15  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-9e188c5db690: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.24.0.1  netmask 255.255.0.0  broadcast 172.24.255.255
        inet6 fe80::42:c4ff:fec6:6389  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c4:c6:63:89  txqueuelen 0  (Ethernet)
        RX packets 3030  bytes 3438269 (3.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1194  bytes 244273 (244.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-c8cd4df9c68d: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.21.0.1  netmask 255.255.0.0  broadcast 172.21.255.255
        inet6 fe80::42:63ff:fe16:d5c9  prefixlen 64  scopeid 0x20<link>
        ether 02:42:63:16:d5:c9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39  bytes 5340 (5.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-f843d01574b1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        ether 02:42:7d:65:6b:4d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:fdff:fefb:d18a  prefixlen 64  scopeid 0x20<link>
        ether 02:42:fd:fb:d1:8a  txqueuelen 0  (Ethernet)
        RX packets 88  bytes 6480 (6.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 101  bytes 14247 (14.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:3eff:fe63:6ff3  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3e:63:6f:f3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83  bytes 11474 (11.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s31f6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.30.146.185  netmask 255.255.255.128  broadcast 172.30.146.255
        inet6 fe80::8edc:b268:2323:3678  prefixlen 64  scopeid 0x20<link>
        ether 30:e1:71:7a:33:06  txqueuelen 1000  (Ethernet)
        RX packets 795340  bytes 304731964 (304.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 206686  bytes 31281217 (31.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xe1200000-e1220000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Boucle locale)
        RX packets 80778  bytes 9923390 (9.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 80778  bytes 9923390 (9.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth7878603: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5035:57ff:fefa:cc79  prefixlen 64  scopeid 0x20<link>
        ether 52:35:57:fa:cc:79  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 149  bytes 19436 (19.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

@luketainton

@TcaManager it'll be whatever address is on the NIC that represents that particular Docker network. You can run docker inspect <NAME>, which will show you the details (including the IP address) of the container you specify.
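As a sketch, you can pull just the per-network IP out of docker inspect with a Go-template format string. "keycloak" here is assumed to be the container name; check docker ps for the real one on your machine:

```shell
# Print the IP address the "keycloak" container has on each Docker network
# it is attached to (one IP per network, space-separated).
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' keycloak
```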

Otherwise, you can try this instead:

louketo-proxy:
    image: quay.io/louketo/louketo-proxy:latest
    command: >-
      --config /opt/louketo/config.yml
    ports:
      - 3000:3000
    volumes:
      - type: bind
        source: ../gatekeeper/config.yml
        target: /opt/louketo/config.yml
    depends_on:
      - keycloak
      - web
    links:
      - keycloak
      - web


4 participants