This document walks through creating two managed Kubernetes clusters on separate providers (GKE and EKS) and deploying:
- Dex as the OIDC issuer for both clusters.
- Gangway web server to authenticate users to Dex and help generate Kubeconfig files.
- kube-oidc-proxy to expose both clusters to OIDC authentication.
- Contour as the ingress controller with TLS SNI passthrough enabled.
- Cert-Manager to issue and manage certificates.
It will also demonstrate how to enable different authentication methods that Dex supports, namely username and password, and GitHub; more are available.
The tutorial uses Cert-Manager to generate certificates signed by Let's Encrypt for components in both GKE and EKS using a DNS challenge. Although this is not the only way to generate certificates, the tutorial assumes that a domain belonging to your Google Cloud project will be used, and that records for sub-domains of this domain will be created to assign DNS to the components. A Google Cloud Service Account will be created to manage these DNS challenges, and its secrets passed to Cert-Manager.
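As background, a DNS-01 challenge proves control of a domain by publishing a TXT record. A minimal sketch of how that record is derived (per RFC 8555; this is illustrative, not Cert-Manager's actual code, and the domain and key authorization below are placeholders):

```shell
# The ACME CA expects a TXT record at _acme-challenge.<domain> whose value
# is the base64url-encoded SHA-256 digest of the key authorization string.
DOMAIN="dex.gke.mydomain.company.net"            # placeholder domain
KEYAUTH="some-token.some-account-key-thumbprint" # placeholder key authorization
NAME="_acme-challenge.${DOMAIN}"
VALUE=$(printf '%s' "$KEYAUTH" | openssl dgst -sha256 -binary \
  | openssl base64 | tr '+/' '-_' | tr -d '=\n')
echo "$NAME TXT $VALUE"
```

Cert-Manager creates and removes these records in your Google Cloud DNS zone automatically, using the Service Account credentials it is given.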
A Service Account has been created for Terraform, with its secrets stored at `~/.config/gcloud/terraform-admin.json`. The Service Account needs at least these IAM Roles attached:
- Compute Admin
- Kubernetes Engine Admin
- DNS Administrator
- Security Reviewer
- Service Account Admin
- Service Account Key Admin
- Project IAM Admin
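As a sketch only, the roles above could be attached with `gcloud`; the project ID and Service Account email are placeholders for your own values, and the role IDs correspond to the display names listed above:

```shell
PROJECT_ID=my-project                                            # placeholder
SA_EMAIL="terraform-admin@${PROJECT_ID}.iam.gserviceaccount.com" # placeholder

for role in roles/compute.admin roles/container.admin roles/dns.admin \
    roles/iam.securityReviewer roles/iam.serviceAccountAdmin \
    roles/iam.serviceAccountKeyAdmin roles/resourcemanager.projectIamAdmin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" --role "$role"
done
```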
You have an AWS account with permissions to create an EKS cluster and the other relevant permissions needed for a fully fledged cluster, including creating load balancers, instance pools, etc. Typically, these environment variables must be set when running Terraform and when deploying the manifests before OIDC authentication has been set up:
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN`
- `AWS_ACCESS_KEY_ID`
First the GKE and EKS clusters will be created, along with secrets to be used for OIDC authentication in each cluster. The Amazon Terraform module has resources that depend on the Google module, so the Google module must be applied first.
```shell
CLOUD=google make terraform_apply
CLOUD=amazon make terraform_apply
```
This will create a standard Kubernetes cluster in both EKS and GKE, a Service Account to manage Google Cloud DNS records for DNS challenges, and OIDC secrets for both clusters. It should generate a JSON configuration file for each cluster, in `./manifests/google-config.json` and `./manifests/amazon-config.json` respectively.
Copy `config.dist.jsonnet` to both `gke-config.jsonnet` and `eks-config.jsonnet`.
These two files will hold the configuration for setting up OIDC authentication in both clusters, as well as for assigning DNS. First, determine which sub-domain will be used for each cluster, using a domain you own in your Google Cloud project, e.g.:

- `gke.mydomain.company.net`
- `eks.mydomain.company.net`
Populate each configuration file with its corresponding domain and a Let's Encrypt contact email.
Since the GKE cluster will be hosting Dex, the OIDC issuer, its configuration file must define how users will authenticate. Here we will show two methods: username and password, and GitHub.
Usernames and passwords can be populated with the following block within the `dex` block:
```jsonnet
dex+: {
  users: [
    $.dex.Password('admin@example.net', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
  ],
},
```
The username is what the user will authenticate with, and is the identity used for RBAC within Kubernetes. The password is a bcrypt hash of the plain text password, which can be generated as follows:
```shell
htpasswd -bnBC 10 "" MyVerySecurePassword | tr -d ':'
```
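The output is a hash in bcrypt's modular crypt format. Purely as an illustration of that format, the example hash used in the configuration snippets decomposes as:

```shell
# Decompose a bcrypt hash: $<variant>$<cost>$<22-char salt><31-char digest>.
HASH='$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'
REST=${HASH#\$}          # drop leading '$'
VARIANT=${REST%%\$*}     # bcrypt variant, e.g. 2y or 2b
REST=${REST#*\$}
COST=${REST%%\$*}        # work factor: 2^COST hashing rounds
REST=${REST#*\$}
SALT=$(printf '%s' "$REST" | cut -c1-22)   # 128-bit salt, base64-encoded
DIGEST=$(printf '%s' "$REST" | cut -c23-)  # 184-bit digest, base64-encoded
echo "variant=$VARIANT cost=$COST"         # prints: variant=2y cost=10
```

A cost of 10, as used in the `htpasswd` command above, is a reasonable default; raising it makes brute-forcing the password exponentially more expensive.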
Dex also supports multiple 'connectors' that enable third party applications to provide OAuth to its system. For GitHub, this involves creating an 'OAuth App'. The Authorization callback URL should be populated with the Dex callback URL, i.e. `https://dex.gke.mydomain.company.net/callback`.
The resulting Client ID and Client Secret can then be used to populate the configuration file:
```jsonnet
dex+: {
  connectors: [
    $.dex.Connector('github', 'GitHub', 'github', {
      clientID: 'myGithubAppClientID',
      clientSecret: 'myGithubAppClientSecret',
      orgs: [{
        name: 'company',
      }],
    }),
  ],
},
```
You can find more information on GitHub OAuth Apps in GitHub's developer documentation.
Finally, Dex needs to be configured to also accept the Gangway client in the EKS cluster. To do this, we add a Dex Client block to the configuration. We need to populate its redirect URL, as well as the client ID and client secret, using the values that terraform wrote to `./manifests/amazon-config.json`. The resulting block should look like:
```jsonnet
eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
  secret: 'my_client_secret_in_./manifests/amazon-config.json',
  redirectURIs: [
    'https://gangway.eks.mydomain.company.net/callback',
  ],
},
```
The resulting `gke-config.jsonnet` file should look similar to:
```jsonnet
(import './manifests/main.jsonnet') {
  base_domain: 'gke.mydomain.company.net',

  cert_manager+: {
    letsencrypt_contact_email:: 'myemail@company.net',
  },

  dex+: {
    users: [
      $.dex.Password('admin@example.net', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
    ],
    connectors: [
      $.dex.Connector('github', 'GitHub', 'github', {
        clientID: 'myGithubAppClientID',
        clientSecret: 'myGithubAppClientSecret',
        orgs: [{
          name: 'company',
        }],
      }),
    ],
  },

  eksClient: $.dex.Client('my_client_id_in_./manifests/amazon-config.json') + $.dex.metadata {
    secret: 'my_client_secret_in_./manifests/amazon-config.json',
    redirectURIs: [
      'https://gangway.eks.mydomain.company.net/callback',
    ],
  },
}
```
The EKS cluster will not be hosting the Dex server, so it only needs to be configured with its own domain, Dex's domain, and the Let's Encrypt contact email. The resulting `eks-config.jsonnet` file should look similar to:
```jsonnet
(import './manifests/main.jsonnet') {
  base_domain: 'eks.mydomain.company.net',
  dex_domain: 'dex.gke.mydomain.company.net',

  cert_manager+: {
    letsencrypt_contact_email:: 'myemail@company.net',
  },
}
```
Once the configuration files have been created, the manifests can be deployed. Copy or create a symbolic link from the `gke-config.jsonnet` file to `config.jsonnet` and apply:
```shell
$ ln -s gke-config.jsonnet config.jsonnet
$ export CLOUD=google
$ make manifests_apply
```
You should then see the components deployed to the cluster in the `auth` namespace.
```shell
$ export KUBECONFIG=.kubeconfig-google
$ kubectl get po -n auth
NAME                       READY   STATUS              RESTARTS   AGE
contour-55c46d7969-f9gfl   2/2     Running             0          46s
dex-7455744797-p8pql       0/1     ContainerCreating   0          12s
gangway-77dfdb68d-x84hj    0/1     ContainerCreating   0          11s
```
Verify that the ingress has been configured as you were expecting:

```shell
$ kubectl get ingressroutes -n auth
```
You should now see the DNS challenge being fulfilled by Cert-Manager in your DNS Zone details in the Google Cloud console.
Once complete, three TLS secrets will be generated: `gangway-tls`, `dex-tls`, and `kube-oidc-proxy-tls`.

```shell
$ kubectl get -n auth secret
```
You can save these certificates locally and restore them at any time using:

```shell
$ make manifests_backup_certificates
$ make manifests_restore_certificates
```
An A record can now be created so that DNS resolves to the Contour load balancer's public IP address. Take note of the external IP address exposed:

```shell
$ kubectl get svc contour -n auth
```
Create an A record set with a wildcard sub-domain on your domain, with some reasonable TTL, pointing to the exposed IP address of the Contour load balancer.

```
DNS name:             *.gke.mydomain.company.net
Record resource type: A
IPv4 address:         $CONTOUR_IP
```
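For reference, a hypothetical `gcloud` equivalent of the record above; the managed zone name is a placeholder, and `CONTOUR_IP` is the external IP noted earlier:

```shell
gcloud dns record-sets create '*.gke.mydomain.company.net.' \
  --zone my-zone-name --type A --ttl 300 --rrdatas "$CONTOUR_IP"
```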
You can check that the DNS record has been propagated by trying to resolve it (note that `host` takes a hostname, not a URL):

```shell
$ host gangway.gke.mydomain.company.net
```
Once propagated, you can visit the Gangway URL, follow the instructions, and download your Kubeconfig with OIDC authentication, pointing to the kube-oidc-proxy. When trying the Kubeconfig, you should be greeted with an error message saying that your OIDC username does not have enough RBAC permissions to access that resource.
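To grant such permissions, bind the OIDC identity to a role. A hypothetical example granting the static Dex user from earlier read-only access cluster-wide; the subject name must match the identity asserted by Dex:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-user-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                  # built-in read-only ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin@example.net     # the OIDC username asserted by Dex
```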
The EKS cluster manifests can now be deployed using `eks-config.jsonnet`:
```shell
$ rm config.jsonnet && ln -s eks-config.jsonnet config.jsonnet
$ export CLOUD=amazon
$ make manifests_apply
```
Get the AWS DNS URL of the Contour load balancer:
```shell
$ export KUBECONFIG=.kubeconfig-amazon
$ kubectl get svc -n auth
```
Once the Contour load balancer has an external URL, we need to create a CNAME record set to resolve the DNS.
```
DNS name:             *.eks.mydomain.company.net
Record resource type: CNAME
Canonical name:       $CONTOUR_AWS_URL
```
When the components have their TLS secrets, you will be able to log in to the Gangway portal on EKS and download your Kubeconfig. Again, when trying this Kubeconfig, you should initially be greeted with an unauthorized error for that resource, until RBAC permissions have been granted to this user.