
OpenShift Console

Codename: "Bridge"


The console is a friendlier kubectl in the form of a single-page web app. It also integrates with other services like monitoring, chargeback, and OLM. Some of the things that go on behind the scenes include:

  • Proxying the Kubernetes API under /api/kubernetes
  • Providing additional non-Kubernetes APIs for interacting with the cluster
  • Serving all frontend static assets
  • User Authentication
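The first item above, serving the Kubernetes API under a console-owned prefix, amounts to a path rewrite before forwarding. The sketch below is illustrative only and is not the console's actual code; only the /api/kubernetes prefix comes from the list above:

```python
# Sketch: strip the console's proxy prefix so a request can be
# forwarded to the backing Kubernetes API server. Illustrative only.
K8S_PREFIX = "/api/kubernetes"

def rewrite_path(request_path: str) -> str:
    """Map a console proxy path to the upstream Kubernetes API path."""
    if not request_path.startswith(K8S_PREFIX):
        raise ValueError("not a Kubernetes proxy path")
    return request_path[len(K8S_PREFIX):] or "/"

print(rewrite_path("/api/kubernetes/api/v1/pods"))  # -> /api/v1/pods
```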

Quickstart

Dependencies:
  1. node.js >= 8 & yarn >= 1.3.2
  2. go >= 1.8, glide >= 0.12.0, and glide-vc
  3. kubectl and a k8s cluster
  4. jq (for scripts in contrib/)
  5. Google Chrome/Chromium >= 60 (needs --headless flag) for integration tests

Build everything:


Backend binaries are output to /bin.

Configure the application

OpenShift (with OAuth)

Registering an OpenShift OAuth client requires administrative privileges for the entire cluster, not just a local project. Run the following command to log in as cluster admin:

oc login -u system:admin

To run bridge locally connected to an OpenShift cluster, create an OAuthClient resource with a generated secret and read that secret:

oc process -f examples/console-oauth-client.yaml | oc apply -f -
oc get oauthclient console-oauth-client -o jsonpath='{.secret}' > examples/console-client-secret

If the CA bundle of the OpenShift API server is unavailable, fetch the CA certificates from a service account secret. Otherwise copy the CA bundle to examples/ca.crt:

oc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \
    jq '.items[0].data."service-ca.crt"' -r | python -m base64 -d > examples/ca.crt
# Note: "python -m base64" is used because the "base64" tool differs between Mac and Linux
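The jq-plus-Python pipeline above just base64-decodes one field of the secret. The decode step can also be done entirely in Python, which behaves the same on Mac and Linux (the secret contents below are dummy data standing in for the real command output):

```python
import base64

# Dummy stand-in for the JSON printed by `oc get secrets ... -o json`.
secret = {"items": [{"data": {"service-ca.crt": base64.b64encode(b"PEM bytes").decode()}}]}

# Equivalent of the jq filter plus the `python -m base64 -d` step above.
ca_crt = base64.b64decode(secret["items"][0]["data"]["service-ca.crt"])
print(ca_crt)  # -> b'PEM bytes'
```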

Set the OPENSHIFT_API environment variable to tell the script the API endpoint of your cluster.


Finally, run the console and visit localhost:9000.


OpenShift (without OAuth)

For local development, you can also disable OAuth and run bridge with an OpenShift user's access token. Run the following commands to create an admin user and start bridge for a cluster up environment:

oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
oc login -u admin
source ./contrib/

Native Kubernetes

If you have a working kubectl on your path, you can run the application with:

export KUBECONFIG=/path/to/kubeconfig
source ./contrib/

The script in contrib/ sets sensible defaults in the environment, and uses kubectl to query your cluster for endpoint and authentication information.

To configure the application to run by hand (or if the script doesn't work for some reason), you can manually provide a Kubernetes bearer token with the following steps.

First, get the ID of the secret with type kubernetes.io/service-account-token by running:

kubectl get secrets

then get the secret contents:

kubectl describe secrets/<secret-id-obtained-previously>

Use this token value to set the BRIDGE_K8S_BEARER_TOKEN environment variable when running Bridge.
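The same token can also be extracted programmatically from `kubectl get secrets/<secret-id> -o json`, where the data fields are base64-encoded (the JSON below is dummy data standing in for kubectl's output):

```python
import base64
import json

# Dummy stand-in for `kubectl get secrets/<secret-id> -o json` output.
raw = json.dumps({"data": {"token": base64.b64encode(b"my-bearer-token").decode()}})

# `kubectl describe` shows the token already decoded; in the JSON
# form, each data field must be base64-decoded first.
token = base64.b64decode(json.loads(raw)["data"]["token"]).decode()
print(token)  # -> my-bearer-token
```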


The builder-run script runs any command inside a Docker container to ensure a consistent build environment. For example, to build with Docker, run:

./ ./

The docker image used by builder-run is itself built and pushed by the push-builder script, which uses the file Dockerfile-builder to define the image. To update the builder-run build environment, first make your changes to Dockerfile-builder, then run push-builder, and then update the BUILDER_VERSION variable in builder-run to point to your new image. Our practice is to manually tag builder images in the form Builder-v$SEMVER once we're happy with the state of the push.

Compile, Build, & Push Image

(There is almost no reason to do this manually; Jenkins handles this automation.)

Builds an image, tags it with the current git sha, and pushes it to the repo.

You must set the DOCKER_USER and DOCKER_PASSWORD env vars or have a valid .dockercfg file.


Jenkins automation

Master branch:

  • Runs a build, pushes an image to Quay tagged with the commit sha

Pull requests:

  • Runs a build when PRs are created or PR commits are pushed
  • Comment with Jenkins rebuild to manually trigger a re-build
  • Comment with Jenkins push to push an image to Quay, tagged with: pr_[pr #]_build_[jenkins build #]

If changes are ever required for the Jenkins job configuration, apply them to both the regular console job and PR image job.

Contributing

See CONTRIBUTING for workflow & convention details.

See STYLEGUIDE for file format and coding style guide.

Dev Dependencies

go, glide, glide-vc, nodejs/yarn, kubectl

Frontend Development

All frontend code lives in the frontend/ directory. The frontend uses node, yarn, and webpack to compile dependencies into self-contained bundles which are loaded dynamically at runtime in the browser. These bundles are not committed to git. Tasks are defined in the scripts section of package.json and are aliased to yarn run <cmd> (in the frontend directory).

Install Dependencies

To install the build tools and dependencies:

yarn install

You must run this command once, and every time the dependencies change. node_modules are not committed to git.

Interactive Development

The following build task will watch the source code for changes and compile automatically. You must reload the page in your browser!

yarn run dev

If changes aren't detected, you might need to increase fs.inotify.max_user_watches.
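On Linux this limit is a sysctl; a line like the following in /etc/sysctl.conf raises it persistently (the value shown is a common choice, not a project requirement):

```ini
fs.inotify.max_user_watches = 524288
```

Apply the change with sudo sysctl -p.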

Tests

Run all unit tests:


Run backend tests:


Run frontend tests:


Integration Tests

Integration tests are run in a headless Chrome driven by protractor. Requirements include Chrome, a working cluster, kubectl, and bridge itself (see building above).

Note: If you are running integration tests against OpenShift, you should start bridge in a way that skips the login page.

Setup (run once, and any time you change node_modules via yarn add or yarn install):

cd frontend && yarn run webdriver-update

Run integration tests:

yarn run test-gui

Run integration tests on an OpenShift cluster:

yarn run test-gui-openshift

This will include the normal k8s CRUD tests and CRUD tests for OpenShift resources. It doesn't include ALM tests since it assumes ALM is not set up on an OpenShift cluster.

Hacking Integration Tests

Remove the --headless flag to Chrome (chromeOptions) in frontend/integration-tests/protractor.conf.ts to see what the tests are actually doing.

Dependency Management

Dependencies should be pinned to an exact semver, sha, or git tag (eg, no ^).
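In glide.yaml, that pinning looks like the following entry (package name and version are illustrative):

```yaml
import:
- package: github.com/example/somedep
  version: 1.2.3   # exact semver, no ^ range
```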


Whenever making vendor changes:

  1. Finish updating dependencies & writing changes
  2. Commit everything except vendor/ (eg, server: add x feature)
  3. Make a second commit with only vendor/ (eg, vendor: revendor)

Add new backend dependencies:

  1. Edit glide.yaml
  2. ./

Update existing backend dependencies:

  1. Edit the glide.yaml file to the desired version (most likely a git hash)
  2. Run ./
  3. Verify update was successful. glide.lock will have been updated to reflect the changes to glide.yaml and the package will have been updated in vendor.


Add new frontend dependencies:

yarn add <package@version>

Update existing frontend dependencies:

yarn upgrade <package@version>

Supported Browsers

We support the latest versions of the following browsers:

  • Edge
  • Chrome
  • Safari
  • Firefox

IE 11 and earlier are not supported.