This console plugin for OpenShift summarizes Velero APIs that are usually installed by OADP: Backup, Restore, Schedule, BackupStorageLocation, VolumeSnapshotLocation, and related resources. While these resources are intended for OpenShift cluster administrators, the plugin can also surface NonAdmin resources for project administrators or editors.
The plugin is added to Administrator → Administration → Backup / Restore and provides the following:
- Overview KPIs, outcome donuts, and a backup/restore timeline (Velero sections only when Velero admin tabs apply — see RBAC and user experience below)
- Tabbed, filterable, paginated lists with links into the standard console resource views
- Velero (operator) tabs when RBAC allows: Backups, Restores, Schedules, backup/volume snapshot locations, and Download requests (`DownloadRequest`) — create pre-signed download URLs from the tab or from Download Actions on backup/restore rows
- Self-service (OADP) tabs for `NonAdminBackup` / `NonAdminRestore` / `NonAdminBackupStorageLocation` / `NonAdminDownloadRequest` when `oadp.openshift.io/v1alpha1` is available and your user can list those resources (separate from the direct `velero.io` tabs)
- Create actions in the page header for the active tab only (when permitted); they open the built-in YAML editor
The plugin does not deploy Velero or OADP; it expects those CRDs and APIs to exist. What each user sees is determined entirely by Kubernetes/OpenShift RBAC (and by what the API returns). The UI does not grant or deny access beyond hiding controls the user cannot use; it is a read-oriented view of existing resources.
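As an illustration of the kind of resource the plugin surfaces, here is a minimal Velero Backup manifest. This is a sketch only — the `openshift-adp` namespace, the `my-app` target namespace, and the `default` storage location name are assumptions; adjust them to your OADP installation.

```yaml
# Hypothetical example — adjust namespace, included namespaces, and
# storage location to match your OADP installation.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nightly-apps
  namespace: openshift-adp
spec:
  includedNamespaces:
    - my-app
  storageLocation: default
  ttl: 720h0m0s
```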
Overview (Cluster Admin View) — KPI tiles, outcome donuts, backup/restore timeline:

Overview (Project Admin View) — KPIs, donuts:

Resource lists (Cluster Admin View) — Example Backups: tabbed tables, filters, pagination, and optional actions:

Resource lists (Project Admin View) — Example Backups: tabbed tables, filters, pagination, and optional actions:

Download requests (Project Admin View) — List of requested downloads with URLs.
This plugin started as a simple experiment and grew over time. I am not a professional React developer; I wanted to see what was possible. Parts of this repository—including the UI and CSS—were drafted, verified, or refined with the help of AI coding assistants. That worked well for a quick first draft that could then be extended. All changes should still be reviewed, tested, and validated by humans before you rely on them in production.
| Goal | Start here |
|---|---|
| Understand what ships in the repo | This README, then package.json → consolePlugin and console-extensions.json |
| Build UI assets locally | CONTRIBUTING (npm run build); container image under How to install |
| Run on an OpenShift cluster | How to install |
| Upgrade, versions, rollback, day-2 operations | Lifecycle management |
| Permissions, self-service vs Velero tabs, CLI checks | RBAC and user experience |
| Change code | CONTRIBUTING, src/components/OadpOverviewPage.tsx |
| Ask for help / report bugs | SUPPORT (best effort) |
| Legal terms | LICENSE (Apache-2.0) |
| Security reports | SECURITY |
| Release history | CHANGELOG |
Console compatibility: Built against the OpenShift 4.21 dynamic plugin SDK line (@openshift-console/dynamic-plugin-sdk). Align PatternFly and SDK versions with your cluster per Red Hat: Dynamic plugins.
Prerequisites:
- OpenShift with the Console Operator and dynamic plugins enabled
- `oc` / Helm 3 and permission to create namespaces, `ConsolePlugin` resources, and patch `consoles.operator.openshift.io cluster`
- Velero CRDs available (typically via OADP)
- A container registry your cluster can pull from (unless you use only the internal OpenShift registry)
Check whether you can pull the published image from quay.io/tjungbau/oadp-console-plugin. You can mirror it to your own registry if needed. Alternatively, build the image yourself (see the next section).
Clone this repository. From the repository root, set PLUGIN_IMAGE to the tag your cluster should pull (your registry and path).
```shell
# Use a tag your cluster can pull (internal registry or external).
export PLUGIN_IMAGE="${PLUGIN_IMAGE:-image-registry.openshift-image-registry.svc:5000/$(oc project -q)/oadp-console-plugin:latest}"
# Most clusters use linux/amd64. Pin the platform when building on Apple Silicon.
podman build --platform linux/amd64 -t "${PLUGIN_IMAGE}" .
podman push "${PLUGIN_IMAGE}"
```

The Dockerfile uses a multi-stage build (Node.js → nginx). See Troubleshooting: Exec format error below if pods fail to start.
The chart under charts/openshift-console-plugin deploys the plugin Service (TLS via serving cert), Deployment, ConsolePlugin, RBAC, and optionally a Job that adds the plugin name to the cluster console spec.plugins list.
```shell
helm upgrade --install oadp-console-plugin ./charts/openshift-console-plugin \
  --namespace oadp-console-plugin \
  --create-namespace \
  --set plugin.image="${PLUGIN_IMAGE}"
```

Optional values: deploy/values.example.yaml.
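Instead of repeating `--set` flags, the overrides can live in a values file. The sketch below is an assumption: only `plugin.image` and `plugin.name` are confirmed by this README — check deploy/values.example.yaml for the chart's real key names.

```yaml
# my-values.yaml — hypothetical sketch; verify key names against
# deploy/values.example.yaml before use.
plugin:
  name: oadp-console-plugin
  image: quay.io/tjungbau/oadp-console-plugin:latest
```

It would then be passed with `helm upgrade --install oadp-console-plugin ./charts/openshift-console-plugin -n oadp-console-plugin --create-namespace -f my-values.yaml`.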
```shell
oc -n oadp-console-plugin get pods,consoleplugin
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'
```

Hard-refresh the Administrator console. Open Administration → Backup / Restore (the nav link targets the current project’s path, e.g. **/k8s/ns/<project>/oadp**).
- Remove the plugin from the console operator (do not drop other plugin names):

```shell
oc patch consoles.operator.openshift.io cluster --type=merge --patch "$(
  oc get consoles.operator.openshift.io cluster -o json \
    | jq -c '{spec:{plugins: [(.spec.plugins // [])[] | select(. != "oadp-console-plugin")]}}'
)"
```

- Remove the Helm release:

```shell
helm uninstall oadp-console-plugin -n oadp-console-plugin
```

- Optionally delete the namespace if unused. By default the namespace is `oadp-console-plugin`.
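The removal filter used in the patch can be exercised locally against a sample console spec before touching the cluster; only `jq` is required. The plugin names other than `oadp-console-plugin` are made up for the example.

```shell
# Sample console spec with several plugins enabled (hypothetical names).
sample='{"spec":{"plugins":["monitoring-plugin","oadp-console-plugin","logging-view-plugin"]}}'

# Same filter as the uninstall patch: drop only oadp-console-plugin, keep the rest.
echo "$sample" \
  | jq -c '{spec:{plugins: [(.spec.plugins // [])[] | select(. != "oadp-console-plugin")]}}'
# → {"spec":{"plugins":["monitoring-plugin","logging-view-plugin"]}}
```

The `// []` fallback also makes the filter safe on clusters where `.spec.plugins` is unset.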
Keep these in sync when you cut a release or publish a chart:
| Artifact | What to bump |
|---|---|
| `VERSION` (repo root) | Source of truth for CI image tags (`X.Y.Z`). GitHub Actions pushes `quay.io/...:$VERSION`, `...:latest`, and `...:sha-<short>` on the default branch. |
| `package.json` `version` | User-facing plugin / npm semver (also appears in `plugin-manifest.json`) — keep equal to `VERSION`. |
| `package.json` `consolePlugin.version` | Same as `VERSION` / `package.json` `version`. |
| `charts/.../Chart.yaml` `version` | Helm chart package version (often bumped with releases). |
| `charts/.../Chart.yaml` `appVersion` | Should match the plugin version users expect (same as `VERSION`). |
| Container image tag | Immutable `:$VERSION` per release; `:latest` moves when CI runs (see below). |
| CHANGELOG | Move `[Unreleased]` items into a dated section when you tag. |
The workflow `.github/workflows/container-image.yml` only runs on pushes to main or master that change the `VERSION` file (so unrelated commits do not rebuild or overwrite semver tags on Quay). It reads `VERSION` (must be `MAJOR.MINOR.PATCH`) and pushes `:$VERSION`, `:latest`, and `:sha-<short>`. After each push it signs the image digest with Cosign (keyless) using GitHub OIDC (no signing key in the repo). Pull requests still run a build-only job (no Quay push, no sign). Actions → Run workflow runs a full push when you need a rebuild without editing `VERSION` in that commit (use sparingly).
Verify locally after installing Cosign. Replace <namespace> with your Quay namespace and tighten the identity regexp to your GitHub owner/repo if you like:
```shell
cosign verify "quay.io/<namespace>/oadp-console-plugin:1.0.0" \
  --certificate-identity-regexp '^https://github.com/.+/.+/.github/workflows/container-image\.yml@' \
  --certificate-oidc-issuer-regexp '^https://token.actions.githubusercontent.com$'
```

Signing is bound to the image digest; verifying by tag checks the digest that tag currently points to.
Release checklist: edit `VERSION`; align `package.json` `version`, `consolePlugin.version`, and `Chart.yaml`; update CHANGELOG; then push to the default branch in a commit that includes the `VERSION` change. Optionally `git tag vX.Y.Z && git push origin vX.Y.Z` for Git history (tags alone do not trigger the workflow).
- Optional: Build and push a new image; use a new tag or digest so clusters pull fresh layers.
- Or check whether a newer console plugin image is available at quay.io/tjungbau/oadp-console-plugin.
- Helm upgrade with the new image:
```shell
helm upgrade oadp-console-plugin ./charts/openshift-console-plugin \
  -n oadp-console-plugin \
  --set plugin.image="<registry>/<path>/oadp-console-plugin:<new-tag>"
```

- Rollout restart if the image reference did not change digest (e.g. you reused `:latest` while iterating). The Deployment name matches `plugin.name` in the chart (default `oadp-console-plugin`):

```shell
oc rollout restart deployment/oadp-console-plugin -n oadp-console-plugin
```

- Browser — wait until the pods have restarted, then hard-refresh the console or use a private window so the browser loads the new hashed JS chunks.
- Helm:

```shell
helm rollback oadp-console-plugin <revision> -n oadp-console-plugin
```

Use `helm history` to pick a revision, or `helm upgrade` with the previous image tag.
The plugin is expected to keep working across OpenShift upgrades, but console or SDK expectations can change. Run a few checks when the cluster minor version changes (e.g. 4.21 → 4.22):
- Re-read the release notes in Red Hat's OpenShift dynamic plugins documentation; bump `@openshift-console/dynamic-plugin-sdk` as needed.
- Rebuild the image and redeploy before or right after the console upgrade, then smoke-test Backup / Restore and manifest loading (`/api/plugins/oadp-console-plugin/`).
Breaking UI or API changes should be called out in CHANGELOG.
Note: If something fails after an upgrade, open an issue in this project’s tracker (or your fork’s).
The container CPU architecture must match the nodes (e.g. `linux/amd64` vs `linux/arm64`). Rebuild with the correct `--platform`, use `imagePullPolicy: Always` when iterating on `:latest`, and see the Dockerfile comments for pinning both build stages.
Common causes:
- `@console/pluginAPI` range in the manifest vs your console build
- Stale assets — rebuild the plugin, rebuild/push the image, `helm upgrade`, rollout restart
- Missing manifest in the image — `oc exec -n oadp-console-plugin deploy/oadp-console-plugin -- cat /usr/share/nginx/html/plugin-manifest.json`
- NetworkPolicy blocking console pods from the plugin Service on 9443/TCP
- Browser cache — private window / different browser
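If a NetworkPolicy turns out to be blocking the console, a sketch like the following admits console traffic to the plugin pods on 9443/TCP. The pod label and the console-namespace selector are assumptions — match them to the labels your chart actually sets.

```yaml
# Hypothetical policy — verify the pod labels the chart applies
# before relying on this selector.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-console-to-plugin
  namespace: oadp-console-plugin
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: oadp-console-plugin
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-console
      ports:
        - protocol: TCP
          port: 9443
```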
If the patch Job is disabled, enable the plugin manually:
```shell
oc patch consoles.operator.openshift.io cluster --type=merge --patch "$(
  oc get consoles.operator.openshift.io cluster -o json \
    | jq -c '{spec:{plugins: ((.spec.plugins // []) + ["oadp-console-plugin"] | unique)}}'
)"
```

The plugin talks to the API as the logged-in user. There are two parallel API families:
| Area | API group | Typical use | Tabs / UI |
|---|---|---|---|
| Velero (admin) | `velero.io` | Direct `Backup`, `Restore`, `Schedule`, `BackupStorageLocation`, `VolumeSnapshotLocation`, `DownloadRequest` | Backups, Restores, Schedules, Backup storage locations, Volume snapshot locations, Download requests (plus Download Actions on backup/restore rows when you can create `downloadrequests`) |
| OADP self-service | `oadp.openshift.io` | `NonAdmin*` CRs that map to Velero objects via the operator | Self-service backups / restores / storage, Self-service download requests |
Permissions are evaluated with SelfSubjectAccessReview (the same mechanism as oc auth can-i) and with live list watches scoped to the selected project (or all namespaces when the masthead is All projects).
- Use the console Project selector. With All projects, the plugin uses cluster-wide listing where applicable; many users must pick one project to see namespaced lists and self-service resources clearly.
- List watches include `/namespaces/<ns>/...` only when a single project is selected; otherwise listing would hit cluster-wide URLs and often return 403 for project-scoped roles.
The six direct Velero tabs (Backups, Restores, Schedules, Backup storage locations, Volume snapshot locations, and Download requests) and their overview KPIs, charts, and timeline are shown only if `create` on `namespaces` is allowed cluster-wide (an SSAR with no namespace, like `oc auth can-i create namespaces`). Users without that permission see only the NonAdmin (self-service) tabs and overview content for resources they can list, even if they have broad `velero.io` rights in a project. For example, an OpenShift project admin usually cannot create namespaces cluster-wide, so they still get the self-service-only experience here.
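The namespace gate corresponds to a cluster-scoped SelfSubjectAccessReview — essentially what `oc auth can-i create namespaces` submits under the hood:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    verb: create
    resource: namespaces
    # no namespace field — the check is cluster-scoped
```

The API server fills `status.allowed` in the response, which is what the plugin (and `oc auth can-i`) reports.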
Each Velero tab (when the namespace gate above passes) is shown only if the user can both:
- `list` that resource (in the current SSAR scope — project or cluster), and
- `create` that same resource type (e.g. the Backups tab requires `create` on `backups.velero.io`, not only on `restores`).
So a user who may create only Restores does not get the Backups or Schedules tabs just because they can list them. Operators who should see every Velero tab need create (and usually list) on each type they need in that namespace (or cluster-wide rules where used).
Important: the project admin / edit role often includes `velero.io` create in the namespace (added automatically by OpenShift OLM), but this plugin still hides the direct Velero tabs unless the user can create namespaces cluster-wide. For day-to-day project work, use the NonAdmin self-service CRs and tabs.
Self-service tabs are shown when the user can list the corresponding oadp.openshift.io resource. Create buttons on those tabs follow create on that resource.
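A namespaced Role granting that self-service access might look like the sketch below. The plural resource names other than `nonadminbackups` and `nonadmindownloadrequests` are assumptions — verify them against the installed CRDs (`oc api-resources --api-group=oadp.openshift.io`).

```yaml
# Sketch of a namespaced Role for self-service users; bind it with a
# RoleBinding in the project. Verify the exact plural resource names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oadp-self-service
  namespace: my-project
rules:
  - apiGroups: ["oadp.openshift.io"]
    resources:
      - nonadminbackups
      - nonadminrestores
      - nonadminbackupstoragelocations
      - nonadmindownloadrequests
    verbs: ["get", "list", "watch", "create"]
```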
As the user (or with impersonation):
```shell
# Gate for all direct Velero tabs in this plugin (cluster-scoped)
oc auth can-i create namespaces --as=<user>

# Velero — per tab, after the gate above
oc auth can-i list backups.velero.io -n <project> --as=<user>
oc auth can-i create backups.velero.io -n <project> --as=<user>
# repeat for restores, schedules, backupstoragelocations, volumesnapshotlocations, downloadrequests

# Self-service
oc auth can-i list nonadminbackups.oadp.openshift.io -n <project> --as=<user>
oc auth can-i create nonadminbackups.oadp.openshift.io -n <project> --as=<user>

# nonadmindownloadrequests — self-service download tab and row actions
oc auth can-i list nonadmindownloadrequests.oadp.openshift.io -n <project> --as=<user>
oc auth can-i create nonadmindownloadrequests.oadp.openshift.io -n <project> --as=<user>
```

| Goal | Typical approach |
|---|---|
| Full Velero UI in this plugin | The user must pass the cluster create namespaces SSAR and have list + create on each velero.io type they need in scope (per tab). Usually cluster administrators (or equivalent ClusterRoleBindings) satisfy the namespace gate. |
| Self-service only (typical project users) | Grant list / create on oadp.openshift.io NonAdmin* in the project. Without create namespaces cluster-wide, direct Velero tabs do not appear in this plugin regardless of project admin/edit on velero.io. |
| Per-tab Velero (after namespace gate) | Even with create namespaces, each tab still requires list and create for that Velero resource type; the reader-style message applies only to users who pass the namespace gate. |
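For the full-Velero case, the grants involved could be sketched as a single ClusterRole (bound via a ClusterRoleBinding). This is illustrative only — the role name is made up, and you may prefer narrower, per-namespace bindings for the `velero.io` rule.

```yaml
# Illustrative ClusterRole for the full Velero experience in this plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oadp-plugin-velero-admin
rules:
  # Passes the plugin's cluster-wide namespace gate.
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create"]
  # Per-tab list + create on each Velero resource type.
  - apiGroups: ["velero.io"]
    resources:
      - backups
      - restores
      - schedules
      - backupstoragelocations
      - volumesnapshotlocations
      - downloadrequests
    verbs: ["get", "list", "watch", "create"]
```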
If the UI does not match oc auth can-i, hard-refresh the browser and confirm the cluster is serving a new plugin image after npm run build / image push / rollout (see Troubleshooting).
- OADP Operator — Installs Velero and related components; this plugin is a separate UI layer.
- console-plugin-template — The Helm chart here is derived from that pattern (serving certs, `ConsolePlugin`, optional console patch Job).
- Contributing: CONTRIBUTING
- Support (best effort, no SLA): SUPPORT
- Security: SECURITY
- License: LICENSE — Apache-2.0 (also declared in `package.json`)
