Open RAN Agent is a design-first bootstrap repository for a 5G SA RAN control and operations architecture centered on CU-CP, CU-UP, DU-high, split 7.2x southbound integration, a native low-PHY / fronthaul runtime boundary, and deterministic operations through bin/ranctl.
## Status
This repository is intentionally architecture-first. It prioritizes system boundaries, OTP application boundaries, failure domains, southbound contracts, and operational workflows before full live protocol stacks or real-time data paths.
Jump to: Why this repo · Architecture · What works today · Getting started · Project layout · Advanced workflows · Roadmap
This project exists to make early Open RAN system work more explicit, testable, and operable.
- Clear ownership boundaries between BEAM-based control/orchestration and native runtime paths.
- Deterministic operator workflows through a single mutable control surface: `bin/ranctl`.
- Inspectable artifacts and evidence for `precheck`, `plan`, `apply`, `verify`, `rollback`, and debug workflows.
- Backend portability through a canonical FAPI-oriented IR rather than backend-specific control logic.
- Honest design posture: assumptions, open questions, and deferred decisions are recorded directly instead of being hidden in ad hoc code.
This is a good fit for contributors interested in RAN architecture, BEAM supervision and fault isolation, operational tooling, and southbound integration boundaries.
Figure source: docs/assets/infographics/architecture-overview.infographic
- Use a Mix umbrella as the repo backbone so BEAM applications share tooling, config, and release conventions.
- Keep BEAM responsible for control, orchestration, state management, and fault isolation.
- Push slot-timed and fronthaul-adjacent work behind a native gateway boundary, starting with a Port-based sidecar.
- Normalize DU-high southbound traffic through a canonical FAPI-oriented IR so local and Aerial-style backends share one contract.
- Treat `bin/ranctl` as the only mutable action entrypoint for operational changes.
- Keep Symphony, Codex, and skill workflows outside hot paths. They may propose or orchestrate actions, but they do not directly own runtime state transitions.
- Design for single DU / single cell / single UE attach-plus-ping first, while keeping extension points for Aerial, cuMAC, and multi-cell work.
- Record assumptions, open questions, and deferred decisions explicitly.
### Selected implementation decisions
- Build structure: Mix umbrella with selective Erlang modules inside apps and native sidecars for RT-sensitive work
- Language split: Elixir is the default for app boundaries, supervision, config, and ops layers; Erlang is reserved for protocol-heavy modules where it later proves advantageous
- BEAM versus native boundary: `ran_du_high` talks to `ran_fapi_core`; `fapi_rt_gateway` handles backend transport and timing-sensitive bridging
- Canonical southbound contract: `slot_batch`-oriented IR with backend capability negotiation and explicit health states
- Scheduler abstraction: `ran_scheduler_host` owns the scheduler boundary; `cpu_scheduler` is the default implementation and `cumac_scheduler` remains a future adapter
- Failure domains: isolate `association`, `ue subtree`, `cell_group`, and `backend gateway`
- Automation model: Symphony and Codex orchestrate skills; skills stay thin wrappers around `bin/ranctl`; MCP is out of scope
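The `slot_batch` IR with capability negotiation can be pictured roughly as follows. This is an illustrative Python model only, not the repo's actual Elixir contract; field names such as `max_batch_slots` and the health-state set are assumptions for the sketch:

```python
from dataclasses import dataclass, field

# Hypothetical health states a southbound backend can report.
HEALTH_STATES = {"ready", "degraded", "down"}

@dataclass
class BackendCapabilities:
    # Advertised by each backend during negotiation (names are illustrative).
    name: str
    max_batch_slots: int
    supports_rollback: bool = True

@dataclass
class SlotBatch:
    # Canonical FAPI-oriented IR unit: a batch of slot-timed messages.
    slots: list = field(default_factory=list)

def negotiate(required_slots: int, backends: list[BackendCapabilities]) -> BackendCapabilities:
    """Pick the first backend whose capabilities cover the requested batch size."""
    for b in backends:
        if b.max_batch_slots >= required_slots:
            return b
    raise RuntimeError("no backend satisfies the slot_batch requirement")
```

In this picture, swapping a local stub for an Aerial-style backend is purely a capability question; the IR itself does not change.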
| Area | Status | Notes |
|---|---|---|
| Architecture docs and ADRs | ✅ | System overview and design decisions are first-class repository content |
| `ranctl` control lifecycle | ✅ | `precheck -> plan -> apply -> verify -> rollback` with file-backed outputs |
| Dashboard / Deploy Studio | ✅ | Local UI for topology preview, actions, readiness, and evidence |
| OAI DU runtime bridge | ✅ | Real OpenAirInterface DU orchestration via generated Docker Compose assets |
| Bootstrap packaging | ✅ | Source-first bundle generation and stricter topology validation |
| Target-host deploy chain | ✅ | Ship, preflight, remote ranctl, and evidence fetch workflows |
| Native runtime boundary | 🚧 | Contracts and placeholders exist; real RT backends are still staged |
| End-to-end live protocol stacks | 🚧 | Deliberately deferred at this phase |
The current phase does not aim to deliver a complete production RAN stack. The following remain explicitly deferred for now:
- live ASN.1 codecs
- SCTP and GTP-U runtime stacks
- real eCPRI or O-RAN FH transport
- real local DU-low implementation
- real NVIDIA Aerial integration
- real cuMAC integration
- production Symphony hooks
| Goal | Start here |
|---|---|
| Understand the full system boundary | docs/architecture/00-system-overview.md |
| Understand the mutable action model | docs/architecture/05-ranctl-action-model.md |
| Understand the OAI runtime bridge | docs/architecture/09-oai-du-runtime-bridge.md |
| Understand target-host deployment | docs/architecture/12-target-host-deployment.md |
| Understand ops profiles | docs/architecture/13-ocudu-inspired-ops-profiles.md |
| Understand debugging and evidence | docs/architecture/14-debug-and-evidence-workflow.md |
Read ADRs in order under docs/adr. Use AGENTS.md for persistent repository rules.
Figure source: docs/assets/infographics/ranctl-lifecycle.infographic
A minimal example:

```sh
bin/ranctl plan --file examples/ranctl/precheck-switch-local.json
```

Useful first commands:
```sh
# local design / contract checks
mix contract_ci

# open the local operator UI
bin/ran-dashboard

# exercise the control lifecycle
bin/ranctl precheck --file examples/ranctl/precheck-oai-du-docker.json
bin/ranctl plan --file examples/ranctl/apply-oai-du-docker.json

# exercise the core cutover control surface
bin/ranctl precheck --file examples/ranctl/core/precheck-core-cutover-scp.json
bin/ranctl plan --file examples/ranctl/core/plan-core-cutover-scp.json
```

Core `ranctl` request examples now live under `examples/ranctl/core` for the current NRF and SCP pilot lanes, gated AMF planning, and NRF shadow verification.
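The `precheck -> plan -> apply -> verify -> rollback` ordering used throughout this README can be modeled as a small state machine. This is an illustrative Python sketch, not `ranctl`'s implementation; the `ALLOWED_AFTER` rule table is a hypothetical simplification:

```python
# Allowed predecessor actions for each ranctl action (hypothetical rule table).
ALLOWED_AFTER = {
    "precheck": {None},
    "plan": {"precheck"},
    "apply": {"plan"},
    "verify": {"apply"},
    "rollback": {"apply", "verify"},
}

def run_lifecycle(actions):
    """Validate that a sequence of ranctl actions respects the lifecycle order."""
    last = None
    for action in actions:
        if last not in ALLOWED_AFTER[action]:
            raise ValueError(f"{action!r} is not allowed after {last!r}")
        last = action
    return last
```

The point of the model: every mutable step has a required predecessor, so each stage can leave a file-backed artifact the next stage checks.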
- `ranctl` lifecycle, approval handling, and config-aware prechecks
- `ran_du_high -> ran_scheduler_host -> ran_fapi_core -> stub backend`
- controlled failover policy based on configured `backend` and `failover_targets`
- reusable switch/rollback integration harness in `ran_test_support`
- OAI DU runtime orchestration through generated Docker Compose assets and mocked Docker lifecycle checks
- thin skill wrapper scripts under `ops/skills/*/scripts/run.sh`
- native boundary placeholders such as `native/fapi_rt_gateway/PORT_PROTOCOL.md`
| Path | Purpose |
|---|---|
| `bin/` | Operator entrypoints such as `ranctl`, `ran-dashboard`, `ran-install`, and remote deploy helpers |
| `apps/` | BEAM umbrella applications for core control, CU/DU layers, config, observability, and test support |
| `native/` | RT-sensitive gateway and backend adapter boundaries |
| `docs/architecture/` | System walkthroughs and design documents |
| `docs/assets/` | README figures, logo assets, infographic source, and preview render assets |
| `docs/adr/` | Architectural decision records |
| `config/` | Runtime config, environment profiles, and example topologies |
| `ops/` | Deploy scripts, skills, and Symphony-facing integration assets |
| `scripts/` | Regeneration helpers such as README figure export |
| `subprojects/` | Design-first side workspaces such as the clean-room `elixir_core/` 5GC exploration track |
| `examples/` | Example `ranctl` requests, incidents, and bootstrap references |
| `AGENTS.md` | Persistent repository rules |
Full proposed tree
```
.
|-- AGENTS.md
|-- README.md
|-- bin/
| |-- ran-debug-latest
| |-- ran-install
| |-- ranctl
| |-- ran-dashboard
| |-- ran-deploy-wizard
| |-- ran-fetch-remote-artifacts
| |-- ran-ship-bundle
| |-- ran-remote-ranctl
| `-- ran-host-preflight
|-- config/
| |-- config.exs
| |-- runtime.exs
| |-- dev/
| | |-- README.md
| | `-- single_cell_local.exs.example
| |-- lab/
| | |-- README.md
| | `-- single_cell_stub.exs.example
| `-- prod/
| |-- README.md
| `-- controlled_failover.exs.example
|-- docs/
| |-- assets/
| |-- adr/
| `-- architecture/
|-- apps/
| |-- ran_core/
| |-- ran_cu_cp/
| |-- ran_cu_up/
| |-- ran_du_high/
| |-- ran_fapi_core/
| |-- ran_scheduler_host/
| |-- ran_action_gateway/
| |-- ran_observability/
| |-- ran_config/
| `-- ran_test_support/
|-- native/
| |-- fapi_rt_gateway/
| |-- local_du_low_adapter/
| `-- aerial_adapter/
|-- ops/
| |-- deploy/
| |-- skills/
| `-- symphony/
|-- scripts/
`-- examples/
    |-- incidents/
    `-- ranctl/
```
Figure source: docs/assets/infographics/target-host-deploy.infographic
## Dashboard and Deploy Studio
`bin/ran-dashboard` starts a Symphony-style local dashboard for the repo's live RAN and agent surface.

- `http://127.0.0.1:4050/` serves the UI
- `http://127.0.0.1:4050/api/dashboard` returns the unified snapshot JSON
- `http://127.0.0.1:4050/api/health` returns the server health probe
- `http://127.0.0.1:4050/api/actions/run` accepts dashboard-triggered `ranctl` actions
- `http://127.0.0.1:4050/api/deploy/defaults` returns safe repo-local deploy defaults
- `http://127.0.0.1:4050/api/deploy/run` drives Deploy Studio preview and preflight runs
The dashboard pulls together:
- configured cell groups and backend policy from `ran_config`
- live Docker runtime state for OAI, DU split, UE, FlexRIC, xApps, and support services
- recent `plan`/`apply`/`verify`/`rollback`/`capture-artifacts` outputs from `artifacts/*`
- available operator skills from `ops/skills/*`
- target-host deploy preview state, rendered topology/request/env files, and preflight output
- deploy profile selection plus exported `deploy.profile.json` and `deploy.effective.json`
- exported `deploy.readiness.json` with rollout score, blockers, warnings, and recommendation
- remote handoff commands for `scp`/`ssh`/install/preflight
- recent remote host transcripts and fetched evidence under `artifacts/remote_runs/*`
- latest failed deploy or remote run with debug-pack pointers
The dashboard can trigger a subset of ranctl commands directly from the UI:
`observe`, `precheck`, `plan`, `apply`, `rollback`, and `capture-artifacts`
Deploy Studio:

- stages target-host files into `artifacts/deploy_preview/*` by default
- previews rendered files before touching `/etc/open-ran-agent`
- runs the same preflight path as `bin/ran-deploy-wizard`
- exports `deploy.profile.json` and `deploy.effective.json`
- computes `deploy.readiness.json`
- generates remote handoff commands once `target_host` is set
- surfaces the latest remote `ranctl` transcripts plus fetched evidence bundles

It also exposes a Debug Desk view of the latest failed install or remote run and the corresponding `debug-summary.txt` and `debug-pack.txt` artifacts.
## Easy install and debug quickstart
`bin/ran-install` is the shortest deploy entrypoint.

```sh
bin/ran-debug-latest --failures-only
bin/ran-install
bin/ran-install --target-host ran-lab-01
bin/ran-install --target-host ran-lab-01 --apply --remote-precheck
```

The command will:
- reuse the latest packaged bundle or build one if none exists
- generate safe preview files through `bin/ran-deploy-wizard`
- export `deploy.profile.json`, `deploy.effective.json`, and `deploy.readiness.json`
- write quickstart artifacts under `artifacts/deploy_preview/quick_install/*`
- write `debug-summary.txt` and `debug-pack.txt` beside each quick-install, ship, or remote run
- optionally execute remote ship plus remote `ranctl precheck`
- refuse `--apply` unless readiness is cleared or `--force` is set
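The readiness gate on `--apply` amounts to a small predicate over `deploy.readiness.json`. A hedged sketch in Python, assuming the readiness file carries `blockers` and `recommendation` fields (the real schema is defined by the repo, not here):

```python
def gate_apply(readiness: dict, force: bool = False) -> bool:
    """Return True when an --apply run may proceed (illustrative logic only)."""
    blockers = readiness.get("blockers", [])
    if force:
        return True  # --force overrides the readiness gate
    # Without --force, require zero blockers and a positive recommendation.
    return len(blockers) == 0 and readiness.get("recommendation") == "proceed"
```

The design choice worth noting: readiness is computed into an inspectable artifact first, and the gate only reads that artifact, so operators can audit why an apply was refused.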
If an operator only needs the shortest failure-to-evidence path:
```sh
bin/ran-debug-latest --failures-only
bin/ran-install --target-host ran-lab-01
RAN_REMOTE_APPLY=1 bin/ran-remote-ranctl ran-lab-01 precheck ./artifacts/deploy_preview/etc/requests/precheck-target-host.json
```

Read debug evidence in this order:

1. `docs/architecture/14-debug-and-evidence-workflow.md`
2. `debug-pack.txt`
3. `debug-summary.txt`
4. `transcript.log` or `command.log`
5. fetched `result.jsonl` or `fetch/extracted/*`
## CI, packaging, and artifact hygiene
Use the shared local CI contract before pushing changes:
```sh
mix contract_ci
mix runtime_ci
mix ci
npm run docs:build
```

- `mix contract_ci` is the fast design and contract gate
- `mix runtime_ci` runs the tagged runtime smoke path and bootstrap packaging smoke
- `mix ci` runs both
- GitHub Actions mirrors the same split in `.github/workflows/ci.yml`
- `npm run docs:build` builds the VitePress docs site intended for Cloudflare Pages
GitHub Actions also uploads:
- architecture docs and ADR snapshot from the contract job
- `artifacts/releases/ci-smoke/**` plus runtime smoke artifacts from the runtime job
Docs-site deployment is intentionally separate. The recommended production path is Cloudflare Pages Git integration, while GitHub Actions keeps a docs-only validation workflow in `.github/workflows/docs-site.yml`. Use `npm ci && npm run docs:build` as the Pages build command and `docs/.vitepress/dist` as the output directory.
The repo ships a source-first bootstrap bundle for lab-host style distribution:
```sh
mix ran.package_bootstrap
mix package_bootstrap
```

The packaging command writes:

- `artifacts/releases/<bundle_id>/manifest.json`
- `artifacts/releases/<bundle_id>/open_ran_agent-<bundle_id>.tar.gz`
Packaging is stricter than normal bootstrap validation. It rejects topologies that do not declare controlled failover targets for each `cell_group`.
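That stricter check reads roughly like the sketch below, assuming a topology shape where each entry under `cell_groups` may carry a `failover_targets` list (the field layout is hypothetical, chosen for illustration):

```python
def validate_for_packaging(topology: dict) -> list[str]:
    """Collect packaging errors: every cell_group must declare failover targets."""
    errors = []
    for name, group in topology.get("cell_groups", {}).items():
        if not group.get("failover_targets"):
            errors.append(f"cell_group {name!r} has no failover_targets")
    return errors
```

A topology that validates for normal bootstrap use can therefore still be rejected at packaging time, which keeps distributed bundles failover-complete by construction.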
Artifact cleanup is explicit and dry-run first:
```sh
mix ran.prune_artifacts
mix prune_artifacts
mix ran.prune_artifacts --apply
```

The planner keeps recent JSON refs, recent runtime dirs, and recent release bundles, while protecting `artifacts/control_state/*` by default.
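A dry-run-first prune planner of this shape might look like the following sketch; the retention count and the protected-prefix rule are assumptions for illustration, not the task's real defaults:

```python
def plan_prune(artifacts, keep=3, protected_prefix="artifacts/control_state/"):
    """Plan deletions: keep the newest `keep` entries, never touch protected paths.

    `artifacts` is a list of (path, mtime) tuples; returns paths planned for deletion.
    Nothing is removed here -- the caller only deletes when --apply is passed.
    """
    candidates = [a for a in artifacts if not a[0].startswith(protected_prefix)]
    candidates.sort(key=lambda a: a[1], reverse=True)  # newest first
    return [path for path, _ in candidates[keep:]]
```

Separating the plan from the deletion is what makes `--apply` safe to gate: the dry run itself is an inspectable artifact.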
## Topology override and control-state workflows
The repo can load a single-DU lab topology from RAN_TOPOLOGY_FILE before ranctl or the dashboard starts.
```sh
RAN_TOPOLOGY_FILE=config/lab/topology.single_du.rfsim.json bin/ranctl precheck --file examples/ranctl/precheck-oai-du-docker.json
RAN_TOPOLOGY_FILE=config/lab/topology.single_du.rfsim.json bin/ran-dashboard
```

The loaded topology path is surfaced in the dashboard snapshot and validation report.
`ranctl` also supports lightweight attach-freeze and drain coordination through `metadata.control`.
```sh
bin/ranctl plan --file examples/ranctl/apply-freeze-attaches.json
bin/ranctl apply --file examples/ranctl/apply-freeze-attaches.json
bin/ranctl plan --file examples/ranctl/apply-drain-cell-group.json
bin/ranctl apply --file examples/ranctl/apply-drain-cell-group.json
bin/ranctl observe --file examples/ranctl/apply-drain-cell-group.json
bin/ranctl rollback --file examples/ranctl/rollback-drain-cell-group.json
```

`capture-artifacts` writes config and control snapshots alongside the main capture bundle.
## OAI DU runtime bridge
The repo includes an executable bridge from ranctl to a real OpenAirInterface DU runtime:
- runtime spec comes from `metadata.oai_runtime` and optional `cell_group` defaults
- `plan` renders `artifacts/runtime/<change_id>/docker-compose.yml`
- `plan` also renders patched overlay confs under `artifacts/runtime/<change_id>/conf/*.conf`
- `apply` brings up `CUCP + CUUP + DU` in RFsim F1 split mode
- `precheck` validates split markers and required patch points in the source confs
- `verify` inspects container state, captures log tails, and accepts steady-state DU activity for long-running containers
- `rollback` tears the stack down deterministically
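The per-change artifact layout that `plan` produces can be sketched as plain path construction. This Python sketch only mirrors the directory convention described above; the compose and conf contents themselves are generated by the repo and are not modeled here:

```python
from pathlib import Path

def runtime_artifact_paths(change_id: str, conf_names: list[str]) -> dict:
    """Compute where plan writes its rendered runtime assets for one change."""
    root = Path("artifacts/runtime") / change_id
    return {
        "compose": root / "docker-compose.yml",          # rendered Docker Compose
        "confs": [root / "conf" / n for n in conf_names],  # patched overlay confs
    }
```

Keying every rendered asset on `change_id` is what lets `apply`, `verify`, and `rollback` all operate on exactly the files that `plan` produced.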
Example flow:
```sh
bin/ranctl precheck --file examples/ranctl/precheck-oai-du-docker.json
bin/ranctl plan --file examples/ranctl/apply-oai-du-docker.json
bin/ranctl apply --file examples/ranctl/apply-oai-du-docker.json
bin/ranctl verify --file examples/ranctl/verify-oai-du-docker.json
bin/ranctl rollback --file examples/ranctl/rollback-oai-du-docker.json
```

To run against your own OAI conf set, replace the three `*_conf_path` fields in `examples/ranctl/apply-oai-du-docker-template.json` and reuse the same metadata for `precheck`, `plan`, `apply`, and `verify`.
See docs/architecture/09-oai-du-runtime-bridge.md for the current scope and limitations.
## Target-host deploy
The bootstrap bundle carries a target-host install and preflight chain:
- `ops/deploy/install_bundle.sh`
- `ops/deploy/ship_bundle.sh`
- `ops/deploy/run_remote_ranctl.sh`
- `ops/deploy/preflight.sh`
- `bin/ran-deploy-wizard`
- `bin/ran-fetch-remote-artifacts`
- `bin/ran-ship-bundle`
- `bin/ran-remote-ranctl`
- `bin/ran-host-preflight`
- `ops/deploy/systemd/ran-dashboard.service`
- `ops/deploy/systemd/ran-host-preflight.service`
- `config/prod/topology.single_du.target_host.rfsim.json.example`
- `examples/ranctl/precheck-target-host.json.example`
Target-host staging is profile-driven. bin/ran-deploy-wizard and Deploy Studio can render:
- `deploy.profile.json`
- `deploy.effective.json`
Available deploy profiles:
- `stable_ops`
- `troubleshoot`
- `lab_attach`
Typical flow:
```sh
mix ran.package_bootstrap --bundle-id target-host-smoke
./artifacts/releases/target-host-smoke/install_bundle.sh ./artifacts/releases/target-host-smoke/open_ran_agent-target-host-smoke.tar.gz /opt/open-ran-agent
/opt/open-ran-agent/current/bin/ran-deploy-wizard --skip-install
/opt/open-ran-agent/current/bin/ran-host-preflight
```

Or start `bin/ran-dashboard` and use Deploy Studio to generate the same topology, request, and env files into a safe repo-local preview root before moving them to the live host.
For remote handoff from the packaging host:
```sh
bin/ran-deploy-wizard --defaults --safe-preview --skip-install --target-host ran-lab-01
bin/ran-ship-bundle ./artifacts/releases/target-host-smoke/open_ran_agent-target-host-smoke.tar.gz ran-lab-01
RAN_REMOTE_APPLY=1 bin/ran-ship-bundle ./artifacts/releases/target-host-smoke/open_ran_agent-target-host-smoke.tar.gz ran-lab-01
RAN_REMOTE_APPLY=1 bin/ran-remote-ranctl ran-lab-01 precheck ./artifacts/deploy_preview/etc/requests/precheck-target-host.json
RAN_REMOTE_APPLY=1 bin/ran-fetch-remote-artifacts ran-lab-01 ./artifacts/deploy_preview/etc/requests/precheck-target-host.json
```

If `artifacts/deploy_preview/etc` exists, `bin/ran-ship-bundle` syncs the rendered topology, request, and env files to the remote host before running preflight. `bin/ran-remote-ranctl` also auto-fetches matching remote evidence into `artifacts/remote_runs/*/fetch` unless `RAN_REMOTE_FETCH=0` is set, and `bin/ran-fetch-remote-artifacts` can re-sync the same evidence later on demand.
See docs/architecture/12-target-host-deployment.md, ops/deploy/README.md, and docs/architecture/13-ocudu-inspired-ops-profiles.md.
This repository is prepared for public collaboration.
- license: MIT
- contribution guide: CONTRIBUTING.md
- security reporting: SECURITY.md
- only sanitized examples and templates belong in the repo
- private lab configs, generated artifacts, local crash dumps, and operator-specific OAI or srsRAN settings are intentionally ignored
If you maintain local lab files such as `OAI_config_WE_flexric.conf`, `srsran_config.yml`, or private OAI UE and gNB configs, keep them outside git or under ignored local-only filenames.
- architecture documentation
- repo skeleton
- initial BEAM app boundaries
- canonical interfaces and stub modules
- operations workflow skeleton
- config examples
- backlog definition
- executable contract-only `ranctl` flow with file-backed plan, state, verify, and capture outputs
- end-to-end `stub_fapi_profile` path for boundary validation
- live ASN.1 codecs
- SCTP and GTP-U runtime stacks
- real eCPRI or O-RAN FH transport
- real local DU-low implementation
- real NVIDIA Aerial integration
- real cuMAC integration
- production Symphony hooks
- SA-only deployment is sufficient for the MVP.
- One DU, one cell group, and one UE path are enough to shape the initial contracts.
- RU-side low-PHY exists outside the BEAM core.
- Aerial integration can be represented through backend capabilities and profile selection without assuming internal Aerial implementation details.
- Fill in real app internals behind the current behaviours and structs.
- Harden `bin/ranctl` from a bootstrap executor toward a release-aware runtime entrypoint.
- Replace contract-only backend paths with real gateway-backed session paths.
- Extend integration tests for backend switching, rollback, artifact capture, and target-host flows.
- Ship compiled release and container packaging for target hosts.
Released under the MIT License.