This repo currently contains the Wave 1 to Wave 3 design docs plus the first implemented Set 5 backend foundation, frontend application shell, and first planner-facing trucking workspace for a Docker-first demand-planning runtime.
The following are scaffolded now:
- Docker Compose local development runtime
- Django project bootstrap
- modular Django app package scaffold under `backend/trucking_apps`
- shared abstract model foundation for UUID and timestamp patterns
- foundational master-data models for locations, truck types, SKUs, and reason codes
- initial app migrations for `masters_config` and `audit_governance`
- Postgres wiring
- built-in Django admin route at `/admin/`
- narrow Django admin registration for the foundational master-data entities
- Next.js plus Tailwind application shell foundation under `frontend/`
- custom trucking permission declarations and baseline role-group bootstrap command
- canonical `planning_batch` and `planning_input_snapshot` header persistence in `planning_inputs`
- canonical `plan_run` and `plan_version` header persistence in `truck_planning`
- planning-operation persistence, admin visibility, and placeholder worker orchestration seam
- bounded JSON service/API seams for plan summary, route detail, operation status, override, clear, submit, approve, and reopen
- compact demand-planning shell with left sidebar, thin header, main workspace stage, right context rail, and generic module routes
- trucking mounted as the first live module inside the shell with route-aware breadcrumb context and a bounded workspace container
- trucking draft-review workspace bound to real run/version/snapshot headers and latest operation status where available
- planner control invocation wiring for truck override, non-critical clear, submit, approve, and reopen against the bounded backend seams
- local dev trucking workspace bootstrap command and frontend-to-backend dev proxy bridge for the Docker stack
- deterministic local trucking seed loader, seed-context inspection command, and compact dev-session switcher for planner/approver/viewer/admin E2E review
- health endpoint at `/healthz/`
- worker runtime loop wired to placeholder planning-operation processing
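The placeholder planning-operation seam above (persisted operations, idempotent enqueueing, a worker loop that processes them) can be sketched as a minimal in-memory model. Everything here — class names, status strings, and the dedupe-by-key behavior — is illustrative; the repo's real seam persists operations in Postgres behind its management commands.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class PlanningOperation:
    """Hypothetical operation header; field names are illustrative only."""
    operation_type: str
    requested_by: str
    idempotency_key: str
    status: str = "queued"
    operation_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class PlanningOperationQueue:
    """In-memory sketch of the enqueue + worker-iteration seam."""

    def __init__(self) -> None:
        self._by_key: dict[str, PlanningOperation] = {}
        self._pending: list[PlanningOperation] = []

    def enqueue(self, operation_type: str, requested_by: str,
                idempotency_key: str) -> PlanningOperation:
        # Re-submitting the same idempotency key returns the existing
        # operation instead of queuing a duplicate.
        existing = self._by_key.get(idempotency_key)
        if existing is not None:
            return existing
        op = PlanningOperation(operation_type, requested_by, idempotency_key)
        self._by_key[idempotency_key] = op
        self._pending.append(op)
        return op

    def run_once(self, max_operations: int = 1) -> int:
        # One worker iteration: claim up to max_operations and apply the
        # placeholder processing step, which simply marks them succeeded.
        processed = 0
        while self._pending and processed < max_operations:
            op = self._pending.pop(0)
            op.status = "succeeded"
            processed += 1
        return processed
```

Enqueuing the same operation twice with one idempotency key yields a single queued operation, mirroring the intent of an `--idempotency-key` flag; a single `run_once` call then drains it.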
The following are not implemented yet:
- route masters, snapshot detail-line tables, full plan-content tables, and action ledgers
- actual planning jobs and canonical planning API behavior
- lower-level route, truck, stop, pickup, delivery, load, and breach-detail content
- RBAC scope enforcement and planner-facing permission checks
Read `agent.md` first for repo execution rules.
For material work, the required read order is:
- `agent.md`
- `README.md`
- `docs/README.md`
- Wave 1 governance and architecture baseline
- Wave 2 contracts, lifecycle, RBAC, and app-boundary docs
- Wave 3 logical-model, API, orchestration, rollout, and sequencing docs
- only then the runtime or code files relevant to the current chunk
Material work must stop at chunk boundaries with validation and runtime-state reporting.
`F:\trucking` remains read-only source evidence only.
Use the local development stack with the explicit env-qualified compose command:
```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.frontend.yml up -d --build
```

Stop the stack with:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.frontend.yml down
```

View logs with:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml logs app
```

Frontend logs:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.frontend.yml logs frontend
```

- frontend base: http://localhost:3000/
- app base: http://localhost:18080/
- health: http://localhost:18080/healthz/
- admin: http://localhost:18080/admin/
- Postgres host port: localhost:15432
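For a quick local smoke check, the endpoints above can be probed with a short stdlib script. The endpoint map comes straight from the URLs listed here; the helper names (`dev_url`, `smoke_check`) are illustrative assumptions, not part of the repo.

```python
from urllib.error import URLError
from urllib.parse import urljoin
from urllib.request import urlopen

# Endpoint map for the local dev stack, taken from the URLs listed above.
DEV_ENDPOINTS = {
    "frontend": "http://localhost:3000/",
    "app": "http://localhost:18080/",
    "health": "http://localhost:18080/healthz/",
    "admin": "http://localhost:18080/admin/",
}


def dev_url(name: str, path: str = "") -> str:
    """Resolve a path against one of the named dev base URLs."""
    return urljoin(DEV_ENDPOINTS[name], path)


def smoke_check(url: str, timeout: float = 2.0) -> str:
    """Return the HTTP status for url, or 'down' if the stack is not running."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return str(resp.status)
    except (URLError, OSError):
        return "down"
```

With the stack up, `smoke_check(dev_url("health"))` should report 200; with it down, the probe degrades to `"down"` instead of raising.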
The application shell is part of the Docker stack. Do not install frontend dependencies locally and do not run `npm run dev` from the host repo.
Start the stack with:
```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.frontend.yml up -d --build
```

Frontend shell routes now available from the container:

- http://localhost:3000/planning/trucking
- http://localhost:3000/planning/demand
- http://localhost:3000/planning/buffer-health
- http://localhost:3000/planning/rebalancing
- http://localhost:3000/operations/execution
- http://localhost:3000/operations/exceptions
- http://localhost:3000/governance/master-data
- http://localhost:3000/governance/policy
The top header now exposes a compact local Session selector when multiple seeded dev users are configured. Use it to switch between planner, approver, viewer, and admin identities for local end-to-end review.
Create a superuser after the stack is up:
```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py createsuperuser
```

Check compose rendering:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml config
```

Check Django startup inside the container:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py check
```

Apply migrations inside the container:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py migrate
```

Preview RBAC group bootstrap changes:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py bootstrap_trucking_rbac --dry-run
```

Apply RBAC group bootstrap:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py bootstrap_trucking_rbac
```

Bootstrap the bounded local dev trucking workspace:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py ensure_dev_trucking_workspace
```

Seed the deterministic local trucking E2E dataset:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py seed_trucking_e2e
```

Inspect the current seeded run, users, and workflow context:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py show_trucking_seed_context
```

Queue a placeholder planning operation:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py enqueue_planning_operation draft_generation --requested-by admin --idempotency-key local-demo
```

Run one worker iteration on demand:

```shell
docker compose --env-file .env.dev.local -f docker-compose.yml -f docker-compose.dev.yml exec app python manage.py run_planning_worker --once --max-operations 1
```

Scaffolded API surfaces now available:

- GET /api/trucking/workspace/
- GET /api/plan-runs/{run_id}/plan-summary/
- GET /api/plan-runs/{run_id}/working-plan/
- GET /api/plan-versions/{version_id}/routes/{route_plan_id}/
- GET /api/operations/{operation_id}/
- POST /api/plan-runs/{run_id}/truck-availability-overrides/
- POST /api/plan-versions/{version_id}/non-critical-clear/
- POST /api/plan-versions/{version_id}/submit/
- POST /api/plan-versions/{version_id}/approve/
- POST /api/plan-versions/{version_id}/reopen/
These surfaces are still intentionally bounded. The trucking workspace bootstrap now resolves the latest canonical run plus the current planner-facing plan summary and latest operation status. Plan summary resolves real run, snapshot, and version headers. Operation status resolves canonical run and version identity where available. Submit, approve, and reopen terminate into real plan_run and plan_version header transitions. Route, truck, stop, pickup, delivery, load, breach-detail, approval-decision, and action-ledger content remains scaffold-only.
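A hedged sketch of calling these seams from a local script. Only the paths come from this README; the helper names, the payload shape, and any auth the local session layer expects are assumptions, so this shows path construction plus a generic stdlib JSON round-trip rather than the canonical client.

```python
import json
from urllib.request import Request, urlopen

API_BASE = "http://localhost:18080"  # local dev app base from this README


def plan_summary_path(run_id: str) -> str:
    """Path for the planner-facing plan summary of one run."""
    return f"/api/plan-runs/{run_id}/plan-summary/"


def submit_path(version_id: str) -> str:
    """Path for submitting one plan version."""
    return f"/api/plan-versions/{version_id}/submit/"


def get_json(path: str):
    """GET a bounded JSON seam; requires the Docker stack to be up."""
    with urlopen(API_BASE + path, timeout=5) as resp:
        return json.load(resp)


def post_json(path: str, payload: dict):
    """POST to a bounded JSON seam (e.g. submit/approve/reopen)."""
    req = Request(
        API_BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

For example, `get_json(plan_summary_path(run_id))` fetches the summary headers for a seeded run once the stack is running and a valid `run_id` is known from `show_trucking_seed_context`.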
For deterministic seed data, local users, and the repeatable trucking walkthrough, use the `seed_trucking_e2e` and `show_trucking_seed_context` commands above.
Set 5’s bounded chunk sequence is now complete through the optional workflow shell integration. Any next instruction should move into a new set or a separate seed-data / end-to-end enablement task rather than extending Set 5 implicitly.