Merged
Features:
- Real-time metrics dashboard with KPI cards (uptime, RPS, error rate)
- Tool usage analytics with Chart.js visualizations (bar, pie, timeline)
- Per-tool latency statistics (p50, p95, p99)
- Audit logging with rotation and export (JSON/CSV)
- Optional basic authentication
- WebSocket for live updates with HTTP polling fallback

Implementation:
- WebUI package: config, metrics, audit, server modules
- FastAPI backend with REST API and WebSocket
- Dark theme frontend dashboard
- CLI flags: --web-ui, --web-ui-port, --web-ui-config
- Environment variable overrides

Testing:
- 87 new tests (55 unit + 6 integration + 26 main tests)
- 96% code coverage
- All quality gates pass (pytest, ruff, mypy)

Documentation:
- webui-setup.md with setup and troubleshooting guide
- P10-T1_Validation_Report.md
- Moved PRD to SPECS/ARCHIVE/P10-T1_Web_UI_Control_and_Audit_Dashboard/
- Moved validation report to archive
- Updated INDEX.md with new archived task
- Updated Archive Log
- Marked task as complete in Workplan.md
- Updated next.md

# Conflicts:
#   SPECS/INPROGRESS/next.md
Overall Assessment: PASSED

Strengths:
- Clean architecture with well-separated concerns
- Comprehensive testing (87 tests, 96% coverage)
- Complete documentation
- Security considerations implemented

Minor Observations:
- WebUI module coverage at 84.8% (acceptable for server components)
- Chart.js from CDN (acceptable for initial release)
- WebSocket auth uses query param (acceptable for localhost)

Verdict: No follow-up required. Implementation complete and ready for release.
- Fixed in_flight tracking bug in metrics.py (pop from _in_flight when request_id provided)
- Fixed linting issues in test files (imports, whitespace, unused variables)
- Fixed formatting in modified files
- All quality gates now pass:
  * pytest: 289 passed, 96% coverage
  * ruff: All checks passed
  * mypy: No issues found
  * build: Successfully built package
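The in_flight fix can be sketched like this. The class and method names below are hypothetical (the real bookkeeping lives inside `metrics.py`); only the pop-by-id behaviour mirrors the change:

```python
import time
from typing import Optional


class InFlightTracker:
    """Sketch of in-flight request accounting (hypothetical names; the
    real bookkeeping lives inside metrics.py)."""

    def __init__(self) -> None:
        self._in_flight: dict[str, float] = {}  # request_id -> start time

    def request_started(self, request_id: str) -> None:
        self._in_flight[request_id] = time.monotonic()

    def request_finished(self, request_id: Optional[str] = None) -> None:
        # The fix: when a request_id is provided, pop that exact entry
        # rather than discarding an arbitrary one.
        if request_id is not None:
            self._in_flight.pop(request_id, None)
        elif self._in_flight:
            self._in_flight.popitem()

    @property
    def in_flight(self) -> int:
        return len(self._in_flight)
```

Popping by id keeps the gauge accurate even when responses arrive out of order; an unknown id is simply a no-op.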
New targets:
- install-webui: Install package with Web UI dependencies
- test-webui: Run Web UI specific tests with coverage

Updated help text to show all available targets.
Added section 'Adding New Features' documenting:
- How to add new make targets for features
- How to add optional dependencies in pyproject.toml
- Examples for install-feature and test-feature patterns
New targets:
- make webui: Start wrapper with Web UI dashboard on port 8080
- make webui-health: Check Web UI health and display current metrics

Updated .PHONY and help text accordingly.
README.md:
- Added Web UI Dashboard section with features overview
- Added link to webui-setup.md in Documentation section

AGENTS.md:
- Added Phase 10: Web UI Dashboard to project status (68/68 tasks)
- Added Web UI Dashboard section after Configuration
- Updated project structure to include webui package
- Updated docs folder structure
- Added Web UI tests and make commands to Testing section

docs/webui-setup.md:
- Added 'Using Make Commands' section with install-webui, webui, webui-health, test-webui

All quality gates pass:
- pytest: 289 passed, 96% coverage
- ruff: All checks passed
- mypy: No issues found
The _extract_tool_name function was not extracting tool names from the MCP tools/call format. MCP tool calls carry the tool name in params.name, not in method or result.name.

Fixed:
- Updated _extract_tool_name to check params.name first (MCP tools/call format)
- Filter out 'initialize' and 'tools/list' from params.name
- Added comprehensive tests for the new extraction logic

This fixes the issue where the Web UI dashboard showed 'Connected' but no metrics/audit data was captured when MCP tools were called.

All quality gates pass:
- pytest: 293 passed, 96.1% coverage
- ruff: All checks passed
- mypy: No issues found
This prevents future format misinterpretation bugs by using strong typing.

Changes:
- Added src/mcpbridge_wrapper/schemas.py with Pydantic models:
  * MCPParams: Tool call parameters
  * MCPRequest: JSON-RPC request with get_tool_name() method
  * MCPResponse: JSON-RPC response with get_tool_name() and has_error()
  * MCPError: Error container
  * parse_mcp_message(): Helper function
- Updated __main__.py to use schema validation:
  * _extract_tool_name(): Now uses MCPRequest/MCPResponse models
  * _extract_request_id(): Uses MCPRequest model
  * _has_error(): Uses MCPResponse model
- Added pydantic>=2.0.0 to webui dependencies

Benefits:
- Type-safe MCP message parsing
- Clear schema definitions prevent format confusion
- Automatic validation of message structure
- Self-documenting code via Pydantic models

All quality gates pass:
- pytest: 293 passed, 94.3% coverage
- ruff: All checks passed
- mypy: Type checking passes
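A minimal sketch of what such schemas might look like, assuming Pydantic v2. The field sets and defaults here are illustrative; the real models in `schemas.py` also cover responses and errors:

```python
from typing import Any, Optional, Union

from pydantic import BaseModel


class MCPParams(BaseModel):
    """Parameters of an MCP tools/call request."""

    model_config = {"extra": "allow"}  # tolerate fields we do not model

    name: Optional[str] = None
    arguments: Optional[dict[str, Any]] = None


class MCPRequest(BaseModel):
    """One JSON-RPC request frame on the MCP stdio stream."""

    model_config = {"extra": "allow"}

    jsonrpc: str = "2.0"
    id: Optional[Union[str, int]] = None
    method: Optional[str] = None
    params: Optional[MCPParams] = None

    def get_tool_name(self) -> Optional[str]:
        # Only tools/call carries a tool name, and it lives in params.name.
        if self.method == "tools/call" and self.params is not None:
            return self.params.name
        return None


def parse_mcp_message(line: str) -> Optional[MCPRequest]:
    """Validate one stdio line; return None instead of raising on bad input."""
    try:
        return MCPRequest.model_validate_json(line)
    except Exception:
        return None
```

Centralizing the `tools/call` check in `get_tool_name()` is what prevents the "looked in `method`/`result.name`" class of bug from recurring.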
The Web UI dashboard was showing "Connected" but capturing no metrics
or audit logs when MCP tools were called. Tools worked correctly but the
metrics stayed at 0.
Root Cause:
MCP protocol separates requests and responses:
- Request: {"method": "tools/call", "params": {"name": "BuildProject"}, "id": "123"}
- Response: {"result": {...}, "id": "123"}
The old code extracted tool_name from each line independently. On the
response line, tool_name was None (no params.name), so metrics were
never recorded.
Fix:
1. On request: Store (tool_name, start_time) in pending_requests[request_id]
2. On response: Look up tool_name by request_id and record metrics
This ensures correct latency calculation and audit logging for all
tool calls.
Resolves: Web UI dashboard empty despite successful tool calls
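The two-step fix above can be sketched as follows; `RequestCorrelator` and its method names are illustrative, not the wrapper's actual API:

```python
import json
import time
from typing import Optional


class RequestCorrelator:
    """Sketch of the pairing logic: remember each tools/call by its
    JSON-RPC id on the request side, then resolve the tool name and
    latency on the response side."""

    def __init__(self) -> None:
        # request_id -> (tool_name, start_time)
        self.pending: dict[str, tuple[str, float]] = {}

    def on_request(self, line: str) -> None:
        msg = json.loads(line)
        tool = (msg.get("params") or {}).get("name")
        if msg.get("method") == "tools/call" and tool and msg.get("id") is not None:
            self.pending[str(msg["id"])] = (tool, time.monotonic())

    def on_response(self, line: str) -> Optional[tuple[str, float]]:
        msg = json.loads(line)
        entry = self.pending.pop(str(msg.get("id")), None)
        if entry is None:
            return None  # not a tracked tools/call (e.g. initialize)
        tool, started = entry
        # Caller records metrics and audit entries with these values.
        return tool, time.monotonic() - started
```

Because the response frame carries only `id` and `result`, the lookup table is the only way to attribute the response (and its latency) to the right tool.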
- Add SharedMetricsStore with SQLite backend for multi-process metrics
- Fix get_timeseries() to return format expected by frontend:
- {requests: [{t, v}, ...], errors: [...], latencies: [...]}
- t values are seconds ago (integers)
- 5-second bucketing to match frontend Chart.js
- Add request tracking via stdin forwarder callback
- Add comprehensive tests for SharedMetricsStore
Resolves: Web UI dashboard timeseries charts now display data
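The shape `get_timeseries()` returns can be sketched as a pure function. The real store reads its events from the SQLite backend; `bucket_timeseries` and its signature are illustrative:

```python
import time
from collections import defaultdict


def bucket_timeseries(events, now=None, bucket_s=5, window_s=300):
    """Bucket (timestamp, latency_ms, is_error) events into the shape the
    frontend charts expect: lists of {"t": seconds_ago, "v": value}
    points, with t aligned to 5-second buckets."""
    now = time.time() if now is None else now
    counts = defaultdict(int)
    errors = defaultdict(int)
    lat_sum = defaultdict(float)
    for ts, latency_ms, is_error in events:
        ago = now - ts
        if 0 <= ago <= window_s:
            bucket = int(ago // bucket_s) * bucket_s  # align to 5 s grid
            counts[bucket] += 1
            errors[bucket] += int(is_error)
            lat_sum[bucket] += latency_ms
    buckets = sorted(counts)
    return {
        "requests": [{"t": b, "v": counts[b]} for b in buckets],
        "errors": [{"t": b, "v": errors[b]} for b in buckets],
        "latencies": [{"t": b, "v": lat_sum[b] / counts[b]} for b in buckets],
    }
```

Expressing `t` as integer seconds ago (rather than absolute timestamps) lets the Chart.js frontend render the window without any clock-skew handling.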
- Move PRD and validation report to SPECS/ARCHIVE/P10-T2_Fix_Web_UI_Timeseries_Charts/
- Mark task as completed in Workplan.md
Fixed the mypy errors so the code passes project quality gates:
- Deleted the fallback redefinitions that were causing:
- unused ignore comments
- `Cannot assign to a type`
- `no-redef`
- Simplified `parse_mcp_message()` to return the concrete type from
`MCPRequest.model_validate_json(...)` (and `None` on exception), which
satisfies the declared `Optional[MCPRequest]`.
- Removed monkey-patching of `uvicorn.Server` methods (mypy rejects
assigning to methods).
- Reworked `run_server()` to:
- call `on_started()` directly (if provided)
- start the server via `uvicorn.run(...)` instead of instantiating
`uvicorn.Server` and mutating it
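The reworked startup can be sketched like this. The `run` parameter is an illustrative injection point added here so the sketch is testable without a real server; it is not necessarily part of the actual signature:

```python
from typing import Any, Callable, Optional


def run_server(
    app: Any,
    host: str = "127.0.0.1",
    port: int = 8080,
    on_started: Optional[Callable[[], None]] = None,
    run: Optional[Callable[..., None]] = None,
) -> None:
    """Start the dashboard without monkey-patching uvicorn.Server.

    mypy rejects assigning to methods on a uvicorn.Server instance, so we
    invoke the startup callback ourselves and then hand control to
    uvicorn.run(), which blocks until shutdown.
    """
    if run is None:
        import uvicorn  # lazy import keeps webui an optional extra

        run = uvicorn.run
    if on_started is not None:
        on_started()  # fire the callback before blocking in the event loop
    run(app, host=host, port=port)
```

Calling `on_started()` before handing off to `uvicorn.run(...)` avoids mutating `uvicorn.Server` internals entirely, which is what mypy objected to.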
- `mypy src/`: **Success: no issues found**
- `pytest`: **306 passed, 5 skipped**
If you want, I can also run `ruff check src/ tests/` + `ruff format
--check src/ tests/` to fully mirror CI.
Installed the package and its dependencies before running `mypy`, so mypy sees the same dependency/type context as local dev and the test matrix.
- In `.github/workflows/ci.yml` `lint` job:
- Replaced `pip install ruff mypy` with `pip install -e ".[dev]"`
(this brings in your runtime deps like Pydantic, plus dev tools).
- Expanded `ruff check` to include `tests/` (matches your
`CONTRIBUTING.md` guidance).
```XcodeMCPWrapper/.github/workflows/ci.yml#L31-58
- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install -e ".[dev]"

- name: Run ruff linter
  run: ruff check src/ tests/

- name: Run ruff formatter check
  run: ruff format --check src/ tests/

- name: Run mypy type checker
  run: mypy src/
```
Previously, the lint job installed only `ruff` + `mypy`, **not** your
package/dependencies. That can cause Pydantic APIs to be treated as
`Any` by mypy in CI, triggering `[no-any-return]`. Installing `-e
".[dev]"` gives mypy the correct dependency graph and type info.
I also verified locally that `mypy src/` succeeds after this change.
The **`dev` extra does not install `pydantic`**. From your CI log:
- It installs `mcpbridge-wrapper==0.2.0` editable
- It installs `pytest/ruff/mypy` (from `dev`)
- **But it never installs `pydantic`**, because in your `pyproject.toml` it was only listed under the `webui` extra.

Mypy then analyzes code that imports/uses Pydantic APIs (`BaseModel.model_validate_json`). With Pydantic not installed (or not available for analysis in that environment), mypy ends up treating those calls as `Any`, which triggers:
- `Returning Any from function declared to return ... [no-any-return]`

I moved `pydantic` into **core runtime dependencies** so it is always installed (and it should be: your non-webui code uses it in `schemas.py`).

```XcodeMCPWrapper/pyproject.toml#L1-80
[project]
name = "mcpbridge-wrapper"
version = "0.2.0"
...
dependencies = [
    "pydantic>=2.0.0",
]
```

This ensures that both:
- `pip install -e .`
- `pip install -e ".[dev]"`

will include Pydantic, so mypy in CI has the right typing context.

Your wrapper imports Pydantic unconditionally in `src/mcpbridge_wrapper/schemas.py`. That means Pydantic is a **required runtime dependency**, not an optional `webui` dependency.

After this change, CI's `lint` job (which now does `pip install -e ".[dev]"`) will also install `pydantic`, and the mypy errors you showed should stop.

If you want, I can also adjust the `webui` extra to no longer redundantly include `pydantic` (since it's now core), but it's harmless to leave it.
Added a test that covers an edge case in `src/mcpbridge_wrapper/transform.py`: a **valid JSON response line that includes a trailing newline** still gets transformed correctly.
- Updated `XcodeMCPWrapper/tests/unit/test_transform.py` by adding:
  - `TestProcessResponseLine.test_json_line_with_trailing_newline_gets_transformed`

This asserts that `process_response_line()` transforms the JSON and injects `structuredContent` even when the input line ends with `\n` (we *don't* assert the newline is preserved, since the transformer doesn't guarantee output formatting).
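In spirit, the new test looks something like this. The `process_response_line` below is a toy stand-in written for this sketch; the real transformer lives in `transform.py` and its exact output shape differs:

```python
import json


def process_response_line(line: str) -> str:
    """Toy stand-in for the real transformer: json.loads tolerates the
    trailing newline, and we inject structuredContent into the result."""
    data = json.loads(line)
    result = data.get("result")
    if isinstance(result, dict) and "structuredContent" not in result:
        result["structuredContent"] = {"text": result.get("content", "")}
    return json.dumps(data)


def test_json_line_with_trailing_newline_gets_transformed() -> None:
    line = '{"jsonrpc": "2.0", "id": "1", "result": {"content": "ok"}}\n'
    out = json.loads(process_response_line(line))
    # Assert only the transformation, not output formatting.
    assert out["result"]["structuredContent"] == {"text": "ok"}
```

The trailing `\n` never needs special handling because `json.loads` ignores surrounding whitespace; the test pins that behaviour down so a stricter parser can't regress it.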
Running:
- `pytest -q --cov=src/mcpbridge_wrapper --cov-report=term-missing`
now yields:
- **Total coverage: 90.00%** (meets the **≥ 90%** requirement)
- All tests pass: **307 passed, 5 skipped**
If you want to push coverage higher (buffer above the threshold), the
current report shows other uncovered lines are mostly in
`src/mcpbridge_wrapper/__main__.py` rather than `transform.py`.
- `ruff format --check src/ tests/` → **32 files already formatted**
- `ruff format src/ tests/` → **32 files left unchanged**
- `pytest -q` → **307 passed, 5 skipped**

So the repo state *now* is formatted and tests still pass.
… docs and configs
…ient Configs (PASS)

# Conflicts:
#   SPECS/ARCHIVE/INDEX.md
…web-ui-args-examples P10-T1-4 Add web UI args examples
Description
Brief description of the changes in this PR.
Type of Change
Quality Gates
Before submitting, ensure all quality gates pass:
Or run individually:
- make test: All tests pass with ≥90% coverage
- make lint: No linting errors
- make format: Code is properly formatted
- make typecheck: Type checking passes
- make doccheck: Documentation is synced with DocC (if docs changed)

Documentation Sync
If you modified files in docs/, ensure corresponding DocC files are also updated:
- docs/installation.md → mcpbridge-wrapper.docc/Installation.md
- docs/cursor-setup.md → mcpbridge-wrapper.docc/CursorSetup.md
- docs/claude-setup.md → mcpbridge-wrapper.docc/ClaudeCodeSetup.md
- docs/codex-setup.md → mcpbridge-wrapper.docc/CodexCLISetup.md
- docs/troubleshooting.md → mcpbridge-wrapper.docc/Troubleshooting.md
- docs/architecture.md → mcpbridge-wrapper.docc/Architecture.md
- docs/environment-variables.md → mcpbridge-wrapper.docc/EnvironmentVariables.md
- README.md → mcpbridge-wrapper.docc/mcpbridge-wrapper.md

Testing
Checklist