
feat(application): Add option to summarize run describe #414

Open
blanca-pablos wants to merge 3 commits into main from feat/summarize-run-describe

Conversation

@blanca-pablos
Collaborator

To give users an easier overview of run progress, especially for runs with large numbers of items, add a --summarize option to application run describe that outputs only the run status and each item's status plus error message, if any.
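The summary view described above can be sketched as a plain function. This is a minimal illustration only: field names such as external_id and error are assumptions about the item payload, not the SDK's actual schema.

```python
def summarize_run(run_status: str, items: list[dict]) -> str:
    """Render run status plus one line per item: status and error message, if any.

    Minimal sketch of the --summarize output; the real command works with the
    Aignostics SDK's run and item objects rather than plain dicts.
    """
    lines = [f"Run: {run_status}"]
    for item in items:
        line = f"  {item['external_id']}: {item['status']}"
        if item.get("error"):
            line += f" ({item['error']})"  # append the error message, if given
        lines.append(line)
    return "\n".join(lines)
```

The point of the sketch is the shape of the output: one run-status line, then one compact line per item, with verbose detail (artifacts, metadata) omitted.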

@blanca-pablos blanca-pablos force-pushed the feat/summarize-run-describe branch from 3e986b0 to 394b89b Compare February 4, 2026 14:24
@codecov

codecov bot commented Feb 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ All tests successful. No failed tests found.

Files with missing lines                 Coverage Δ
src/aignostics/application/_cli.py       65.85% <100.00%> (-2.27%) ⬇️
src/aignostics/application/_utils.py     90.17% <100.00%> (-2.90%) ⬇️

... and 5 files with indirect coverage changes

Comment on lines 294 to 300

    if summarize:
        _print_run_summary(run, run_handle)
    else:
        run_details = _format_run_details(run)
        output = f"[bold]Run Details for {run.run_id}[/bold]\n{'=' * 80}\n{run_details}\n\n[bold]Items:[/bold]"
        console.print(output)
        _retrieve_and_print_run_items(run_handle)
Collaborator

It seems like _print_run_summary has some overlap with _format_run_details. I would suggest something like this:

Suggested change

    - if summarize:
    -     _print_run_summary(run, run_handle)
    - else:
    -     run_details = _format_run_details(run)
    -     output = f"[bold]Run Details for {run.run_id}[/bold]\n{'=' * 80}\n{run_details}\n\n[bold]Items:[/bold]"
    -     console.print(output)
    -     _retrieve_and_print_run_items(run_handle)
    + run_details = _format_run_details(run)
    + output = f"[bold]Run Details for {run.run_id}[/bold]\n{'=' * 80}\n{run_details}\n\n[bold]Items:[/bold]"
    + console.print(output)
    + _retrieve_and_print_run_items(run_handle, summarize)

And then extending _retrieve_and_print_run_items to handle the summary option (essentially skipping the output artifacts part AFAICT).

Collaborator Author

yep you're right, addressed!

blanca-pablos and others added 2 commits February 4, 2026 17:15
Co-authored-by: Oliver Meyer <42039965+olivermeyer@users.noreply.github.com>
@blanca-pablos blanca-pablos added the claude Trigger Claude Code automation label Feb 4, 2026
        str: Formatted status string
        """
    -   if state is RunState.TERMINATED and termination_reason:
    +   if (state.value == RunState.TERMINATED or state.value == ItemState.TERMINATED) and termination_reason:

Bug: The comparison state.value == RunState.TERMINATED incorrectly compares a string to an enum object, which will always evaluate to false, hiding the termination reason.
Severity: MEDIUM

Suggested Fix

Change the condition to compare enum members directly, for example: if (state is RunState.TERMINATED or state is ItemState.TERMINATED) and termination_reason:. Alternatively, compare the string values consistently: if state.value == 'TERMINATED' and termination_reason:.

Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.

Location: src/aignostics/application/_utils.py#L188

Potential issue: In the `_format_status_string` function, the condition `state.value ==
RunState.TERMINATED` incorrectly compares a string value (e.g., `'TERMINATED'`) with an
enum member object (`RunState.TERMINATED`). Because `RunState` and `ItemState` inherit
from `(str, Enum)` and not the more modern `StrEnum`, this comparison will always
evaluate to `False`. As a result, the termination reason is never appended to the status
string for terminated runs or items. This prevents users from seeing why a process was
terminated, degrading the utility of status summaries.
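The comparison semantics can be reproduced with a minimal pair of str-mixin enums (the member values here are assumptions; the real enums live in the SDK). An identity check against RunState.TERMINATED misses ItemState members, while set membership covers both:

```python
from enum import Enum

class RunState(str, Enum):
    RUNNING = "RUNNING"
    TERMINATED = "TERMINATED"

class ItemState(str, Enum):
    PENDING = "PENDING"
    TERMINATED = "TERMINATED"

def format_status(state, termination_reason=None) -> str:
    """Sketch of the fix: accept terminated runs and terminated items alike."""
    # `state is RunState.TERMINATED` fails for ItemState members, even when
    # their string values match, because enum members are distinct objects.
    # Set membership (the suggested fix) handles both enum types.
    if state in {RunState.TERMINATED, ItemState.TERMINATED} and termination_reason:
        return f"{state.value} ({termination_reason})"
    return state.value
```

This sketches the set-membership variant of the suggested fix; whether the original condition misfires in practice depends on the actual member values of the SDK's enums.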


@claude

claude bot commented Feb 4, 2026

PR #414 Review: feat(application): Add option to summarize run describe

Executive Summary

Status: ❌ BLOCKING ISSUES FOUND

This PR adds a --summarize/-s flag to the application run describe command to provide concise run status output. The feature is well-tested (245 new test lines) and the implementation is sound, but there is 1 blocking linting issue that must be fixed before merge.


✅ Strengths

  1. Excellent Test Coverage: 4 comprehensive unit tests added (lines 795-1035) covering:

    • Summarize mode with mixed success/error items
    • Empty runs
    • Run-level errors
    • Default detailed mode verification
  2. Clean Implementation: The summarize parameter is properly threaded through the call stack:

    • CLI → retrieve_and_print_run_details() → _retrieve_and_print_run_items()
    • Keyword-only argument (*, summarize: bool = False) enforces explicit usage
  3. Proper Test Markers: All new tests correctly marked with @pytest.mark.unit

  4. Good Commit Messages: Follows conventional commits format ✅

    • feat(application): Add option to summarize run describe
    • fix(application): Address Oliver's review
  5. Documentation Updated: CLI_REFERENCE.md updated with new flag


🚫 BLOCKING ISSUES (Must Fix)

1. Ruff Linting Failure (CRITICAL)

Location: src/aignostics/application/_utils.py:188

Issue: Complexity check violation - compound condition should use set membership

# CURRENT (line 188):
if (state.value == RunState.TERMINATED or state.value == ItemState.TERMINATED) and termination_reason:
    return f"{state.value} ({termination_reason})"

Linting Error:

PLR1714 Consider merging multiple comparisons. Use a `set` if the elements are hashable.

Fix: Refactor to use set membership for cleaner comparison:

# OPTION 1 (Recommended - Most Pythonic):
if state.value in {RunState.TERMINATED.value, ItemState.TERMINATED.value} and termination_reason:
    return f"{state.value} ({termination_reason})"

# OPTION 2 (Alternative - Direct enum comparison):
if state in {RunState.TERMINATED, ItemState.TERMINATED} and termination_reason:
    return f"{state.value} ({termination_reason})"

Why This Matters: This is medical device software. Linting rules enforce code quality standards that prevent bugs. PLR1714 exists because set membership tests are:

  • Faster (O(1) lookup vs O(n) comparisons)
  • More maintainable (easier to add new states)
  • Less error-prone (no chance of forgetting an or operator in a long chain)

Verify Fix:

make lint

💡 SUGGESTIONS (Non-Blocking)

1. Help Text Improvement

Location: src/aignostics/application/_cli.py:927

# CURRENT:
help="Show only run and item status summary (external ID, state, error message)"

# SUGGESTION (more explicit about what's hidden):
help="Show concise status summary (omits artifact download URLs and metadata)"

This makes it clearer what the flag does (omits verbose output) rather than what it shows.


✅ Architecture Compliance

  • Modulith Boundaries: ✅ Changes confined to application module
  • Service Layer: ✅ Logic in _utils.py (service layer), not in CLI
  • CLI Pattern: ✅ Proper Typer option with short flag -s
  • Test Strategy: ✅ Unit tests with mocked console output

🎯 Testing Verification

Test Markers Check

# Verified: All tests have required markers
uv run pytest tests/aignostics/application/utils_test.py -m "not unit and not integration and not e2e" --collect-only
# Result: 61 collected / 61 deselected / 0 selected ✅

Coverage Impact

  • New Lines: 245 test lines added
  • Module Coverage: _utils.py at 80% from unit tests alone (acceptable: exceeds the 85% minimum when combined with integration tests)
  • Tests Pass: ✅ All 61 tests passing

📋 Required Actions Before Merge

MUST FIX (Blocking)

  1. ✅ Fix linting error on line 188 using set membership
    # Verify after fix:
    make lint

RECOMMENDED (Non-Blocking)

  1. Consider help text improvement for clarity

🔍 Final Verification Commands

After fixing the linting issue, run:

# Full validation pipeline
make lint && make test_unit

# Or the complete suite
make all

Expected result: All checks pass ✅


🎖️ Excellent Work On

  1. Comprehensive testing - 4 well-structured unit tests covering edge cases
  2. Minimal code change - Only 11 lines added to _cli.py, 12 modified in _utils.py
  3. Proper parameter threading - Keyword-only argument prevents accidental positional usage
  4. Documentation - CLI reference updated

Once the linting issue is fixed, this PR will be ready to merge. 🚀


Reviewed by: Claude Code (AI-powered code review)
Review Date: 2026-02-04
PR Branch: feat/summarize-run-describe
Target: main

@sonarqubecloud

sonarqubecloud bot commented Feb 4, 2026

Quality Gate failed

Failed conditions
20.1% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


Labels

claude Trigger Claude Code automation
