Skip to content

Conversation

@helmut-hoffer-von-ankershoffen
Copy link
Contributor

No description provided.

@github-actions
Copy link
Contributor

github-actions bot commented Nov 19, 2025

Claude finished @helmut-hoffer-von-ankershoffen's task —— View job


PR Review: Feat/custom pipeline nicegui 3_3_1

I've completed a comprehensive review of PR #268 following the Aignostics Python SDK standards for medical device software.


✅ Summary: APPROVED with Suggestions

This PR successfully adds configurable pipeline orchestration (GPU/CPU resource control) to the SDK metadata schema, with appropriate CLI and GUI interfaces. All critical checks pass.

Key Achievements:

  • ✅ All code quality checks pass (ruff, mypy, pyright)
  • ✅ All test markers properly set (7 new unit tests)
  • ✅ Architecture compliance verified (GUI→Service pattern)
  • ✅ No security concerns identified
  • ✅ Backward compatible (pipeline config is optional)
  • ✅ Access control implemented for pipeline configuration UI

📊 Changes Overview

Files Changed: 9 files (+499/-70 lines)

  • src/aignostics/platform/_sdk_metadata.py (+62/-1) - Core pipeline models
  • src/aignostics/application/_cli.py (+53/-1) - CLI parameters
  • src/aignostics/application/_gui/_page_application_describe.py (+93/-1) - GUI pipeline step
  • tests/aignostics/platform/sdk_metadata_test.py (+121) - New tests
  • tests/aignostics/platform/e2e_test.py (+14) - E2E test updates
  • pyproject.toml (+8/-8) - Dependency updates
  • Other test cleanups

🔍 Detailed Review

1. ✅ Test Markers (CRITICAL CHECK - PASSED)

Status: ✅ All tests properly marked

Verified no unmarked tests exist:

uv run pytest -m "not unit and not integration and not e2e" --collect-only
# Result: 0 tests selected ✅

All 7 new tests in TestPipelineConfiguration class correctly marked with @pytest.mark.unit:

  • test_pipeline_config_defaults()
  • test_pipeline_config_custom_values()
  • test_gpu_type_enum()
  • test_provisioning_mode_enum()
  • test_metadata_with_pipeline_config()
  • test_metadata_without_pipeline_config()
  • test_gpu_config_invalid_max_gpus()

2. ✅ Code Quality Checks (ALL PASSED)

Linting: ✅ PASSED

ruff check . → All checks passed!
ruff format --check . → 151 files already formatted
pyright → 0 errors, 0 warnings
mypy src → Success: no issues found in 79 source files

Tests: ✅ 7/7 new tests PASSED in 3.07s

3. ⚠️ Commit Messages (1 NEEDS IMPROVEMENT)

Status: 3/4 commits follow conventional format

Good commits:

  • 8eed64b - chore(deps): bump nicegui to 3.3.1
  • 35b5f56 - test(application): reactivate test_cli_run_submit_and_describe_and_cancel_and_download_and_delete against production
  • 5fef607 - feat(platform,application): configureable pipeline

Needs improvement:

  • ⚠️ c55a75d - chore(deps): some
    • Issue: Commit message is too vague ("some")
    • Fix: Should be more descriptive like chore(deps): update fastapi, boto3, and other dependencies

4. ✅ Architecture Compliance (PASSED)

Modulith Pattern: ✅ Correctly followed

  • GUI (_page_application_describe.py) imports Service via from .._service import Service (line 20)
  • No CLI↔GUI cross-dependencies
  • Both layers depend only on Service layer
  • Proper use of BaseService pattern

Dependency Injection: ✅ Correct

  • Service layer instantiated properly: Service().application_run_submit_from_metadata(...)

5. ✅ SDK Metadata Schema Design (EXCELLENT)

Schema Version: 0.0.4 (appropriate minor bump)

New Models:

class GPUType(StrEnum):
    L4 = "L4"
    A100 = "A100"

class ProvisioningMode(StrEnum):
    SPOT = "SPOT"
    ON_DEMAND = "ON_DEMAND"

class PipelineConfig(BaseModel):
    gpu: GPUConfig  # gpu_type, provisioning_mode, max_gpus_per_slide
    cpu: CPUConfig  # provisioning_mode

Strengths:

  • ✅ Uses Pydantic PositiveInt for validation (max_gpus_per_slide)
  • ✅ Enums prevent invalid values
  • ✅ Optional field (pipeline: PipelineConfig | None) - backward compatible
  • ✅ Comprehensive test coverage including validation edge cases
  • ✅ Proper defaults defined as constants

6. ✅ CLI Implementation (GOOD)

New Parameters Added:

  • --gpu-type (default: "A100")
  • --gpu-provisioning-mode (default: "ON_DEMAND")
  • --max-gpus-per-slide (min=1, max=8, default=1)
  • --cpu-provisioning-mode (default: "ON_DEMAND")

Applied to both:

  • run_execute() (line 369-385)
  • run_submit() (line 676-692)

Strengths:

  • ✅ Validation constraints on max_gpus_per_slide (1-8 range)
  • ✅ Clear help text for each parameter
  • ✅ Consistent defaults across CLI and GUI
  • ✅ Custom metadata properly constructed and passed

Minor Suggestion:

  • Consider adding validation for GPU type and provisioning mode enums in Typer options (see inline comment)

7. ✅ GUI Implementation (EXCELLENT)

Location: _page_application_describe.py lines 837-909

Access Control: ✅ Properly implemented

can_configure_pipeline = (
    user_info
    and user_info.organization
    and user_info.organization.name
    and user_info.organization.name.lower() in {"aignostics", "pre-alpha-org", "lmu", "charite"}
)

Strengths:

  • ✅ Organization-based feature gating (medical device compliance)
  • ✅ Clear UI explanations for GPU/CPU configuration
  • ✅ Proper binding to SubmitForm data model
  • ✅ Helpful descriptions for SPOT vs ON_DEMAND provisioning
  • ✅ Graceful degradation when feature unavailable
  • ✅ Proper NiceGUI element marking for testing

UI Elements:

  • GPU Type selector (L4, A100)
  • Max GPUs per slide (number input, 1-8)
  • GPU Provisioning Mode (SPOT, ON_DEMAND with descriptions)
  • CPU Provisioning Mode (SPOT, ON_DEMAND)

8. ✅ Testing Strategy (COMPREHENSIVE)

Test Categories: All properly marked as @pytest.mark.unit

Coverage:

  1. ✅ Default configuration values
  2. ✅ Custom configuration values
  3. ✅ Enum validation (GPUType, ProvisioningMode)
  4. ✅ Integration with Run SDK metadata
  5. ✅ Optional field validation (works with/without pipeline config)
  6. ✅ Edge cases (invalid max_gpus_per_slide values)

E2E Test Updates:

  • ✅ Added pipeline configuration to _submit_and_validate() helper
  • ✅ Uses constants from constants_test.py (different for staging vs production)

Test Constants Pattern (EXCELLENT):

# Production: Conservative settings
PIPELINE_GPU_TYPE = "A100"
PIPELINE_GPU_PROVISIONING_MODE = "ON_DEMAND"

# Staging: Aggressive testing settings
PIPELINE_GPU_TYPE = "L4"
PIPELINE_GPU_PROVISIONING_MODE = "SPOT"

9. ✅ Security & Medical Device Compliance

Access Control: ✅ Implemented

  • Organization-based feature gating in GUI
  • Hardcoded allowlist: {"aignostics", "pre-alpha-org", "lmu", "charite"}

Data Validation: ✅ Strong

  • Pydantic models with PositiveInt validation
  • Enum constraints prevent invalid values
  • CLI parameter ranges enforced (1-8 GPUs)

Backward Compatibility: ✅ Maintained

  • Pipeline config is optional field
  • Existing runs without pipeline config still valid

Concern - Organization Allowlist Hardcoded:

  • ⚠️ Hardcoded organization names in GUI (line 843) - see inline comment
  • Suggestion: Consider moving to configuration or backend authorization
  • Why: As more organizations need this feature, changing code for each org is not scalable
  • Alternative: Backend should control feature access, not frontend

10. ⚠️ Documentation Updates (MISSING)

Status: ⚠️ No CLAUDE.md updates

Issue: New pipeline configuration feature not documented in:

  • src/aignostics/platform/CLAUDE.md (SDK metadata section)
  • src/aignostics/application/CLAUDE.md (CLI usage examples)

Required Updates:

platform/CLAUDE.md should document:

### Pipeline Configuration (NEW in v0.2.201)

The SDK now supports configurable pipeline orchestration for GPU/CPU resources:

**Models:**
- `PipelineConfig`: Top-level pipeline configuration
- `GPUConfig`: GPU type (L4/A100), provisioning mode, max GPUs per slide
- `CPUConfig`: CPU provisioning mode

**Usage:**
\`\`\`python
custom_metadata = {
    "pipeline": {
        "gpu": {
            "gpu_type": "A100",
            "provisioning_mode": "ON_DEMAND",
            "max_gpus_per_slide": 1
        },
        "cpu": {
            "provisioning_mode": "ON_DEMAND"
        }
    }
}
\`\`\`

application/CLAUDE.md should document:

### CLI Pipeline Configuration

\`\`\`bash
aignostics application run submit \
  --application-id heta \
  --files "*.svs" \
  --gpu-type A100 \
  --gpu-provisioning-mode ON_DEMAND \
  --max-gpus-per-slide 2 \
  --cpu-provisioning-mode SPOT
\`\`\`

11. ✅ Dependency Updates (ROUTINE)

Major updates:

  • nicegui[native]: 3.1.0 → 3.3.1 (fixes regressions in 3.2.0)
  • fastapi: 0.121.1 → 0.121.3
  • boto3: 1.40.61 → 1.41.0
  • google-cloud-storage: 3.5.0 → 3.6.0
  • idc-index-data: 22.1.2 → 22.1.5
  • ruff: 0.14.4 → 0.14.5

Assessment: ✅ Routine patch/minor updates, no breaking changes expected

12. ✅ Test Cleanup (GOOD)

File: tests/aignostics/application/cli_test.py

Change: Removed @pytest.mark.skipif decorator that was skipping test in production

-@pytest.mark.skipif(
-    os.getenv("AIGNOSTICS_PLATFORM_ENVIRONMENT", "staging") == "production",
-    reason="Broken when targeting production",
-)

Assessment: ✅ Good - test now runs in both environments


🎯 Recommendations

Blocking Issues

None - All critical checks passed ✅

High Priority Suggestions

  1. 📝 Add Documentation (Recommended)

    • Update src/aignostics/platform/CLAUDE.md with pipeline configuration schema
    • Update src/aignostics/application/CLAUDE.md with CLI usage examples
    • Consider adding GUI usage guide with screenshots
  2. ♻️ Improve Commit Message

    • Reword c55a75d from "chore(deps): some" to be more descriptive
    • Can be done with interactive rebase before merge
  3. 🔒 Consider Backend Authorization for Pipeline Features

    • Current: Hardcoded org allowlist in GUI (line 843) - see inline comment
    • Suggested: Move authorization logic to backend API
    • Benefit: More maintainable and secure as feature rolls out

Low Priority Enhancements

  1. 🛡️ Add Enum Validation to CLI (see inline comment on _cli.py:369)

    • Use enum types directly instead of strings for better type safety
    • Current approach is acceptable - this is an enhancement
  2. 📊 Add JSON Schema Documentation Generation

    • Consider auto-generating docs from get_run_sdk_metadata_json_schema()
    • Publish schema to docs/source/_static/ as documented in root CLAUDE.md

🎉 Praise

Excellent work on:

  • ✅ Comprehensive test coverage (7 new unit tests) - see inline comment on test quality
  • ✅ Proper use of Pydantic validation with enums - see inline comment on constants
  • ✅ Clean architecture adherence (GUI→Service pattern)
  • ✅ Backward compatibility (optional pipeline field)
  • ✅ Access control implementation in GUI
  • ✅ Environment-specific test constants pattern
  • ✅ Clear UI descriptions for provisioning modes
  • ✅ Removing previous TODO comments

📋 PR Merge Checklist

Before merging, please:

  • Add documentation updates to CLAUDE.md files (recommended)
  • Consider improving commit c55a75d message (optional)
  • Verify CI/CD passes (ongoing)
  • Add PR description explaining the feature and use cases

Optional but recommended:

  • Consider backend authorization for pipeline features (see inline comment)
  • Add JSON schema documentation

🚀 CI/CD Notes

Recommendation: Add label skip:test_long_running to this PR

  • Why: Speeds up CI pipeline by skipping long-running E2E tests
  • Safe: All unit and integration tests pass, changes are well-tested

Expected CI Time:

  • With label: ~15 minutes
  • Without label: ~30-45 minutes

📝 Final Verdict

Status:APPROVED

This PR demonstrates high-quality medical device software engineering:

  • Strong type safety with Pydantic and enums
  • Comprehensive testing with proper markers
  • Architecture compliance and security considerations
  • Backward compatibility maintained

The implementation is production-ready with minor documentation gaps that can be addressed in follow-up or before merge.

Review complete with 4 inline comments providing detailed feedback on:

  • Security/scalability considerations for organization allowlist
  • CLI enum validation enhancement suggestion
  • Praise for excellent constant design
  • Praise for comprehensive test coverage

user_info
and user_info.organization
and user_info.organization.name
and user_info.organization.name.lower() in {"aignostics", "pre-alpha-org", "lmu", "charite"}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Security & Scalability Consideration:

Hardcoding organization names in the frontend for feature gating has maintainability implications:

user_info.organization.name.lower() in {"aignostics", "pre-alpha-org", "lmu", "charite"}

Suggestions:

  1. Backend Authorization: Move this logic to the API backend and check via a user permission/feature flag
  2. Configuration-based: Store allowed organizations in _settings.py or environment variables
  3. Feature Flag Service: Use a feature flag system for gradual rollout

Current Risk: Low (alpha feature with known orgs)
Future Risk: Medium (requires code deployment for each new organization)

Why it matters: As this feature rolls out to more organizations, changing code for each org request is not sustainable for a medical device SDK.

validate_only: Annotated[
bool, typer.Option(help="If True, cancel the run post validation, before analysis.")
] = False,
gpu_type: Annotated[
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Enhancement Suggestion:

Consider using enum types directly for validation instead of strings:

from aignostics.platform._sdk_metadata import GPUType, ProvisioningMode

gpu_type: Annotated[
    GPUType,  # Use enum directly
    typer.Option(help="GPU type to use for processing."),
] = GPUType.A100

Benefits:

  • ✅ Type safety at runtime
  • ✅ IDE autocomplete support
  • ✅ Automatic validation (rejects invalid values like "A200")
  • ✅ Consistent with internal models

Current approach is acceptable - this is an enhancement, not a bug.

ITEM_SDK_METADATA_SCHEMA_VERSION = "0.0.3"

# Pipeline orchestration defaults
DEFAULT_GPU_TYPE = "A100"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Excellent Design Choice

Defining pipeline defaults as module-level constants:

  • ✅ Makes defaults explicit and discoverable
  • ✅ Easier to update across all usages
  • ✅ Testable (constants can be imported in tests)
  • ✅ Self-documenting for SDK users

This follows best practices for configuration management in enterprise software.

validate_item_sdk_metadata(metadata)


class TestPipelineConfiguration:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Excellent Test Coverage

This test class demonstrates best practices for medical device software testing:

  1. Comprehensive coverage: Defaults, custom values, enums, validation, edge cases
  2. Clear test names: Each test describes exactly what it validates
  3. Proper markers: All tests correctly marked as @pytest.mark.unit
  4. Edge case testing: Invalid values (max_gpus_per_slide=0, -1) properly tested
  5. Integration testing: Tests both with and without pipeline config (optional field)
  6. Validation testing: Ensures Pydantic PositiveInt constraint works

Test execution: All 7 tests pass in 3.07s ✅

This level of test quality is exactly what's needed for enterprise-grade medical device software.

@codecov
Copy link

codecov bot commented Nov 19, 2025

❌ 2 Tests Failed:

Tests completed Failed Passed Skipped
570 2 568 8
View the full list of 2 ❄️ flaky test(s)
tests.aignostics.application.gui_test::test_gui_run_download

Flake rate in main: 29.25% (Passed 104 times, Failed 43 times)

Stack Traces | 6.23s run time
user = <nicegui.testing.user.User object at 0x7f82ea79a780>
runner = <typer.testing.CliRunner object at 0x7f82c6b7b6f0>
tmp_path = PosixPath('.../pytest-of-runner/pytest-13/test_gui_run_download1')
silent_logging = None
record_property = <function record_property.<locals>.append_property at 0x7f82e9f625c0>

    @pytest.mark.e2e
    @pytest.mark.long_running
    @pytest.mark.flaky(retries=1, delay=5)
    @pytest.mark.timeout(timeout=60 * 5)
    @pytest.mark.sequential  # Helps on Linux with image analysis step otherwise timing out
    async def test_gui_run_download(  # noqa: PLR0915
        user: User, runner: CliRunner, tmp_path: Path, silent_logging: None, record_property
    ) -> None:
        """Test that the user can download a run result via the GUI."""
        record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
        with patch(
            "aignostics.application._gui._page_application_run_describe.get_user_data_directory",
            return_value=tmp_path,
        ):
            # Find run
            runs = Service().application_runs(
                application_id=HETA_APPLICATION_ID,
                application_version=HETA_APPLICATION_VERSION,
                external_id=SPOT_0_GS_URL,
                has_output=True,
                limit=1,
            )
            if not runs:
                message = f"No matching runs found for application {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION}). "
                message += "This test requires the scheduled test test_application_runs_heta_version passing first."
                pytest.skip(message)
    
            run_id = runs[0].run_id
    
            # Explore run
            run = Service().application_run(run_id).details()
            print(
                f"Found existing run: {run.run_id}\n"
                f"application: {run.application_id} ({run.version_number})\n"
                f"status: {run.state}, output: {run.output}\n"
                f"submitted at: {run.submitted_at}, terminated at: {run.terminated_at}\n"
                f"statistics: {run.statistics!r}\n",
                f"custom_metadata: {run.custom_metadata!r}\n",
            )
            # Step 1: Go to latest completed run
            await user.open(f"/application/run/{run.run_id}")
            await user.should_see(f"Run {run.run_id}", retries=100)
            await user.should_see(
                f"Run of {run.application_id} ({run.version_number})",
                retries=100,
            )
    
            # Step 2: Open Result Download dialog
            await user.should_see(marker="BUTTON_DOWNLOAD_RUN", retries=100)
            user.find(marker="BUTTON_DOWNLOAD_RUN").click()
    
            # Step 3: Select Data
            download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
            assert not download_run_button.enabled, "Download button should be disabled before selecting target"
            await user.should_see(marker="BUTTON_DOWNLOAD_DESTINATION_DATA", retries=100)
            user.find(marker="BUTTON_DOWNLOAD_DESTINATION_DATA").click()
    
            # Step 3: Trigger Download
            await sleep(2)  # Wait a bit for button state to update so we can click
            download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
>           assert download_run_button.enabled, "Download button should be enabled after selecting target"
E           AssertionError: Download button should be enabled after selecting target
E           assert False
E            +  where False = <nicegui.elements.button.Button object at 0x7f82c45e4c50>.enabled

.../aignostics/application/gui_test.py:395: AssertionError
tests.aignostics.qupath.gui_test::test_gui_run_qupath_install_to_inspect

Flake rate in main: 34.29% (Passed 92 times, Failed 48 times)

Stack Traces | 24.1s run time
user = <nicegui.testing.user.User object at 0x7f82ea0308d0>
runner = <typer.testing.CliRunner object at 0x7f82e9e95750>
tmp_path = PosixPath('.../pytest-of-runner/pytest-13/test_gui_run_qupath_install_to0')
silent_logging = None, qupath_teardown = None
record_property = <function record_property.<locals>.append_property at 0x7f82bbf19620>

    @pytest.mark.e2e
    @pytest.mark.long_running
    @pytest.mark.skipif(
        (platform.system() == "Linux" and platform.machine() in {"aarch64", "arm64"}) or platform.system() == "Windows",
        reason="QuPath is not supported on ARM64 Linux; Windows support is not fully tested yet",
    )
    @pytest.mark.timeout(timeout=60 * 15)
    @pytest.mark.sequential
    async def test_gui_run_qupath_install_to_inspect(  # noqa: C901, PLR0912, PLR0913, PLR0914, PLR0915, PLR0917
        user: User, runner: CliRunner, tmp_path: Path, silent_logging: None, qupath_teardown: None, record_property
    ) -> None:
        """Test installing QuPath, downloading run results, creating QuPath project from it, and inspecting results."""
        record_property("tested-item-id", "TC-QUPATH-01, SPEC-GUI-SERVICE")
    
        # Find run
        runs = Service().application_runs(
            application_id=HETA_APPLICATION_ID,
            application_version=HETA_APPLICATION_VERSION,
            external_id=SPOT_0_GS_URL,
            has_output=True,
            limit=1,
        )
        if not runs:
            message = f"No matching runs found for application {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION}). "
            message += "This test requires the scheduled test test_application_runs_heta_version passing first."
            pytest.skip(message)
    
        run_id = runs[0].run_id
    
        # Explore run
        run = Service().application_run(run_id).details()
        print(
            f"Found existing run: {run.run_id}\n"
            f"application: {run.application_id} ({run.version_number})\n"
            f"status: {run.state}, output: {run.output}\n"
            f"submitted at: {run.submitted_at}, terminated at: {run.terminated_at}\n"
            f"statistics: {run.statistics!r}\n",
            f"custom_metadata: {run.custom_metadata!r}\n",
        )
    
        # Explore results
        results = list(Service().application_run(run_id).results())
        assert results, f"No results found for run {run_id}"
        for item in results:
            print(
                f"Found item: {item.item_id}, status: {item.state}, output: {item.output}, "
                f"external_id: {item.external_id}\n"
                f"custom_metadata: {item.custom_metadata!r}\n",
            )
    
        with patch(
            "aignostics.application._gui._page_application_run_describe.get_user_data_directory", return_value=tmp_path
        ):
            # Step 1: (Re)Install QuPath
            result = runner.invoke(cli, ["qupath", "uninstall"])
            assert result.exit_code in {0, 2}, f"Uninstall command failed with exit code {result.exit_code}"
            was_installed = not result.exit_code
    
            result = runner.invoke(cli, ["qupath", "install"])
            output = normalize_output(result.output, strip_ansi=True)
            assert f"QuPath v{QUPATH_VERSION} installed successfully" in output, (
                f"Expected 'QuPath v{QUPATH_VERSION} installed successfully' in output.\nOutput: {output}"
            )
            assert result.exit_code == 0
    
            # Step 2: Go to latest completed run via GUI
            await user.open(f"/application/run/{run.run_id}")
            await user.should_see(f"Run {run.run_id}")
            await user.should_see(f"Run of {HETA_APPLICATION_ID} ({HETA_APPLICATION_VERSION})")
    
            # Step 3: Open Result Download dialog
            await user.should_see(marker="BUTTON_OPEN_QUPATH", retries=100)
            user.find(marker="BUTTON_OPEN_QUPATH").click()
    
            # Step 4: Select Data destination
            await user.should_see(marker="BUTTON_DOWNLOAD_DESTINATION_DATA")
            download_destination_data_button: ui.button = user.find(
                marker="BUTTON_DOWNLOAD_DESTINATION_DATA"
            ).elements.pop()
            assert download_destination_data_button.enabled, "Download destination button should be enabled"
            user.find(marker="BUTTON_DOWNLOAD_DESTINATION_DATA").click()
            await assert_notified(user, "Using Launchpad results directory", 30)
    
            # Step 5: Trigger Download
            await user.should_see(marker="DIALOG_BUTTON_DOWNLOAD_RUN")
            download_run_button: ui.button = user.find(marker="DIALOG_BUTTON_DOWNLOAD_RUN").elements.pop()
>           assert download_run_button.enabled, "Download button should be enabled before downloading"
E           AssertionError: Download button should be enabled before downloading
E           assert False
E            +  where False = <nicegui.elements.button.Button object at 0x7f82bbc5c550>.enabled

.../aignostics/qupath/gui_test.py:235: AssertionError

To view more test analytics, go to the Test Analytics Dashboard
📋 Got 3 mins? Take this short survey to help us improve Test Analytics.

@sonarqubecloud
Copy link

Quality Gate Failed Quality Gate failed

Failed conditions
2 New issues
74.0% Coverage on New Code (required ≥ 80%)
9.0% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Catch issues before they fail your Quality Gate with our IDE extension SonarQube for IDE

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants