# Feat/custom pipeline #267
Codecov Report: ❌ Patch coverage is 72.91%, below the 75% target.
Claude finished @helmut-hoffer-von-ankershoffen's task.

## Post-Merge Review: PR #267 - Custom Pipeline Support ✅

**Status:** ✅ PR Merged | 🟡 Quality Issues Identified

### Executive Summary

This PR successfully introduces custom pipeline configuration for GPU/CPU resource orchestration, enabling dynamic resource allocation for computational pathology workflows. While the implementation is architecturally sound and passes all linting checks, coverage falls below the 75% threshold, primarily due to missing GUI tests for the new pipeline configuration UI.

### ✅ What Went Well

**1. Architecture Compliance ✓**
**2. Type Safety & Code Quality ✓**

```text
✓ ruff check - All checks passed
✓ ruff format - 151 files already formatted
✓ pyright - 0 errors, 0 warnings
✓ mypy - Success: no issues found in 79 source files
```

**3. Test Markers Compliance ✓**

All 7 new pipeline configuration tests are properly marked with `@pytest.mark.unit`:
```python
@pytest.mark.unit
def test_pipeline_config_defaults() -> None: ...
```

Verification:

**4. Domain Model Design ✓**

Strong Pydantic models with proper validation:

```python
class PipelineConfig(BaseModel):
    gpu: GPUConfig = Field(default_factory=GPUConfig)
    cpu: CPUConfig = Field(default_factory=CPUConfig)

class GPUConfig(BaseModel):
    gpu_type: GPUType                    # Enum: L4, A100
    provisioning_mode: ProvisioningMode  # Enum: SPOT, ON_DEMAND
    max_gpus_per_slide: PositiveInt      # Must be positive
```
**5. Security & Access Control ✓**

Organization-based feature gating:

```python
can_configure_pipeline = (
    user_info.organization.name.lower()
    in {"aignostics", "pre-alpha-org", "lmu", "charite"}
)
```

### 🟡 Issues Requiring Attention

**1. Coverage Below Threshold (BLOCKING)**

- Problem: Patch coverage is 72.91% (target: 75%)
- Root Cause: New GUI code in
- Impact:
- Recommendation: Add GUI integration tests (see inline comments for detailed examples)

**2. SonarQube Code Duplication (WARNING)**

- Problem: 9% duplication on new code (target: ≤ 3%)
- Likely Cause: Pipeline configuration defaults and organization allow-lists appear in multiple places; see inline comments for specific locations
- Recommendation: Extract to constants module (see inline comments)

**3. Missing Documentation Updates**

- Problem: No updates to module-specific CLAUDE.md
- Required Updates:

File:

Add section:

### Pipeline Configuration (NEW in v0.2.202)
**Purpose**: Dynamic resource orchestration for computational pathology workloads.
**Models** (`_sdk_metadata.py`):
- `PipelineConfig`: Top-level pipeline configuration
- `GPUConfig`: GPU type (L4/A100), provisioning mode, max GPUs per slide
- `CPUConfig`: CPU provisioning mode
**CLI Usage**:
```bash
# Submit with custom pipeline config via custom metadata
aignostics application run submit heta metadata.csv \
  --custom-metadata '{"pipeline": {"gpu": {"gpu_type": "A100", "max_gpus_per_slide": 2}}}'
```

**GUI Usage**:
**Security**: Pipeline configuration is only available to authorized organizations.
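For reference, a minimal illustrative sketch (not part of the PR) of how the JSON string passed to `--custom-metadata` above could be assembled in Python; the dict layout mirrors the `PipelineConfig`/`GPUConfig`/`CPUConfig` fields shown in this review, and the concrete values are assumptions:

```python
import json

# Illustrative only: assemble the same payload the CLI example passes
# via --custom-metadata. Keys mirror PipelineConfig -> GPUConfig/CPUConfig.
custom_metadata = {
    "pipeline": {
        "gpu": {
            "gpu_type": "A100",            # GPUType enum value (L4 or A100)
            "provisioning_mode": "SPOT",   # ProvisioningMode: SPOT or ON_DEMAND
            "max_gpus_per_slide": 2,       # must be a positive integer
        },
        "cpu": {
            "provisioning_mode": "ON_DEMAND",
        },
    },
}

# JSON string suitable for the --custom-metadata flag shown above.
print(json.dumps(custom_metadata))
```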
```python
# Build custom metadata with pipeline configuration
custom_metadata = {
    "pipeline": {
        "gpu": {
            "gpu_type": submit_form.gpu_type,
            "provisioning_mode": submit_form.gpu_provisioning_mode,
            "max_gpus_per_slide": submit_form.max_gpus_per_slide,
        },
        "cpu": {
            "provisioning_mode": submit_form.cpu_provisioning_mode,
        },
    },
}
```
Coverage Issue: These 13 lines are uncovered by tests (contributing to 72.91% < 75% threshold).
Recommendation: Add a GUI integration test that verifies `custom_metadata` includes the pipeline configuration:
```python
@pytest.mark.integration
async def test_submit_with_pipeline_config(app: nicegui_testing.Screen) -> None:
    """Test pipeline configuration is included in run submission."""
    # Mock user with pipeline access
    app.storage.tab["user_info"] = UserInfo(
        organization=Organization(name="aignostics")
    )
    await app.open("/application/heta")

    # Set pipeline config values
    app.find("SELECT_GPU_TYPE").set_value("A100")
    app.find("NUMBER_MAX_GPUS_PER_SLIDE").set_value(2)

    with patch.object(Service, "application_run_submit_from_metadata") as mock:
        app.find("BUTTON_SUBMISSION_UPLOAD").click()
        assert mock.call_args.kwargs["custom_metadata"]["pipeline"]["gpu"]["gpu_type"] == "A100"
```

This would cover the `custom_metadata` building logic and improve patch coverage.
```python
user_info: UserInfo | None = app.storage.tab.get("user_info", None)
can_configure_pipeline = (
    user_info
    and user_info.organization
    and user_info.organization.name
    and user_info.organization.name.lower() in {"aignostics", "pre-alpha-org", "lmu", "charite"}
)
```
Duplication Issue: Organization allow-list appears multiple times in this file (lines 796, 808, 846).
Recommendation: Extract to module constant:
```python
# src/aignostics/application/_constants.py
PIPELINE_CONFIG_ALLOWED_ORGS = {"aignostics", "pre-alpha-org", "lmu", "charite"}

# Then use:
can_configure_pipeline = (
    user_info
    and user_info.organization
    and user_info.organization.name
    and user_info.organization.name.lower() in PIPELINE_CONFIG_ALLOWED_ORGS
)
```

This reduces the 9% code duplication flagged by SonarQube.
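A minimal unit-test sketch (illustrative, not from the PR) for the extracted allow-list; `PIPELINE_CONFIG_ALLOWED_ORGS` is the constant proposed above, and its import path is an assumption:

```python
import pytest

# Assumed import path, matching the extraction suggested above.
from aignostics.application._constants import PIPELINE_CONFIG_ALLOWED_ORGS


@pytest.mark.unit
@pytest.mark.parametrize(
    ("org_name", "expected"),
    [
        ("aignostics", True),
        ("Charite", True),   # gating lowercases the name before comparing
        ("other-org", False),
    ],
)
def test_pipeline_config_org_gating(org_name: str, expected: bool) -> None:
    """The allow-list check should be case-insensitive and reject unknown orgs."""
    assert (org_name.lower() in PIPELINE_CONFIG_ALLOWED_ORGS) is expected
```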
```python
# Pipeline orchestration defaults
DEFAULT_GPU_TYPE = "A100"
DEFAULT_MAX_GPUS_PER_SLIDE = 1
DEFAULT_GPU_PROVISIONING_MODE = "ON_DEMAND"
DEFAULT_CPU_PROVISIONING_MODE = "ON_DEMAND"
```
Code Quality: Excellent use of module-level constants for defaults!
Minor Suggestion: These constants appear in multiple places. Consider referencing them from a shared constants module to reduce duplication:
```python
# src/aignostics/constants.py (already exists)
# Add pipeline defaults here so both CLI and GUI can reference them
from aignostics.constants import (
    DEFAULT_GPU_TYPE,
    DEFAULT_MAX_GPUS_PER_SLIDE,
    DEFAULT_GPU_PROVISIONING_MODE,
    DEFAULT_CPU_PROVISIONING_MODE,
)
```

This would help with the 9% code duplication issue flagged by SonarQube.
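One possible shape for that wiring, sketched under the assumptions that the defaults live in `aignostics.constants`, that `GPUType`/`ProvisioningMode` are string-valued enums exported from `_sdk_metadata.py`, and that the models accept defaults as shown; this is illustrative, not the PR's implementation:

```python
# Illustrative sketch only: give the Pydantic models their defaults from the
# shared constants so CLI, GUI, and SDK stay in sync.
from pydantic import BaseModel, PositiveInt

from aignostics.constants import (  # assumed location of the shared defaults
    DEFAULT_CPU_PROVISIONING_MODE,
    DEFAULT_GPU_PROVISIONING_MODE,
    DEFAULT_GPU_TYPE,
    DEFAULT_MAX_GPUS_PER_SLIDE,
)
from aignostics.platform._sdk_metadata import GPUType, ProvisioningMode  # assumed exports


class GPUConfig(BaseModel):
    gpu_type: GPUType = GPUType(DEFAULT_GPU_TYPE)
    provisioning_mode: ProvisioningMode = ProvisioningMode(DEFAULT_GPU_PROVISIONING_MODE)
    max_gpus_per_slide: PositiveInt = DEFAULT_MAX_GPUS_PER_SLIDE


class CPUConfig(BaseModel):
    provisioning_mode: ProvisioningMode = ProvisioningMode(DEFAULT_CPU_PROVISIONING_MODE)
```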
```python
class PipelineConfig(BaseModel):
    """Pipeline configuration for dynamic orchestration."""

    gpu: GPUConfig = Field(
        default_factory=GPUConfig,
        description="GPU resource configuration",
    )
    cpu: CPUConfig = Field(
        default_factory=CPUConfig,
        description="CPU resource configuration",
    )
```
Excellent Design: 👍 `PipelineConfig` model with proper Pydantic validation
Strong points:
- ✅ Enum-based types (`GPUType`, `ProvisioningMode`) prevent typos
- ✅ `PositiveInt` validation on `max_gpus_per_slide`
- ✅ Default factory pattern for mutable defaults
- ✅ Clear field descriptions for documentation
This is a great example of domain-driven design with strong type safety!
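To make the model's role concrete, a small illustrative sketch (not from the PR) of building a `PipelineConfig` and serializing it into the custom-metadata shape; the enum member names (`GPUType.A100`, `ProvisioningMode.SPOT`) and the import path are assumptions based on the fields shown above:

```python
# Illustrative only: construct a pipeline configuration and dump it to the
# dict shape used for the "pipeline" key in custom metadata.
from aignostics.platform._sdk_metadata import (  # assumed module, per the test import below
    CPUConfig,
    GPUConfig,
    GPUType,
    PipelineConfig,
    ProvisioningMode,
)

config = PipelineConfig(
    gpu=GPUConfig(
        gpu_type=GPUType.A100,                    # assumed enum member name
        provisioning_mode=ProvisioningMode.SPOT,  # assumed enum member name
        max_gpus_per_slide=2,
    ),
    cpu=CPUConfig(provisioning_mode=ProvisioningMode.ON_DEMAND),
)

# Pydantic v2: serialize enums to plain values for JSON-friendly custom metadata.
custom_metadata = {"pipeline": config.model_dump(mode="json")}
```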
```python
from aignostics.platform._sdk_metadata import GPUConfig

with pytest.raises(ValidationError):
    GPUConfig(max_gpus_per_slide=0)  # Must be positive

with pytest.raises(ValidationError):
    GPUConfig(max_gpus_per_slide=-1)  # Must be positive
```
Great Test Coverage: 👍 Proper validation testing for max_gpus_per_slide
This test correctly verifies that:
- Zero is rejected (not positive)
- Negative values are rejected
Suggestion: Consider adding an upper-bound test:
```python
@pytest.mark.unit
def test_max_gpus_per_slide_reasonable_upper_bound() -> None:
    """Test that unreasonably high GPU counts are handled."""
    from aignostics.platform._sdk_metadata import GPUConfig

    # Should accept reasonable values
    config = GPUConfig(max_gpus_per_slide=8)
    assert config.max_gpus_per_slide == 8
    # Warn or document if >8 GPUs is supported
```
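If an upper bound is actually desired, here is a hedged sketch of how it could be enforced at the model level with a Pydantic field constraint; the ceiling of 8 is an assumption, not a documented platform maximum, and `GPUConfigWithCap` is a hypothetical variant:

```python
# Illustrative only: cap max_gpus_per_slide on the model so invalid requests
# fail fast with a ValidationError instead of reaching the platform.
from pydantic import BaseModel, Field


class GPUConfigWithCap(BaseModel):  # hypothetical variant of GPUConfig
    max_gpus_per_slide: int = Field(default=1, ge=1, le=8)  # 8 is an assumed ceiling


GPUConfigWithCap(max_gpus_per_slide=2)   # accepted
GPUConfigWithCap(max_gpus_per_slide=16)  # raises ValidationError
```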

