
chore: audit and fix skipped/xfailed tests across codebase #807

@hartym

Description


Summary

TL;DR: A comprehensive audit of all skipped and xfailed tests in the codebase to determine which need fixing, removal, or documentation. This maintenance task restores confidence in the test suite by systematically reviewing every skipped/xfailed test and taking the appropriate action: fixing broken tests, removing obsolete ones, and properly documenting legitimate environment-dependent skips.

Context

The codebase contains multiple skipped and xfailed tests that have accumulated over time. Some represent actual bugs that need fixing, others are legitimately environment-dependent, and some may simply be obsolete. This technical debt reduces confidence in the test suite and may hide real issues.

Current categories of skipped/xfailed tests found:

  • Cache functionality xfails - Tests failing due to cache key mismatch issues (related to feat(http-client): migrate to hishel 1.x #784)
  • Service override skip - Test skipped pending sequential bind event handling implementation
  • UV integration tests - Entire test class marked as skip, testing UV tool integration
  • Environment-dependent skips - Tests legitimately skipped based on CI mode, git availability
  • Storage dependency xfail - Test expecting storage to be optional but it's actually required
  • Redis janitor skip - Feature not yet implemented for Redis storage
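As background for the categories above, the two pytest mechanisms behave differently: a skipped test never runs at all, while an xfailed test runs and its failure is recorded as expected. A minimal sketch (test names here are illustrative, not from the codebase):

```python
import pytest

# A skipped test is never executed, so its body cannot catch regressions.
@pytest.mark.skip(reason="not applicable in this environment")
def test_skipped():
    raise AssertionError("never runs")

# An xfailed test IS executed; with strict=True, an unexpected pass
# fails the suite, which keeps the xfail honest.
@pytest.mark.xfail(reason="known bug, tracked in an issue", strict=True)
def test_expected_failure():
    assert 1 + 1 == 3  # fails, reported as xfail rather than a failure
```

This is why xfails (which keep running the code under test) are generally preferable to skips for known bugs.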

Test Audit Findings

1. Critical Functionality Issues (xfail)

File: harp_apps/http_cache/tests/test_cache_headers.py

  • Tests: test_cache_miss_header, test_cache_hit_header
  • Reason: Cache key mismatch - keys include backend URL which changes per request in test environment
  • Action: Fix cache key generation logic or test setup

File: tests/integration/test_application_filtering_integration.py

  • Test: test_abuild_with_disabled_app_in_config
  • Reason: Storage is required dependency but test expects it to be optional
  • Action: Fix test expectations or make storage truly optional

2. Missing Implementations (skip)

File: harp_apps/http_cache/tests/test_settings.py

File: harp_apps/janitor/tests/test_worker.py

  • Test: Conditional skip for Redis blob cleanup
  • Reason: Feature not implemented for Redis storage
  • Action: Implement feature or document as known limitation
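If the limitation is documented rather than fixed, a runtime conditional skip keeps the test visible per backend. A sketch only, assuming a parametrized worker test (the real parametrization in harp_apps/janitor/tests/test_worker.py may differ):

```python
import pytest

# Illustrative sketch: backend names and the test name are hypothetical.
@pytest.mark.parametrize("storage_backend", ["sqlalchemy", "redis"])
def test_blob_cleanup(storage_backend):
    if storage_backend == "redis":
        # Skip at runtime with an auditable reason and issue reference.
        pytest.skip("blob cleanup is not implemented for redis storage (see #807)")
    # ... exercise blob cleanup against the supported backend ...
```

A runtime `pytest.skip` keeps the skip reason next to the condition that triggers it, so the audit trail survives refactors.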

3. Obsolete Tests (remove)

File: tests/test_uv_integration.py

  • Class: TestUVIntegration (entire class with @pytest.mark.skip)
  • Reason: UV integration testing no longer a priority
  • Action: Remove entire test file

4. Environment Dependencies (keep as-is)

File: tests/test_version_detection.py

  • Tests: Git version tests skipped in CI mode
  • Reason: Legitimate - CI doesn't have git history
  • Action: Keep skips, ensure skip messages are clear

File: tests/test_cookiecutter_integration.py

  • Tests: Git-dependent tests
  • Reason: Legitimate - git may not be available
  • Action: Keep conditional skips
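For both environment-dependent cases above, reusable `skipif` markers with explicit reasons make the skips self-documenting. A sketch under the assumption that the conditions are "git on PATH" and "not running in CI" (marker and test names are illustrative):

```python
import os
import shutil

import pytest

# Skip when the git executable is not available on PATH.
requires_git = pytest.mark.skipif(
    shutil.which("git") is None,
    reason="git executable not available on PATH",
)

# Skip in CI, where checkouts typically lack full git history.
runs_outside_ci = pytest.mark.skipif(
    os.environ.get("CI") == "true",
    reason="CI checkouts are shallow and lack full git history",
)

@requires_git
@runs_outside_ci
def test_version_from_git_describe():
    # ... would call `git describe` against the repository ...
    pass
```

Named markers like these can live in a shared conftest and be reused wherever the same environment dependency appears.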

Possible Implementation

Phase 1: Documentation (Immediate)

Create tracking document tests/SKIPPED_TESTS.md with:

  • Complete inventory of all skipped/xfailed tests
  • Categorization by reason (environment, missing feature, bug)
  • Priority assignments (P1: broken features, P2: missing features, P3: cleanup)
  • Links to related issues

Phase 2: Quick Wins

  1. Remove tests/test_uv_integration.py entirely
  2. Update skip messages to be more descriptive
  3. Add issue references to xfail decorators
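Item 3 above could look like this: an xfail whose reason carries the tracking issue. A sketch reusing the test name and issue number from the audit findings (decorator arguments are a suggestion, not the current code):

```python
import pytest

# Tie the xfail to its tracking issue so the reason stays auditable.
@pytest.mark.xfail(
    reason="cache key includes the per-test backend URL, see #784",
    strict=False,  # failure is environment-sensitive, so don't enforce it
)
def test_cache_hit_header():
    pass
```

Once #784 is fixed, `strict=True` would turn a lingering xfail into a hard failure, forcing the decorator's removal.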

Phase 3: Fix Critical Issues

  1. Investigate and fix cache key generation for http_cache tests
  2. Review storage dependency requirements and fix test expectations
  3. Assess if service override feature (feat: support service overrides from dependent applications #806) is still planned

Phase 4: Feature Completion

  1. Implement Redis blob cleanup in janitor or document as unsupported
  2. Complete service override implementation if planned

Testing Scenarios

  1. Verification: After changes, run full test suite to ensure no regressions
  2. CI Check: Verify environment-dependent skips work correctly in CI
  3. Coverage: Check if test coverage improves after fixing xfails


Metadata

Labels: Backend, bug