DM-54789: Add Codecov and mypy-coverage#189
Merged
mfisherlevine merged 8 commits into main on May 4, 2026
Conversation
Runs the test suite inside the LSST scipipe ``w_latest`` container on every PR and push to main, then uploads the resulting coverage.xml to Codecov so PRs surface coverage deltas.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
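A minimal sketch of the workflow shape described above. Only the triggers and the coverage.xml upload come from the description; the step names, the container-running helper script, and the action versions are assumptions, not taken from the actual file.

```yaml
name: tests
on:
  push:
    branches: [main]
  pull_request: {}

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests inside the scipipe w_latest container (details assumed)
        run: ./run_tests_in_container.sh   # hypothetical helper that writes coverage.xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: coverage.xml
```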
`getSite()` previously returned the string `"UNKOWN"` (sic) as its fallback when none of the known Rubin sites could be detected. That return value was never intended to be used positively: no caller handled it, several callers pinned a narrow whitelist of sites and failed loudly when the fallback was returned, and a dead-code check in `efdUtils._makeEfdClient` even tested for `"UNKNOWN"` (the correctly-spelled mirror, which the fallback could never produce).

Replace the `"UNKOWN"` fallback with `"local"`, with the new semantics:

- `local` means "none of the known Rubin sites could be detected, this is probably a developer's laptop".
- Callers that need a real site (talking to S3, EFD, a Butler repo) should treat `local` as "no functioning site" and either raise a clear error or skip the operation.
- Callers that only branch on `site == "summit"` / `site == "tucson"` can treat `local` as a transparent no-op fallthrough, which is the existing behaviour of most of them.

Update the docstring to list `local` as a valid return value and explain the intended handling.

Also fix the dead-code check in `efdUtils._makeEfdClient`: it was comparing the return value of `getSite()` against `"UNKNOWN"` (with an N), which could never match the old `"UNKOWN"` (without an N) fallback. The branch now checks for `"local"` and raises a clearer error message that spells out why an EFD client cannot be created off-site.

Other call sites are left unchanged:

- `butlerUtils.makeDefaultButler` already raises `FileNotFoundError` for any site not in its `SUPPORTED_SITES` whitelist, and `"local"` is correctly not in that list.
- `butlerUtils._configureForSite` and `getLatissDefaultCollections` only branch on `"tucson"`, so `"local"` falls through cleanly.
- The four test files in `tests/` that check `site == "jenkins"` are unaffected.

This unblocks running `rubintv_production`'s unit tests on a developer laptop without having to fake a site through env vars.
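A minimal sketch of the new semantics, assuming simplified detection logic. The real `getSite()` consults several site-specific signals; the hostname checks, function names, and error text here are illustrative only.

```python
import os


def get_site() -> str:
    """Simplified stand-in for getSite(): return a known Rubin site name,
    or "local" when none can be detected (probably a developer's laptop)."""
    hostname = os.uname().nodename
    if "summit" in hostname:  # illustrative detection only
        return "summit"
    if "tucson" in hostname:
        return "tucson"
    # None of the known Rubin sites matched: fall back to "local".
    return "local"


def make_efd_client_sketch() -> None:
    """Sketch of the fixed check in _makeEfdClient: callers that need a
    real site treat "local" as "no functioning site" and raise clearly."""
    site = get_site()
    if site == "local":
        raise RuntimeError(
            "Cannot create an EFD client: no Rubin site was detected, so "
            "there is no EFD instance to connect to (running off-site?)"
        )
```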
Written by Claude Code 4.6 Opus (high-effort).
The rubintv_production CI workflow needs a way to flag "we are running under GHA" so that butler / uploader-bound tests can skip themselves the same way they would on a developer laptop, but without conflating CI with ``local`` (the laptop case).

The pattern follows the existing TTS/BTS/SUMMIT/USDF mapping inside the ``RAPID_ANALYSIS_LOCATION`` block: the workflow exports ``RAPID_ANALYSIS_LOCATION=gha`` and getSite() picks it up here. The comparison is case-insensitive because the existing values above are all upper-case shouts while "gha" reads more naturally lower-case in the workflow file; accept either.

Also adds RAPID_ANALYSIS_LOCATION to tests/SConscript's pass-through list so scons-driven test runs see the same env var that getSite() now consults.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two reasons RAPID_ANALYSIS_LOCATION was the wrong signal:

1. Layering. RAPID_ANALYSIS_LOCATION is set (and read for site dispatch) by rubintv_production, which is downstream of summit_utils in the build graph. summit_utils consulting an env var owned by a downstream package is a layering inversion.
2. Coverage. summit_utils' own GHA workflow runs do not set (and should not have to set) RAPID_ANALYSIS_LOCATION, so the previous check would not fire there, yet those runs are exactly the case where ``site == "gha"`` is most useful.

GitHub Actions sets ``GITHUB_ACTIONS=true`` automatically in every workflow run, container or not, regardless of which repo's CI is running. Switching the check to that fixes both problems with a one-line edit.

Also adds GITHUB_ACTIONS (alongside the existing site-detection env vars) to tests/SConscript's passThrough list so scons-driven test runs see the same env var that getSite() now consults. The RAPID_ANALYSIS_LOCATION entry stays because the K8s pod block above still reads it for TTS/BTS/SUMMIT/USDF.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
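The replacement check could be as small as the following (function name hypothetical; that GitHub Actions sets ``GITHUB_ACTIONS=true`` in every workflow run is documented behaviour):

```python
import os


def is_github_actions() -> bool:
    """True when running under GitHub Actions, which exports
    GITHUB_ACTIONS=true automatically in every workflow run."""
    return os.environ.get("GITHUB_ACTIONS") == "true"
```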
Mirrors the pattern just landed in rubintv_production tickets/DM-54789:

- mypy-coverage.yaml runs mfisherlevine/mypy_coverage on every PR and push to main, posting a markdown report as a sticky PR comment via marocchino/sticky-pull-request-comment with ``recreate: true`` so the comment's timestamp updates each push and the PR conversation renders it at the bottom (after every commit). The walled-off "Excluded files" section is dropped (``include-excluded: false`` / ``--no-include-excluded``) because it's noise reviewers don't act on. Complementary to the existing mypy_check.yaml: one is pass/fail; this one quantifies annotation coverage.
- codecov.yml sets ``comment.behavior: new`` so the Codecov bot deletes and recreates its PR comment on every upload, achieving the same "always at the bottom" behaviour.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
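A hedged sketch of the two config fragments described above. The step name and report path are assumptions; ``recreate: true`` and ``comment.behavior: new`` come from the text.

```yaml
# mypy-coverage.yaml (excerpt, step details assumed):
- name: Post mypy coverage report
  uses: marocchino/sticky-pull-request-comment@v2
  with:
    recreate: true        # delete-and-recreate so the comment sorts last
    path: mypy-report.md  # assumed path of the generated markdown report

# codecov.yml:
comment:
  behavior: new           # Codecov bot recreates its comment on every upload
```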
Mirrors the matching change in rubintv_production tickets/DM-54789:
the Codecov upload was authenticating fine but coming back with
``{"message":"Repository not found"}``, almost certainly because
the action's auto-detection was resolving to the pre-rename
lsst-sitcom slug instead of the current lsst-so one. Pass the slug
explicitly via ``${{ github.repository }}`` so the lookup is
unambiguous.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
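The fix described might look like this in the upload step (action version and surrounding keys assumed; only the explicit slug comes from the commit message):

```yaml
- uses: codecov/codecov-action@v4   # version assumed
  with:
    slug: ${{ github.repository }}  # pass the current slug explicitly so the
                                    # lookup cannot resolve a pre-rename one
    files: coverage.xml
```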
The cassettes were recorded with HTTPS_PROXY active at USDF, baking sdfproxy.sdf.slac.stanford.edu:3128 into every URI. Outside USDF (and seemingly now also at USDF, after some library or NO_PROXY drift) the host/port matchers reject every replay. The recorded paths uniquely identify each service, so matching on method/path/query/body is enough and makes replay independent of proxy state.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mypy-coverage report: ❌ 165 unannotated, 25 partial definition(s).
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - We've got you covered ☂️