fix(tests): fix integration test filter with dynamic job determination #589
Description
Claude Code helped me with this refactor and the explanation below. I had never used the GitHub `actions` folder before, but it seems like a nice way to organize the logic and keep the workflow file from blowing up. I also opted to put the logic into `action.yml` rather than elevating it to a separate shell script, since it is only relevant inside GitHub Actions and wouldn't be used outside that environment.
Problem
The GitHub Actions workflow filtering logic was causing integration tests to be skipped incorrectly when using the `filter` input parameter with `workflow_dispatch`.

Root Cause
The workflow used simple substring matching with `contains()` to determine whether jobs should run; a sketch of the old condition appears below. This approach had a critical issue: test function filters were broken. When passing pytest test filters like `"test_completion"` or `"test_streaming"`, the jobs would be skipped entirely because these strings don't exist in the provider list. The jobs would skip before pytest ever had a chance to run.

The workflow conflated two different types of filtering:

- Job-level filtering: which test jobs should run, based on provider names
- Test-level filtering: which tests pytest should select within a job, based on test function names
Example Failure Scenario
Filter: `test_completion`
Expected: both test jobs run, pytest filters to only completion tests
Actual: both jobs skip, because `contains("anthropic,bedrock,...,minimax", "test_completion")` returns `false`
Solution

Created a reusable composite action (`.github/actions/determine-jobs`) that analyzes the filter to determine which test jobs should run; a sketch of the approach follows this list.

- Provider matching: checks whether the filter contains any provider name from either the integration or local provider lists
- Smart fallback: if the filter doesn't match any provider name, assumes it's a test function filter and runs all relevant jobs, letting pytest handle the filtering
  - `"test_completion"` → runs both jobs, pytest filters the tests ✅
  - `"test_streaming"` → runs both jobs, pytest filters the tests ✅
- Provider-specific optimization: when a provider is explicitly named, only the relevant job runs
  - `"minimax"` → only runs the integration tests job ✅
  - `"ollama"` → only runs the local tests job ✅
Testing Scenarios

Filters exercised:

- `minimax`
- `ollama`
- `test_completion`
- `test_streaming and minimax`
- `llama`
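Since the matching logic is plain bash, these scenarios can also be sanity-checked locally. A hypothetical check that mirrors the shell in the sketch above:

```bash
# Hypothetical local check mirroring the action's matching logic.
integration='anthropic bedrock minimax'
local_providers='ollama llama'
for filter in 'minimax' 'ollama' 'test_completion' 'test_streaming and minimax' 'llama'; do
  ri=false; rl=false
  for p in $integration;     do [[ "$filter" == *"$p"* ]] && ri=true; done
  for p in $local_providers; do [[ "$filter" == *"$p"* ]] && rl=true; done
  [[ $ri == false && $rl == false ]] && { ri=true; rl=true; }  # smart fallback
  printf '%-28s integration=%-6s local=%s\n' "$filter" "$ri" "$rl"
done
```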
Changes

- `.github/actions/determine-jobs/action.yml` - new composite action with the smart filter analysis logic
- Updated the `determine-jobs-to-run` job to use the new composite action
- `run-integration-tests` job condition left unchanged (already correctly configured)
- `run-local-integration-tests` job condition left unchanged (already correctly configured)
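On the workflow side, the `determine-jobs-to-run` job exposes the action's outputs and the test jobs gate on them, roughly like this (job and output names carried over from the sketch above, so still assumptions):

```yaml
# Illustrative wiring in the workflow file, matching the sketch above.
jobs:
  determine-jobs-to-run:
    runs-on: ubuntu-latest
    outputs:
      run-integration: ${{ steps.determine.outputs.run-integration }}
      run-local: ${{ steps.determine.outputs.run-local }}
    steps:
      - uses: actions/checkout@v4   # local actions need the repo checked out
      - id: determine
        uses: ./.github/actions/determine-jobs
        with:
          filter: ${{ inputs.filter }}

  run-integration-tests:
    needs: determine-jobs-to-run
    # Job outputs are strings, hence the comparison against 'true'.
    if: needs.determine-jobs-to-run.outputs.run-integration == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "pytest runs here; -k applies the filter"
```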
Benefits

- Test function filters like `test_completion` now run the tests instead of silently skipping the jobs
- Provider-specific filters still run only the relevant job
- The filter analysis logic lives in one reusable composite action, keeping the workflow file small

PR Type

🐛 Bug Fix

Relevant issues

Checklist