
Extract BigQuery into DataProvider interface for Component Readiness#3438

Merged
openshift-merge-bot[bot] merged 3 commits into openshift:main from stbenjam:extract-bq-provider
Apr 16, 2026

Conversation

@stbenjam
Member

@stbenjam stbenjam commented Apr 15, 2026

Introduces a DataProvider abstraction layer that decouples Component Readiness from direct BigQuery access. BigQuery query logic moves from pkg/api/componentreadiness into pkg/api/componentreadiness/dataprovider/bigquery, behind a clean interface defined in dataprovider/interface.go. All callers (server, metrics, jira automator, job annotator, cache loader) now use the DataProvider interface instead of raw BigQuery clients.

The DataProvider interface (pkg/api/componentreadiness/dataprovider/interface.go) is probably not how I would have designed it from scratch -- base and sample have different semantics because of cache optimizations we've made around where results get filtered -- but overall I think this is a positive change.

I have proven in another branch that the interface is sufficient to build a Postgres backend on, but I am extracting these changes into multiple PRs to reduce the review surface and the impact once merged.

Summary by CodeRabbit

  • New Features

    • Added --data-provider CLI flag to select the component-readiness data backend (default: bigquery).
  • Refactor

    • Replaced BigQuery-specific plumbing with a pluggable data provider used across reports, server, metrics, Jira automation, and loaders — enabling alternate backends and simplifying configuration.

Introduces a DataProvider abstraction layer that decouples Component
Readiness from direct BigQuery access. BigQuery query logic moves from
pkg/api/componentreadiness into pkg/api/componentreadiness/dataprovider/bigquery,
behind a clean interface defined in dataprovider/interface.go. All callers
(server, metrics, jira automator, job annotator, cache loader) now use
the DataProvider interface instead of raw BigQuery clients.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@stbenjam
Member Author

/test e2e

@openshift-ci openshift-ci bot requested review from deads2k and petr-muller April 15, 2026 19:16
@coderabbitai
Contributor

coderabbitai bot commented Apr 15, 2026

Walkthrough

Introduces a dataprovider abstraction for Component Readiness, a BigQuery-backed provider implementation, and backend-agnostic crstatus types; rewires callers (server, CLI, metrics, loaders, middleware, regressions, Jira automator, tests) to use the provider instead of direct BigQuery clients and BQ-specific types.

Changes

  • Data Provider & Interface — pkg/api/componentreadiness/dataprovider/interface.go: Adds DataProvider and constituent query interfaces (TestStatusQuerier, TestDetailsQuerier, MetadataQuerier, JobQuerier) plus the JobRunStats type.
  • BigQuery Provider Implementation — pkg/api/componentreadiness/dataprovider/bigquery/provider.go, .../dataprovider/bigquery/querygenerators.go, .../dataprovider/bigquery/releasedates.go: New BigQuery-backed provider implementation and SQL query generator/executor modules; provides methods for status, job-run, metadata, release-date, variant, and job-run-stats queries; includes override-merging and error-aggregation logic.
  • CR Status Types — pkg/apis/api/componentreport/crstatus/types.go, pkg/apis/api/componentreport/testdetails/types.go: Adds crstatus DTOs (ReportTestStatus, TestStatus, TestJobRunStatuses, TestJobRunRows, Variant, JobVariant); changes JobRunStats.StartTime to time.Time.
  • Component Readiness Core — pkg/api/componentreadiness/component_report.go, pkg/api/componentreadiness/test_details.go, pkg/api/componentreadiness/regressiontracker.go, pkg/api/componentreadiness/utils/utils.go: Replaces BigQuery client usage with DataProvider calls, updates function signatures, delegates query logic to the provider, and migrates types from bq.* to crstatus.*.
  • Middleware — pkg/api/componentreadiness/middleware/... (interface.go, linkinjector.go, list.go, regressionallowances.go, regressiontracker.go, releasefallback/releasefallback.go): Updates middleware signatures to use crstatus types; ReleaseFallback now accepts a DataProvider and delegates queries to it; removes BigQuery-specific caching/query plumbing.
  • Server & Metrics Integration — pkg/sippyserver/server.go, pkg/sippyserver/metrics/metrics.go: Adds crDataProvider to server and metrics refresh paths; prefetches variants/releases via the provider; component-readiness metrics now use the provider when available.
  • CLI / Commands / Loaders — cmd/sippy/*.go, pkg/dataloader/crcacheloader/crcacheloader.go, cmd/sippy/serve.go: Wires bqprovider.NewBigQueryProvider(...) where appropriate, adds the --data-provider server flag, and passes the provider into regressions, loaders, the Jira automator, and server initialization.
  • Jira Automator & Annotator — pkg/componentreadiness/jiraautomator/jiraautomator.go, cmd/sippy/automatejira.go, cmd/sippy/annotatejobruns.go, pkg/componentreadiness/jobrunannotator/jobrunannotator.go: The Jira automator now accepts a DataProvider; the job-run annotator and CLI change to crstatus.Variant types and call provider-backed report generation.
  • Tests & Test Helpers — pkg/api/componentreadiness/..._test.go, pkg/api/componentreadiness/dataprovider/bigquery/provider_test.go, pkg/api/componentreadiness/dataprovider/bigquery/querygenerators_test.go, pkg/api/componentreadiness/middleware/releasefallback/releasefallback_test.go: Adds provider-specific tests; updates existing tests to use crstatus types; moves some tests to the provider package.
  • Miscellaneous — pkg/bigquery/bqlabel/labels.go, other import/typedef tweaks across the codebase: Adds the CRViewJobs label constant; updates imports and small type/field changes across many files to adopt the provider and crstatus replacements.

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Server
  participant DataProvider
  participant BigQuery
  participant Cache
  Client->>Server: HTTP/CLI request (component report / test details / metrics)
  Server->>DataProvider: QueryJobVariants / QueryReleases / QueryBaseTestStatus...
  DataProvider->>Cache: Get cached results (if present)
  alt cache miss
    DataProvider->>BigQuery: Execute SQL via query generators
    BigQuery-->>DataProvider: Rows / results
    DataProvider->>Cache: Store generated results
  end
  DataProvider-->>Server: Aggregated crstatus types
  Server-->>Client: JSON response
  Note over Server,DataProvider: Metrics refresh and Jira automator use same flow asynchronously

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)

  • Sql Injection Prevention — ❌ Error: The implementation uses fmt.Sprintf to construct SQL queries with dynamic values interpolated directly, violating SQL injection prevention principles. Resolution: refactor query construction to eliminate fmt.Sprintf for SQL building and use only BigQuery's QueryParameter mechanism with allowlist validation.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 33.87%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (12 passed)
  • Title check — ✅ Passed: The title 'Extract BigQuery into DataProvider interface for Component Readiness' clearly and accurately summarizes the main change: introducing a DataProvider abstraction to decouple Component Readiness from direct BigQuery access.
  • Go Error Handling — ✅ Passed: The PR implements proper Go error handling patterns with error accumulation in slices and correct return types.
  • Excessive Css In React Should Use Styles — ✅ Passed: This custom check for React CSS styling patterns is not applicable; the PR is a Go backend refactoring that introduces a DataProvider abstraction layer for BigQuery integration in Component Readiness.
  • Single Responsibility And Clear Naming — ✅ Passed: The PR demonstrates good adherence to Single Responsibility and Clear Naming principles with focused, cohesive interfaces and specific, action-oriented method names.
  • Stable And Deterministic Test Names — ✅ Passed: This custom check for stable and deterministic Ginkgo test names is not applicable. The PR modifies standard Go unit tests using the testing.T package with t.Run() subtests (e.g., in provider_test.go), not Ginkgo's behavioral testing DSL. The test names examined (such as "No overrides, no variants removed" and "Single override removes matching variant") are all static, descriptive strings without dynamic content, timestamps, UUIDs, or other variable information.
  • Test Structure And Quality — ✅ Passed: This custom check for Ginkgo test code quality is not applicable; the PR introduces and modifies standard Go unit tests using the testing package, not Ginkgo BDD tests.
  • Microshift Test Compatibility — ✅ Passed: This PR does not add any Ginkgo e2e tests; it only modifies unit tests using Go's standard testing framework.
  • Single Node Openshift (Sno) Test Compatibility — ✅ Passed: This PR does not introduce any new Ginkgo e2e tests. All test files use the standard Go testing package, not Ginkgo patterns.
  • Topology-Aware Scheduling Compatibility — ✅ Passed: Go source code refactoring in cmd/ and pkg/ directories extracting BigQuery query logic into a DataProvider interface abstraction without introducing scheduling constraints.
  • Ote Binary Stdout Contract — ✅ Passed: The PR introduces the DataProvider abstraction without adding process-level stdout writes or Ginkgo suite setup code that could corrupt the OTE binary contract.
  • Ipv6 And Disconnected Network Test Compatibility — ✅ Passed: This pull request does not add any new Ginkgo e2e tests. All test files are standard Go unit tests using the testing.T framework, not Ginkgo patterns.
  • Description Check — ✅ Passed: Check skipped — CodeRabbit's high-level summary is enabled.

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 15, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
cmd/sippy/automatejira.go (1)

168-174: ⚠️ Potential issue | 🔴 Critical

Guard config before dereferencing overrides.

Line 170 treats config load failure as non-fatal, but Line 173 dereferences config.ComponentReadinessConfig... unconditionally. A missing or unreadable config file now turns that warning into a panic before Jira automation starts.

🛠️ Proposed fix
-			provider := bqprovider.NewBigQueryProvider(bigQueryClient, config.ComponentReadinessConfig.VariantJunitTableOverrides)
+			provider := bqprovider.NewBigQueryProvider(bigQueryClient, nil)
+			if config != nil {
+				provider = bqprovider.NewBigQueryProvider(bigQueryClient, config.ComponentReadinessConfig.VariantJunitTableOverrides)
+			}
 			allVariants, errs := componentreadiness.GetJobVariants(ctx, provider)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/sippy/automatejira.go` around lines 168 - 174, The code calls
f.ConfigFlags.GetConfig and logs an error but then unconditionally dereferences
config.ComponentReadinessConfig when constructing
bqprovider.NewBigQueryProvider, which can panic if config is nil; update the
logic in the function containing these lines so that after GetConfig fails you
either return the error (stop startup) or create/assign a safe default config
before using config.ComponentReadinessConfig; specifically guard the config
variable before passing
config.ComponentReadinessConfig.VariantJunitTableOverrides into
bqprovider.NewBigQueryProvider and ensure componentreadiness.GetJobVariants
receives a valid provider only when config is non-nil (or the default is
applied).
pkg/api/componentreadiness/middleware/releasefallback/releasefallback.go (1)

195-200: ⚠️ Potential issue | 🟠 Major

Treat empty error slices as success in both release-date lookups.

Both checks use errs != nil. A provider that returns []error{} on success will bail out here before any fallback data is loaded.

Suggested fix
 	timeRanges, errs := r.dataProvider.QueryReleaseDates(ctx, r.reqOptions)
 	for _, err := range errs {
 		errCh <- err
 	}
-	if errs != nil {
+	if len(errs) > 0 {
 		return
 	}
 	timeRanges, errs := f.dataProvider.QueryReleaseDates(ctx, f.ReqOptions)
 
-	if errs != nil {
+	if len(errs) > 0 {
 		return nil, errs
 	}

Also applies to: 362-366

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/api/componentreadiness/middleware/releasefallback/releasefallback.go`
around lines 195 - 200, The loop over errors from
r.dataProvider.QueryReleaseDates uses "errs != nil" to decide failure, which
treats empty slices as errors; change the check to test length (len(errs) > 0)
so only non-empty error slices cause an early return, and do the same fix for
the second lookup block around the code referenced at 362-366; ensure you still
send each err into errCh before returning when len(errs) > 0 and keep using the
existing symbols (timeRanges, errs, r.dataProvider.QueryReleaseDates, errCh,
r.reqOptions) so behavior is preserved for true errors.
pkg/sippyserver/server.go (1)

408-416: ⚠️ Potential issue | 🟠 Major

Don't advertise full component-readiness capability from crDataProvider alone.

This now enables ComponentReadinessCapability for provider-only deployments, but handlers like jsonPullRequestTestResults, jsonTestRunsAndOutputsFromBigQuery, jsonJobRunPayload, jsonTestCapabilitiesFromDB, and jsonTestLifecyclesFromDB still hard-require s.bigQueryClient. Those endpoints will show up in /api and then fail at runtime.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sippyserver/server.go` around lines 408 - 416, The capability detection
in determineCapabilities currently adds ComponentReadinessCapability when either
s.bigQueryClient or s.crDataProvider is present, which wrongly advertises
endpoints that actually require BigQuery; update determineCapabilities to only
append ComponentReadinessCapability when s.bigQueryClient != nil (or otherwise
ensure s.bigQueryClient is present alongside s.crDataProvider) so handlers like
jsonPullRequestTestResults, jsonTestRunsAndOutputsFromBigQuery,
jsonJobRunPayload, jsonTestCapabilitiesFromDB and jsonTestLifecyclesFromDB are
not advertised unless s.bigQueryClient is available.
pkg/api/componentreadiness/component_report.go (1)

337-405: ⚠️ Potential issue | 🟠 Major

Remove the unused baseStatusCh.

Line 337 creates baseStatusCh and line 348 passes it into c.middlewares.Query, but nothing reads from that channel. The TODO comment confirms it is intentionally unused. None of the middleware implementations (LinkInjector, RegressionTracker) send to it—they only use errCh. The channel is created, passed, and closed, but never consumed. Remove it from the channel creation, the Query call signature, and the cleanup goroutine to eliminate dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/api/componentreadiness/component_report.go` around lines 337 - 405,
Remove the unused baseStatusCh and its related operations: stop creating
baseStatusCh, remove it from the c.middlewares.Query call, and stop closing it
in the cleanup goroutine (remove close(baseStatusCh)). Search for the symbol
baseStatusCh and the call site c.middlewares.Query(...) in component_report.go
to update the Query invocation signature accordingly and remove the now-dead
channel cleanup that references close(baseStatusCh).
🧹 Nitpick comments (3)
pkg/dataloader/crcacheloader/crcacheloader.go (1)

203-203: Consider using l.dataProvider.Cache() for consistency.

The code uses l.bqClient.Cache directly for cache operations, but the DataProvider interface provides a Cache() method. Using l.dataProvider.Cache() would be more consistent with the abstraction pattern introduced in this PR.

♻️ Suggested change
-		api.CacheSet(ctx, l.bqClient.Cache, report, cacheKey, cacheDuration)
+		api.CacheSet(ctx, l.dataProvider.Cache(), report, cacheKey, cacheDuration)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/dataloader/crcacheloader/crcacheloader.go` at line 203, The call to
api.CacheSet uses the concrete l.bqClient.Cache instead of the DataProvider
abstraction; change the cache argument to use l.dataProvider.Cache() so cache
operations go through the DataProvider interface (update the invocation of
api.CacheSet(ctx, l.bqClient.Cache, report, cacheKey, cacheDuration) to pass
l.dataProvider.Cache() instead), ensuring consistency with the DataProvider
abstraction in crcacheloader.go and related methods that expect
DataProvider-provided cache.
pkg/apis/api/componentreport/crstatus/types.go (1)

10-14: Move package documentation before the package declaration.

The package-level documentation comment (lines 10-13) is placed after the imports, but Go conventions place package documentation immediately before the package keyword so that go doc and documentation tools render it correctly.

📝 Suggested fix
+// Package crstatus contains data-transfer types used between the data layer and
+// the Component Readiness analysis pipeline. These types were originally in the
+// bq package but are backend-agnostic — any data provider (BigQuery, PostgreSQL,
+// mock) populates them identically.
 package crstatus
 
 import (
 	"math/big"
 	"time"
 
 	"github.com/openshift/sippy/pkg/apis/api/componentreport/crtest"
 )
-
-// Package crstatus contains data-transfer types used between the data layer and
-// the Component Readiness analysis pipeline. These types were originally in the
-// bq package but are backend-agnostic — any data provider (BigQuery, PostgreSQL,
-// mock) populates them identically.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/apis/api/componentreport/crstatus/types.go` around lines 10 - 14, The
package-level documentation block for the crstatus types should be moved so it
immediately precedes the package declaration; relocate the current comment block
(the explanatory lines about "Package crstatus contains data-transfer types...")
to sit directly above the `package crstatus` line in types.go so that `go doc`
and other tooling pick it up as the package comment.
pkg/componentreadiness/jiraautomator/jiraautomator.go (1)

71-105: Remove the constructor's unconditional BigQuery gate.

The report path now goes through provider, but Lines 100-105 still reject any caller without a bqClient. That keeps JiraAutomator unusable with a non-BigQuery DataProvider, which is exactly the abstraction this PR is introducing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/componentreadiness/jiraautomator/jiraautomator.go` around lines 71 - 105,
The NewJiraAutomator constructor currently rejects callers with a nil BigQuery
client via the checks on bqClient/BQ and bqClient.Cache; remove those
unconditional guards (the if blocks referencing bqClient == nil || bqClient.BQ
== nil and bqClient.Cache == nil) so JiraAutomator can be constructed when the
DataProvider path is used; ensure NewJiraAutomator simply assigns bqClient into
the JiraAutomator struct and defer any BigQuery-specific validation to the code
paths that actually call bqClient (or add guarded nil checks where bqClient is
used) so the provider-based report flow works without requiring a BigQuery
client upfront.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/api/componentreadiness/dataprovider/bigquery/provider.go`:
- Around line 615-624: The function shouldSkipVariant currently returns false as
soon as it hits the current override index, which stops scanning later
overrides; change the logic in shouldSkipVariant to skip the comparison for i ==
currOverride (use continue/skip that iteration) instead of returning, so the
loop can still detect and return true if any later override
(override.VariantName == key && override.VariantValue == value) matches; keep
the final return false if no matches are found.

In `@pkg/api/componentreadiness/dataprovider/bigquery/querygenerators.go`:
- Around line 664-676: The loop wrongly reads from
c.VariantOption.IncludeVariants instead of using the passed includeVariants map;
update the body to use includeVariants[key] when building the filter so the
caller's overridden include-variants are respected. Specifically, inside the
loop that iterates sortedKeys(includeVariants) and checks
testIDOption.RequestedVariants and c.VariantOption.VariantCrossCompare, replace
the reference to c.VariantOption.IncludeVariants[key] with includeVariants[key]
(still use param.Cleanse for group and FormatStringSliceForBigQuery to format
the slice) so the constructed query uses the provided map.
- Around line 1088-1091: The code directly type-asserts row[i] for the
"jira_component" and "jira_component_id" cases (setting cts.JiraComponent and
cts.JiraComponentID) which will panic if BigQuery returns NULL; update the
switch handling in the function that iterates columns (the block with cases
"jira_component" and "jira_component_id") to first check for nil (e.g., if
row[i] == nil) and only then perform the type assertion, otherwise leave
cts.JiraComponent empty/zero-value and cts.JiraComponentID nil; use safe type
assertions (value, ok) or nil checks to avoid panics and ensure downstream
optionality is preserved.
- Around line 270-313: The dedup partition currently groups by file_path,
test_name, testsuite which collapses identical tests across different runs;
update the ROW_NUMBER PARTITION BY in the dedupedCTE (variable dedupedCTE) to
include the job run identifier (e.g., junit.prowjob_build_id or
prowjob_build_id) so deduping happens per job run: change PARTITION BY
file_path, test_name, testsuite to PARTITION BY prowjob_build_id, file_path,
test_name, testsuite (keep the ORDER BY logic and rest of the SELECT the same).

In `@pkg/api/componentreadiness/dataprovider/bigquery/releasedates.go`:
- Around line 14-20: The cache key for GetReleaseDatesFromBigQuery is a global
singleton ("CRReleaseDates~") but QueryReleaseDates (on releaseDateQuerier) uses
reqOptions.CacheOption.CRTimeRoundingFactor to compute Start, so different
rounding factors can collide; update the cache spec key created in
GetReleaseDatesFromBigQuery (api.NewCacheSpec call) to include the
CRTimeRoundingFactor (and any other relevant reqOptions.CacheOption fields) so
the cache key is unique per rounding factor, e.g. append or interpolate
reqOptions.CacheOption.CRTimeRoundingFactor into the "CRReleaseDates~" key
generation before calling api.NewCacheSpec.

In `@pkg/api/componentreadiness/test_details.go`:
- Around line 199-202: The error handling currently checks errs != nil after
calling c.dataProvider.QueryReleaseDates in test_details.go; because
QueryReleaseDates returns a []error, update the branch to check len(errs) > 0
(or len(errs) != 0) instead of errs != nil so empty slices don't trigger an
early return; locate the call to QueryReleaseDates and the variable errs and
change the condition before returning testdetails.Report{} to use the length
check.

In `@pkg/sippyserver/metrics/metrics.go`:
- Around line 122-129: The code only fills the releases slice when crProvider !=
nil, causing job/payload/disruption metrics (which use releases) to be empty if
component readiness is disabled; change the logic so releases is always
populated: call crProvider.QueryReleases(ctx) when crProvider is non-nil,
otherwise call a fallback release loader (e.g., your cluster/cached release
fetch function) to populate the same releases slice, and propagate/handle errors
the same way; update the block around the releases variable (the releases slice,
crProvider, and QueryReleases(ctx)) to ensure releases is non-nil before
downstream metric functions use it.

---

Outside diff comments:
In `@cmd/sippy/automatejira.go`:
- Around line 168-174: The code calls f.ConfigFlags.GetConfig and logs an error
but then unconditionally dereferences config.ComponentReadinessConfig when
constructing bqprovider.NewBigQueryProvider, which can panic if config is nil;
update the logic in the function containing these lines so that after GetConfig
fails you either return the error (stop startup) or create/assign a safe default
config before using config.ComponentReadinessConfig; specifically guard the
config variable before passing
config.ComponentReadinessConfig.VariantJunitTableOverrides into
bqprovider.NewBigQueryProvider and ensure componentreadiness.GetJobVariants
receives a valid provider only when config is non-nil (or the default is
applied).

In `@pkg/api/componentreadiness/component_report.go`:
- Around line 337-405: Remove the unused baseStatusCh and its related
operations: stop creating baseStatusCh, remove it from the c.middlewares.Query
call, and stop closing it in the cleanup goroutine (remove close(baseStatusCh)).
Search for the symbol baseStatusCh and the call site c.middlewares.Query(...) in
component_report.go to update the Query invocation signature accordingly and
remove the now-dead channel cleanup that references close(baseStatusCh).

In `@pkg/api/componentreadiness/middleware/releasefallback/releasefallback.go`:
- Around line 195-200: The loop over errors from
r.dataProvider.QueryReleaseDates uses "errs != nil" to decide failure, which
treats empty slices as errors; change the check to test length (len(errs) > 0)
so only non-empty error slices cause an early return, and do the same fix for
the second lookup block around the code referenced at 362-366; ensure you still
send each err into errCh before returning when len(errs) > 0 and keep using the
existing symbols (timeRanges, errs, r.dataProvider.QueryReleaseDates, errCh,
r.reqOptions) so behavior is preserved for true errors.

In `@pkg/sippyserver/server.go`:
- Around line 408-416: The capability detection in determineCapabilities
currently adds ComponentReadinessCapability when either s.bigQueryClient or
s.crDataProvider is present, which wrongly advertises endpoints that actually
require BigQuery; update determineCapabilities to only append
ComponentReadinessCapability when s.bigQueryClient != nil (or otherwise ensure
s.bigQueryClient is present alongside s.crDataProvider) so handlers like
jsonPullRequestTestResults, jsonTestRunsAndOutputsFromBigQuery,
jsonJobRunPayload, jsonTestCapabilitiesFromDB and jsonTestLifecyclesFromDB are
not advertised unless s.bigQueryClient is available.

---

Nitpick comments:
In `@pkg/apis/api/componentreport/crstatus/types.go`:
- Around line 10-14: The package-level documentation block for the crstatus
types should be moved so it immediately precedes the package declaration;
relocate the current comment block (the explanatory lines about "Package
crstatus contains data-transfer types...") to sit directly above the `package
crstatus` line in types.go so that `go doc` and other tooling pick it up as the
package comment.

In `@pkg/componentreadiness/jiraautomator/jiraautomator.go`:
- Around line 71-105: The NewJiraAutomator constructor currently rejects callers
with a nil BigQuery client via the checks on bqClient/BQ and bqClient.Cache;
remove those unconditional guards (the if blocks referencing bqClient == nil ||
bqClient.BQ == nil and bqClient.Cache == nil) so JiraAutomator can be
constructed when the DataProvider path is used; ensure NewJiraAutomator simply
assigns bqClient into the JiraAutomator struct and defer any BigQuery-specific
validation to the code paths that actually call bqClient (or add guarded nil
checks where bqClient is used) so the provider-based report flow works without
requiring a BigQuery client upfront.

In `@pkg/dataloader/crcacheloader/crcacheloader.go`:
- Line 203: The call to api.CacheSet uses the concrete l.bqClient.Cache instead
of the DataProvider abstraction; change the cache argument to use
l.dataProvider.Cache() so cache operations go through the DataProvider interface
(update the invocation of api.CacheSet(ctx, l.bqClient.Cache, report, cacheKey,
cacheDuration) to pass l.dataProvider.Cache() instead), ensuring consistency
with the DataProvider abstraction in crcacheloader.go and related methods that
expect DataProvider-provided cache.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 8b12e567-2f09-4d8a-a823-7326837b3fbd

📥 Commits

Reviewing files that changed from the base of the PR and between e274adc and e80be6b.

📒 Files selected for processing (27)
  • cmd/sippy/annotatejobruns.go
  • cmd/sippy/automatejira.go
  • cmd/sippy/component_readiness.go
  • cmd/sippy/load.go
  • cmd/sippy/serve.go
  • pkg/api/componentreadiness/component_report.go
  • pkg/api/componentreadiness/dataprovider/bigquery/provider.go
  • pkg/api/componentreadiness/dataprovider/bigquery/querygenerators.go
  • pkg/api/componentreadiness/dataprovider/bigquery/releasedates.go
  • pkg/api/componentreadiness/dataprovider/interface.go
  • pkg/api/componentreadiness/middleware/interface.go
  • pkg/api/componentreadiness/middleware/linkinjector/linkinjector.go
  • pkg/api/componentreadiness/middleware/list.go
  • pkg/api/componentreadiness/middleware/regressionallowances/regressionallowances.go
  • pkg/api/componentreadiness/middleware/regressiontracker/regressiontracker.go
  • pkg/api/componentreadiness/middleware/releasefallback/releasefallback.go
  • pkg/api/componentreadiness/regressiontracker.go
  • pkg/api/componentreadiness/test_details.go
  • pkg/api/componentreadiness/utils/utils.go
  • pkg/apis/api/componentreport/crstatus/types.go
  • pkg/apis/api/componentreport/testdetails/types.go
  • pkg/bigquery/bqlabel/labels.go
  • pkg/componentreadiness/jiraautomator/jiraautomator.go
  • pkg/componentreadiness/jobrunannotator/jobrunannotator.go
  • pkg/dataloader/crcacheloader/crcacheloader.go
  • pkg/sippyserver/metrics/metrics.go
  • pkg/sippyserver/server.go

@stbenjam
Member Author

/test e2e

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@neisw
Contributor

neisw commented Apr 16, 2026

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 16, 2026
@openshift-ci
Contributor

openshift-ci bot commented Apr 16, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: neisw, stbenjam

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci bot commented Apr 16, 2026

@stbenjam: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 1728590 into openshift:main Apr 16, 2026
8 checks passed
