TRT-1989: add benchmarking db query tests #3533

Open
neisw wants to merge 2 commits into openshift:main from neisw:trt-1989-benchmarking

Conversation

@neisw
Contributor

@neisw neisw commented May 15, 2026

Looking to establish a baseline set of tests we can use to measure improvement.

=== RUN   Test_BenchmarkIndividual
=== RUN   Test_BenchmarkIndividual/FindTestsByRelease
time="2026-05-15T10:01:21-04:00" level=info msg="Found 20 tests matching pattern for release 4.22"
  FindTestsByRelease iteration 1: 30m5.740776388s
  FindTestsByRelease total: 30m5.740776388s, avg: 30m5.740776388s (1 iterations)
--- PASS: Test_BenchmarkIndividual/FindTestsByRelease (1805.74s)
=== RUN   Test_BenchmarkIndividual/TestDurations
time="2026-05-15T10:01:54-04:00" level=info msg="Found 15 test durations"
  TestDurations iteration 1: 33.027261411s
  TestDurations total: 33.027261411s, avg: 33.027261411s (1 iterations)
--- PASS: Test_BenchmarkIndividual/TestDurations (33.03s)
=== RUN   Test_BenchmarkIndividual/TestOutputs
time="2026-05-15T10:02:05-04:00" level=info msg="Found 10 test outputs"
  TestOutputs iteration 1: 10.988421519s
  TestOutputs total: 10.988421519s, avg: 10.988421519s (1 iterations)
--- PASS: Test_BenchmarkIndividual/TestOutputs (10.99s)
=== RUN   Test_BenchmarkIndividual/JobDetails
time="2026-05-15T10:02:06-04:00" level=info msg="loaded ProwJobRuns from db" prowJobRuns=0 since="2026-05-01 10:02:05.931420503 -0400 EDT m=-1207741.435266770"
time="2026-05-15T10:02:06-04:00" level=info msg="Found 0 job runs"
  JobDetails iteration 1: 135.474547ms
  JobDetails total: 135.474547ms, avg: 135.474547ms (1 iterations)
--- PASS: Test_BenchmarkIndividual/JobDetails (0.14s)
--- PASS: Test_BenchmarkIndividual (1858.68s)
PASS

Summary by CodeRabbit

  • Refactor
    • Job-details reporting query logic restructured; report now loads recent failed test results and surfaces query errors for clearer reporting.
  • Tests
    • Added database benchmarking tests to measure and track query and analysis performance across representative workloads.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label May 15, 2026
@openshift-ci-robot

openshift-ci-robot commented May 15, 2026

@neisw: This pull request references TRT-1989 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.


In response to this:

Looking to establish a baseline set of tests we can use to measure improvement


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai
Contributor

coderabbitai Bot commented May 15, 2026

Walkthrough

This PR extracts ProwJobRun query logic into a new exported JobDetailsReport helper, updates PrintJobDetailsReportFromDB to use it, and adds a Postgres benchmarking test suite exercising several DB queries and analysis cases.

Changes

DB Query Refactoring and Performance Testing

Layer / File(s) | Summary
JobDetailsReport helper extraction (pkg/api/jobs.go) | Introduces exported JobDetailsReport that runs a 14-day ProwJobRun query filtered by job name substring and release, preloading failed test results; updates PrintJobDetailsReportFromDB to call this helper.
Benchmark scaffolding and cases (pkg/flags/postgres_benchmarking_test.go) | Adds benchmark identifiers and the benchmarkCase type; implements the FindTestsByRelease raw SQL case and multiple analysis/workload cases (TestDurations, TestOutputs, JobDetails, TestAnalysis*).
Benchmark DB client and test runners (pkg/flags/postgres_benchmarking_test.go) | Adds getBenchmarkDBClient to obtain a DB client from db_benchmarking_dsn, plus three test entrypoints (Test_BenchmarkIndividual, Test_BenchmarkFindTestsByRelease, and Test_BenchmarkGroup) that run cases with timing and reporting.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 14 | ❌ 3

❌ Failed checks (3 warnings)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
Go Error Handling | ⚠️ Warning | Errors are not wrapped with fmt.Errorf and %w; nil checks are missing before dereferencing pointer fields; the test helper leaks a DB client. | Wrap errors with fmt.Errorf, add nil pointer checks, and register t.Cleanup for DB client closure per review.
Test Coverage For New Features | ⚠️ Warning | The new exported function JobDetailsReport lacks dedicated unit tests; it is only exercised via integration/benchmarking tests that require a live database. | Add unit tests for JobDetailsReport in jobs_test.go with mocked DB dependencies to cover query behavior, filters, and error handling.
✅ Passed checks (14 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled.
Title check | ✅ Passed | The title clearly describes the main change: adding benchmarking database query tests. It is concise, specific, and directly matches the pull request's core objective.
Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Sql Injection Prevention | ✅ Passed | All SQL queries use parameterized ? placeholders. User input is validated via the strict regex nameRegexp. No SQL injection vulnerabilities detected.
Excessive Css In React Should Use Styles | ✅ Passed | Check not applicable. The PR modifies only Go backend files (pkg/api/jobs.go, pkg/flags/postgres_benchmarking_test.go), not React components. The check targets React inline CSS styling.
Single Responsibility And Clear Naming | ✅ Passed | JobDetailsReport has clear naming and a single responsibility. Test helpers use descriptive names and each has one responsibility. No generic names; consistent naming patterns indicate purpose clearly.
Stable And Deterministic Test Names | ✅ Passed | All test names in the PR are stable and deterministic. No Ginkgo BDD patterns found. All test and case names are static, descriptive strings with no dynamic content.
Test Structure And Quality | ✅ Passed | The custom check requires Ginkgo test assessment. The PR contains standard Go tests using testing.T only, and the repository has no Ginkgo tests. Check not applicable.
Microshift Test Compatibility | ✅ Passed | No Ginkgo e2e tests were added in this PR. The new tests are standard Go unit tests for database benchmarking. This check is not applicable.
Single Node Openshift (Sno) Test Compatibility | ✅ Passed | The custom check is not applicable. The PR adds database benchmarking tests (standard Go test functions) and a database API helper, not Ginkgo e2e tests. No cluster topology assumptions present.
Topology-Aware Scheduling Compatibility | ✅ Passed | The PR adds database query helpers and benchmark tests, not deployment manifests, operator code, or controllers. No scheduling constraints or topology-unaware code detected. Check not applicable.
Ote Binary Stdout Contract | ✅ Passed | All stdout writes are within test functions (captured by the framework) or use stderr. No process-level code writes to stdout in violation of the OTE contract.
Ipv6 And Disconnected Network Test Compatibility | ✅ Passed | This custom check targets Ginkgo e2e tests. The PR adds standard Go benchmarking tests, not Ginkgo e2e tests. No IPv4 assumptions or external connectivity issues detected. Check not applicable.
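The injection-prevention pattern the Sql Injection Prevention check describes (strict regex validation of user input plus ? placeholders bound by the driver) can be sketched as follows. The regex pattern, query text, and buildQuery helper are assumptions for illustration, not the repository's actual values.

```go
package main

import (
	"fmt"
	"regexp"
)

// nameRegexp is an illustrative strict allowlist for test names; the PR's
// actual validation regex may differ.
var nameRegexp = regexp.MustCompile(`^[a-zA-Z0-9._\- ]+$`)

// buildQuery validates the input and returns SQL with a ? placeholder plus
// the bound arguments, so the value is never interpolated into the SQL text.
func buildQuery(testName string) (query string, args []interface{}, err error) {
	if !nameRegexp.MatchString(testName) {
		return "", nil, fmt.Errorf("invalid test name %q", testName)
	}
	return "SELECT id, name FROM tests WHERE name LIKE ?", []interface{}{testName}, nil
}

func main() {
	q, args, err := buildQuery("etcd-health")
	if err != nil {
		panic(err)
	}
	fmt.Println(q, args)

	// A classic injection payload fails the regex before any SQL is built.
	if _, _, err := buildQuery("x'; DROP TABLE tests; --"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```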

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot requested review from deepsm007 and petr-muller May 15, 2026 15:08
@openshift-ci
Contributor

openshift-ci Bot commented May 15, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: neisw

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 15, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
pkg/flags/postgres_benchmarking_test.go (2)

48-50: ⚡ Quick win

Remove per-row logging from timed benchmark execution.

Printing each result row is included in measured duration and can overshadow the query signal.

Suggested fix
 				log.Printf("Found %d tests matching pattern for release %s", len(results), benchmarkRelease)
-				for _, r := range results {
-					log.Printf("  [%d] %s", r.ID, r.Name)
-				}
 				return nil

83-85: ⚡ Quick win

Use a fixed reportEnd for stable benchmark baselines.

Calling time.Now() inside the case makes the queried window drift across runs, which hurts comparability.

Suggested fix
-func getBenchmarkCases() []benchmarkCase {
+func getBenchmarkCases(reportEnd time.Time) []benchmarkCase {
@@
-				jobRuns, err := api.JobDetailsReport(dbc, benchmarkRelease,
+				jobRuns, err := api.JobDetailsReport(dbc, benchmarkRelease,
 					"periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn",
-					time.Now())
+					reportEnd)

(Then pass a single captured value from each test, e.g. reportEnd := time.Now().)


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 13790764-57ce-47a6-bf6a-01ef331293f8

📥 Commits

Reviewing files that changed from the base of the PR and between e99b97d and 7eaee42.

📒 Files selected for processing (2)
  • pkg/api/jobs.go
  • pkg/flags/postgres_benchmarking_test.go

Comment thread pkg/api/jobs.go
 	if res.Error != nil {
 		log.Errorf("error querying %s ProwJobRuns from db: %v", jobSearchStr, res.Error)
-		return res.Error
+		return nil, res.Error
Contributor


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify raw error returns in the modified file.
rg -n -C2 'return nil, res\.Error|return err' pkg/api/jobs.go

Repository: openshift/sippy

Length of output: 396


🏁 Script executed:

# Check the file content around the error return lines and verify imports
head -n 30 pkg/api/jobs.go && echo -e "\n---MIDDLE SECTION---\n" && sed -n '260,285p' pkg/api/jobs.go

Repository: openshift/sippy

Length of output: 1765


🏁 Script executed:

# Get the full function signatures to verify return types
sed -n '240,285p' pkg/api/jobs.go

Repository: openshift/sippy

Length of output: 1650


Wrap propagated errors with context before returning.

Both error returns pass raw errors upstream. Add fmt.Errorf with context and %w wrapping to make failures diagnosable.

Suggested fix
 	if res.Error != nil {
-		return nil, res.Error
+		return nil, fmt.Errorf("querying prow job runs for release %q and job pattern %q: %w", release, jobSearchStr, res.Error)
 	}
@@
 	if err != nil {
 		log.Errorf("error querying %s ProwJobRuns from db: %v", jobSearchStr, err)
-		return err
+		return fmt.Errorf("building job details report for %q: %w", jobSearchStr, err)
 	}

Applies to lines 267 and 280 per coding guidelines: "Errors should be wrapped with context using fmt.Errorf with %w."


@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
pkg/flags/postgres_benchmarking_test.go (1)

59-168: ⚡ Quick win

Stabilize benchmark time windows to keep timings comparable.

Each case evaluates time.Now() at call time, so the queried window shifts between cases/iterations. That adds data drift/noise to results. Capture one reference timestamp per test run and pass it into cases.

Proposed refactor
 type benchmarkCase struct {
 	name string
-	fn   func(dbc *db.DB) error
+	fn   func(dbc *db.DB, now time.Time) error
 }

-func getBenchmarkCases() []benchmarkCase {
+func getBenchmarkCases(now time.Time) []benchmarkCase {
 	return []benchmarkCase{
 		{
 			name: "JobDetails",
-			fn: func(dbc *db.DB) error {
+			fn: func(dbc *db.DB, now time.Time) error {
 				jobRuns, err := api.JobDetailsReport(dbc, benchmarkRelease,
-					benchmarkJobName, time.Now())
+					benchmarkJobName, now)
 				...
 			},
 		},
 		{
 			name: "TestAnalysisOverall",
-			fn: func(dbc *db.DB) error {
+			fn: func(dbc *db.DB, now time.Time) error {
 				results, err := api.GetTestAnalysisOverallFromDB(dbc, nil,
-					benchmarkRelease, benchmarkTestName, time.Now())
+					benchmarkRelease, benchmarkTestName, now)
 				...
 			},
 		},
 		{
 			name: "TestAnalysisPassRate",
-			fn: func(dbc *db.DB) error {
+			fn: func(dbc *db.DB, now time.Time) error {
 				...
 				res := dbc.DB.Raw(query.QueryTestAnalysis,
-					time.Now().Add(-24*14*time.Hour),
+					now.Add(-24*14*time.Hour),
 					benchmarkTestName,
 					[]string{benchmarkJobName}).Scan(&result)
 				...
 			},
 		},
 	}
 }

 func Test_BenchmarkIndividual(t *testing.T) {
 	dbc := getBenchmarkDBClient(t)
+	now := time.Now()
 	iterations := 3
-	cases := getBenchmarkCases()
+	cases := getBenchmarkCases(now)
 	...
-				err := bc.fn(dbc)
+				err := bc.fn(dbc, now)
 	...
 }

 func Test_BenchmarkGroup(t *testing.T) {
 	dbc := getBenchmarkDBClient(t)
+	now := time.Now()
 	iterations := 1
-	cases := getBenchmarkCases()
+	cases := getBenchmarkCases(now)
 	...
-			err := bc.fn(dbc)
+			err := bc.fn(dbc, now)
 	...
 }

Also applies to: 189-256


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 386c330a-b97f-4948-90e3-1eb11c7841d2

📥 Commits

Reviewing files that changed from the base of the PR and between 7eaee42 and a141d6a.

📒 Files selected for processing (1)
  • pkg/flags/postgres_benchmarking_test.go

Comment on lines +170 to +187
func getBenchmarkDBClient(t *testing.T) *db.DB {
	t.Helper()
	dsn := os.Getenv("db_benchmarking_dsn")
	if dsn == "" {
		t.Skip("skipping: set db_benchmarking_dsn to run")
	}

	dbFlags := &PostgresFlags{
		LogLevel: 4,
		DSN:      dsn,
	}

	dbc, err := dbFlags.GetDBClient()
	if err != nil {
		t.Fatalf("couldn't get DB client: %v", err)
	}
	return dbc
}
Contributor


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Close the benchmark DB client with t.Cleanup to avoid leaked connections.

The helper opens a DB client but never closes it. Register cleanup when the client is created.

Proposed fix
 func getBenchmarkDBClient(t *testing.T) *db.DB {
 	t.Helper()
 	dsn := os.Getenv("db_benchmarking_dsn")
 	if dsn == "" {
 		t.Skip("skipping: set db_benchmarking_dsn to run")
 	}
@@
 	dbc, err := dbFlags.GetDBClient()
 	if err != nil {
 		t.Fatalf("couldn't get DB client: %v", err)
 	}
+	sqlDB, err := dbc.DB.DB()
+	if err != nil {
+		t.Fatalf("couldn't get sql DB handle: %v", err)
+	}
+	t.Cleanup(func() {
+		if err := sqlDB.Close(); err != nil {
+			t.Logf("failed to close DB client: %v", err)
+		}
+	})
 	return dbc
 }

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e

@openshift-ci
Contributor

openshift-ci Bot commented May 15, 2026

@neisw: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
