test: consolidate core service integration tests #73
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable.
📝 Walkthrough
Converts multiple mock-based unit tests into DB-backed integration suites with nested subtests, adds per-test PostgreSQL schema isolation and dynamic cleanup, expands secret/event/audit test scenarios, and adds console log path handling to the libvirt adapter.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/core/services/secret_test.go (1)
104-104: ⚠️ Potential issue | 🟡 Minor — Silent failure: error from `CreateSecret` is discarded.
If secret creation fails, the delete test becomes meaningless and may pass or fail for the wrong reasons.
Proposed fix
```diff
 t.Run("Delete", func(t *testing.T) {
-	secret, _ := svc.CreateSecret(ctx, "DEL_ME", "val", "")
+	secret, err := svc.CreateSecret(ctx, "DEL_ME", "val", "")
+	require.NoError(t, err)
 	err := svc.DeleteSecret(ctx, secret.ID)
```
Note: After this change, use `err =` instead of `err :=` on line 106.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/secret_test.go` at line 104, The test currently ignores the error returned by svc.CreateSecret (call: secret, _ := svc.CreateSecret(ctx, "DEL_ME", "val", "")); update the test to capture and assert the error (e.g., secret, err := svc.CreateSecret(...) and fail the test if err != nil using t.Fatalf or your test assertion helper), and adjust the subsequent assignment on line 106 to use err = instead of err := as noted so you don't shadow the captured err.
🧹 Nitpick comments (1)
internal/core/services/secret_test.go (1)
132-143: Limited test coverage: services created with nil dependencies cannot be exercised.
These tests verify `NewSecretService` returns a non-nil pointer, but passing `nil` for `repo`, `eventSvc`, and `auditSvc` means any subsequent method call would panic. Consider either:
- Adding a brief functional assertion (e.g., call `Encrypt`/`Decrypt`, which only use `masterKey`), or
- Documenting that these tests only validate constructor behavior with key fallback/warning logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/secret_test.go` around lines 132 - 143, The tests for NewSecretService currently pass nil for repo, eventSvc, and auditSvc so the returned SecretService cannot be exercised; update the tests to either (A) construct lightweight stubs/mocks for the dependencies used by other methods or (B) add a minimal functional assertion that exercises only the key handling code by calling SecretService.Encrypt and SecretService.Decrypt (these use the masterKey and do not require repo/eventSvc/auditSvc) to confirm encryption/decryption round-trips for the "DefaultKeyDevelopment" and "ShortKeyWarning" cases, or alternatively add a comment in the test clearly stating it only validates constructor behavior and won’t exercise other methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/core/services/audit_test.go`:
- Around line 49-51: The test currently discards errors from svc.Log in the
setup loop; change the loop in audit_test.go so each call to svc.Log checks the
returned error (from method svc.Log) and fails the test if non-nil (e.g.,
t.Fatalf or a t.Fatal-style
identifier; this ensures any setup failures are surfaced rather than silently
ignored when invoking svc.Log within the for loop.
- Around line 67-78: In the UserIsolation subtest, stop discarding errors from
svc.Log and remove the redundant assertion: replace the two ignored calls to
svc.Log with calls that assert success (e.g., require.NoError(t, err) after
calling svc.Log) so failures surface, and delete the assert.NotEqual(t,
otherUserID, log.UserID) check since assert.Equal(t, userID, log.UserID) already
guarantees it; ensure references to svc.Log, svc.ListLogs, UserIsolation,
assert.Equal and require.NoError are updated accordingly.
In `@internal/core/services/event_test.go`:
- Line 58: The test silently ignores errors from svc.RecordEvent; capture and
assert the result instead of assigning to the blank identifier. Replace `_ =
svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)` with an error check such
as `err := svc.RecordEvent(...)` followed by a test assertion (e.g.,
t.Fatalf("RecordEvent failed: %v", err) or require.NoError(t, err)) so any setup
or invocation failure surfaces; reference the svc.RecordEvent call and the
subCtx used in the test when making the change.
- Around line 51-59: The test is inserting events into the wrong DB because svc
was created with the outer scope repository; recreate the service with the fresh
DB before calling RecordEvent: after db := setupDB(t) and cleanDB(t, db) create
a new service instance (the same constructor used originally that produced svc,
e.g., NewEventService/NewService with the new db's repo) and assign it to svc
(or use a new variable like subSvc) so RecordEvent(subCtx, ...) writes to the
fresh DB that subCtx points to; alternatively remove the fresh DB creation and
reuse the original setup consistently.
In `@internal/core/services/secret_test.go`:
- Line 124: The call to repo.Create in the test discards its error so a failed
insert can masquerade as a missing secret; update the test in secret_test.go to
check the returned error from repo.Create(secret) and fail the test if it
returns one (e.g., use t.Fatalf or require.NoError) before proceeding
to call GetSecret, ensuring the test only passes when the secret was actually
stored and the subsequent decryption error is observed.
- Around line 81-82: The test currently discards errors from svc.CreateSecret
(two calls) which can hide failures; update the secret_test.go test to capture
and assert the error for both CreateSecret calls (e.g., check err != nil and
fail the test or use the test helper/assertion lib) so failures stop the test
instead of proceeding with stale state; refer to the svc.CreateSecret calls and
the ctx variable in the test to locate the lines to modify.
---
Outside diff comments:
In `@internal/core/services/secret_test.go`:
- Line 104: The test currently ignores the error returned by svc.CreateSecret
(call: secret, _ := svc.CreateSecret(ctx, "DEL_ME", "val", "")); update the test
to capture and assert the error (e.g., secret, err := svc.CreateSecret(...) and
fail the test if err != nil using t.Fatalf or your test assertion helper), and
adjust the subsequent assignment on line 106 to use err = instead of err := as
noted so you don't shadow the captured err.
---
Nitpick comments:
In `@internal/core/services/secret_test.go`:
- Around line 132-143: The tests for NewSecretService currently pass nil for
repo, eventSvc, and auditSvc so the returned SecretService cannot be exercised;
update the tests to either (A) construct lightweight stubs/mocks for the
dependencies used by other methods or (B) add a minimal functional assertion
that exercises only the key handling code by calling SecretService.Encrypt and
SecretService.Decrypt (these use the masterKey and do not require
repo/eventSvc/auditSvc) to confirm encryption/decryption round-trips for the
"DefaultKeyDevelopment" and "ShortKeyWarning" cases, or alternatively add a
comment in the test clearly stating it only validates constructor behavior and
won’t exercise other methods.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- internal/core/services/audit_test.go
- internal/core/services/audit_unit_test.go
- internal/core/services/event_test.go
- internal/core/services/event_unit_test.go
- internal/core/services/secret_test.go
- internal/core/services/secret_unit_test.go
💤 Files with no reviewable changes (3)
- internal/core/services/event_unit_test.go
- internal/core/services/audit_unit_test.go
- internal/core/services/secret_unit_test.go
```go
for i := 0; i < 5; i++ {
	_ = svc.Log(ctx, userID, "action", "res", "id", nil)
}
```
Silent failure: errors from svc.Log are discarded.
Discarding errors in test setup can hide failures and produce misleading test results.
Proposed fix
```diff
 for i := 0; i < 5; i++ {
-	_ = svc.Log(ctx, userID, "action", "res", "id", nil)
+	err := svc.Log(ctx, userID, "action", "res", "id", nil)
+	require.NoError(t, err)
 }
```
As per coding guidelines: "Do not use silent failures - avoid blank identifier assignment like `_ = someFunc()`"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
for i := 0; i < 5; i++ {
	err := svc.Log(ctx, userID, "action", "res", "id", nil)
	require.NoError(t, err)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/audit_test.go` around lines 49 - 51, The test
currently discards errors from svc.Log in the setup loop; change the loop in
audit_test.go so each call to svc.Log checks the returned error (from method
svc.Log) and fails the test if non-nil (e.g., t.Fatalf or t.Fatalf-like
assertion) instead of assigning to the blank identifier; this ensures any setup
failures are surfaced rather than silently ignored when invoking svc.Log within
the for loop.
```go
t.Run("UserIsolation", func(t *testing.T) {
	otherUserID := uuid.New()
	_ = svc.Log(ctx, userID, "user1-action", "res", "1", nil)
	_ = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)

	logs, err := svc.ListLogs(ctx, userID, 10)
	require.NoError(t, err)
	assert.Len(t, logs, 2)

	for _, log := range logs {
		assert.Equal(t, userID, log.UserID)
		assert.NotEqual(t, otherUserID, log.UserID)
	}
```
Silent failures in UserIsolation subtest and redundant assertion.
Two issues:
- Lines 69-70 discard errors from `svc.Log`, violating the silent-failure guideline.
- Line 77 (`assert.NotEqual(t, otherUserID, log.UserID)`) is redundant: if line 76 passes, line 77 is guaranteed to pass.
Proposed fix
```diff
 t.Run("UserIsolation", func(t *testing.T) {
 	otherUserID := uuid.New()
-	_ = svc.Log(ctx, userID, "user1-action", "res", "1", nil)
-	_ = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)
+	err := svc.Log(ctx, userID, "user1-action", "res", "1", nil)
+	require.NoError(t, err)
+	err = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)
+	require.NoError(t, err)
 	logs, err := svc.ListLogs(ctx, userID, 10)
 	require.NoError(t, err)
 	for _, log := range logs {
 		assert.Equal(t, userID, log.UserID)
-		assert.NotEqual(t, otherUserID, log.UserID)
 	}
 })
```
📝 Committable suggestion
```go
t.Run("UserIsolation", func(t *testing.T) {
	otherUserID := uuid.New()
	err := svc.Log(ctx, userID, "user1-action", "res", "1", nil)
	require.NoError(t, err)
	err = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)
	require.NoError(t, err)

	logs, err := svc.ListLogs(ctx, userID, 10)
	require.NoError(t, err)

	for _, log := range logs {
		assert.Equal(t, userID, log.UserID)
	}
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/audit_test.go` around lines 67 - 78, In the
UserIsolation subtest, stop discarding errors from svc.Log and remove the
redundant assertion: replace the two ignored calls to svc.Log with calls that
assert success (e.g., require.NoError(t, err) after calling svc.Log) so failures
surface, and delete the assert.NotEqual(t, otherUserID, log.UserID) check since
assert.Equal(t, userID, log.UserID) already guarantees it; ensure references to
svc.Log, svc.ListLogs, UserIsolation, assert.Equal and require.NoError are
updated accordingly.
```go
t.Run("ListEvents_Limits", func(t *testing.T) {
	// Fresh setup for limits
	db := setupDB(t)
	cleanDB(t, db)
	subCtx := setupTestUser(t, db)

	_ = svc.RecordEvent(ctx, "A1", "r1", "T1", nil)
	_ = svc.RecordEvent(ctx, "A2", "r2", "T2", nil)
	for i := 0; i < 5; i++ {
		_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
	}
```
Bug: svc still references the original database, not the fresh one.
The subtest creates a fresh database (db := setupDB(t)) and user context (subCtx), but svc was instantiated with the original database's repository from the outer scope (line 29). The RecordEvent calls insert into the old database while assertions query the new one via subCtx.
Either recreate the service with the new DB or reuse the original setup without creating a fresh DB.
Proposed fix: recreate service with fresh DB
```diff
 t.Run("ListEvents_Limits", func(t *testing.T) {
-	// Fresh setup for limits
-	db := setupDB(t)
-	cleanDB(t, db)
-	subCtx := setupTestUser(t, db)
+	// Fresh setup for limits with its own service instance
+	subSvc, _, subCtx := setupEventServiceTest(t)

 	for i := 0; i < 5; i++ {
-		_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
+		err := subSvc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
+		require.NoError(t, err)
 	}

 	t.Run("SpecificLimit", func(t *testing.T) {
-		result, err := svc.ListEvents(subCtx, 2)
+		result, err := subSvc.ListEvents(subCtx, 2)
 		require.NoError(t, err)
 		assert.Len(t, result, 2)
 	})

 	t.Run("DefaultLimit", func(t *testing.T) {
 		// Implementation uses 50 as default if limit <= 0
-		result, err := svc.ListEvents(subCtx, 0)
+		result, err := subSvc.ListEvents(subCtx, 0)
 		require.NoError(t, err)
 		assert.Len(t, result, 5)
 	})
 })
```
📝 Committable suggestion
```go
t.Run("ListEvents_Limits", func(t *testing.T) {
	// Fresh setup for limits with its own service instance
	subSvc, _, subCtx := setupEventServiceTest(t)

	for i := 0; i < 5; i++ {
		err := subSvc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
		require.NoError(t, err)
	}

	t.Run("SpecificLimit", func(t *testing.T) {
		result, err := subSvc.ListEvents(subCtx, 2)
		require.NoError(t, err)
		assert.Len(t, result, 2)
	})

	t.Run("DefaultLimit", func(t *testing.T) {
		// Implementation uses 50 as default if limit <= 0
		result, err := subSvc.ListEvents(subCtx, 0)
		require.NoError(t, err)
		assert.Len(t, result, 5)
	})
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/event_test.go` around lines 51 - 59, The test is
inserting events into the wrong DB because svc was created with the outer scope
repository; recreate the service with the fresh DB before calling RecordEvent:
after db := setupDB(t) and cleanDB(t, db) create a new service instance (the
same constructor used originally that produced svc, e.g.,
NewEventService/NewService with the new db's repo) and assign it to svc (or use
a new variable like subSvc) so RecordEvent(subCtx, ...) writes to the fresh DB
that subCtx points to; alternatively remove the fresh DB creation and reuse the
original setup consistently.
```go
_ = svc.RecordEvent(ctx, "A1", "r1", "T1", nil)
_ = svc.RecordEvent(ctx, "A2", "r2", "T2", nil)
for i := 0; i < 5; i++ {
	_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
```
Silent failure: error from RecordEvent is discarded.
Using _ = svc.RecordEvent(...) hides potential setup failures, which could cause misleading test results.
Proposed fix
```diff
 for i := 0; i < 5; i++ {
-	_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
+	err := svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
+	require.NoError(t, err)
 }
```
As per coding guidelines: "Do not use silent failures - avoid blank identifier assignment like `_ = someFunc()`"
📝 Committable suggestion
```go
for i := 0; i < 5; i++ {
	err := svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
	require.NoError(t, err)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/event_test.go` at line 58, The test silently ignores
errors from svc.RecordEvent; capture and assert the result instead of assigning
to the blank identifier. Replace `_ = svc.RecordEvent(subCtx, "ACTION", "res",
"TYPE", nil)` with an error check such as `err := svc.RecordEvent(...)` followed
by a test assertion (e.g., t.Fatalf("RecordEvent failed: %v", err) or
require.NoError(t, err)) so any setup or invocation failure surfaces; reference
the svc.RecordEvent call and the subCtx used in the test when making the change.
```go
_, _ = svc.CreateSecret(ctx, "S1", "V1", "")
_, _ = svc.CreateSecret(ctx, "S2", "V2", "")
```
Silent failures: errors from CreateSecret are discarded.
If either CreateSecret call fails, the test will assert on stale data from previous subtests rather than the intended secrets, potentially masking bugs.
Proposed fix
```diff
 t.Run("List", func(t *testing.T) {
-	_, _ = svc.CreateSecret(ctx, "S1", "V1", "")
-	_, _ = svc.CreateSecret(ctx, "S2", "V2", "")
+	_, err := svc.CreateSecret(ctx, "S1", "V1", "")
+	require.NoError(t, err)
+	_, err = svc.CreateSecret(ctx, "S2", "V2", "")
+	require.NoError(t, err)
 	secrets, err := svc.ListSecrets(ctx)
```
As per coding guidelines: "Do not use silent failures - avoid blank identifier assignment like `_ = someFunc()`"
📝 Committable suggestion
```go
_, err := svc.CreateSecret(ctx, "S1", "V1", "")
require.NoError(t, err)
_, err = svc.CreateSecret(ctx, "S2", "V2", "")
require.NoError(t, err)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/secret_test.go` around lines 81 - 82, The test
currently discards errors from svc.CreateSecret (two calls) which can hide
failures; update the secret_test.go test to capture and assert the error for
both CreateSecret calls (e.g., check err != nil and fail the test or use the
test helper/assertion lib) so failures stop the test instead of proceeding with
stale state; refer to the svc.CreateSecret calls and the ctx variable in the
test to locate the lines to modify.
```go
	Name:           "corrupt",
	EncryptedValue: "not-a-valid-cipher",
}
_ = repo.Create(ctx, secret)
```
Silent failure: error from repo.Create is discarded.
If inserting the corrupt secret fails, GetSecret will return a not-found error rather than a decryption error, and the test will pass for the wrong reason.
Proposed fix
```diff
-_ = repo.Create(ctx, secret)
+err := repo.Create(ctx, secret)
+require.NoError(t, err)
```
📝 Committable suggestion
```go
err := repo.Create(ctx, secret)
require.NoError(t, err)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/secret_test.go` at line 124, The call to repo.Create
in the test discards its error so a failed insert can masquerade as a missing
secret; update the test in secret_test.go to check the returned error from
repo.Create(secret) and fail the test if it returns one (e.g., use
t.Fatalf or require.NoError) before proceeding to call GetSecret,
ensuring the test only passes when the secret was actually stored and the
subsequent decryption error is observed.
Pull request overview
This pull request consolidates integration tests for SecretService, EventService, and AuditService by removing redundant mock-based unit tests and enhancing real-dependency integration tests. The changes align with the codebase's established pattern of using real database dependencies for integration testing.
Changes:
- Removed three unit test files (secret_unit_test.go, event_unit_test.go, audit_unit_test.go) that used mocks
- Enhanced integration tests with additional test cases including error scenarios, edge cases, and isolation testing
- Improved test coverage with new scenarios like DuplicateName, GetNotFound, UserIsolation, and DecryptionFailure
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| internal/core/services/secret_unit_test.go | Deleted mock-based unit tests, replaced by enhanced integration tests |
| internal/core/services/secret_test.go | Added comprehensive integration test cases including duplicate handling, not-found scenarios, decryption failures, and service initialization variants |
| internal/core/services/event_unit_test.go | Deleted mock-based unit tests, replaced by enhanced integration tests |
| internal/core/services/event_test.go | Restructured into Integration test with subtests for event recording and limit testing (contains critical bug) |
| internal/core/services/audit_unit_test.go | Deleted mock-based unit tests, replaced by enhanced integration tests |
| internal/core/services/audit_test.go | Enhanced with subtests for audit logging, limit testing, and user isolation verification |
```go
_, _ = svc.CreateSecret(ctx, "S1", "V1", "")
_, _ = svc.CreateSecret(ctx, "S2", "V2", "")
```
Errors from CreateSecret operations are being silently ignored. These are prerequisite operations that set up test data, and if they fail, the subsequent assertions in this subtest would fail in confusing ways (e.g., the List assertion might fail because the secrets weren't created). According to codebase conventions, test setup operations should use require.NoError to ensure test failures are clear and debuggable.
```go
_ = repo.Create(ctx, secret)

_, err := svc.GetSecret(ctx, secret.ID)
```
Error from repo.Create is being silently ignored. This is a prerequisite operation that sets up corrupted test data for the decryption failure test. If the Create operation fails, the subsequent GetSecret call won't test what's intended. According to codebase conventions, test setup operations should use require.NoError to ensure test failures are clear and debuggable.
```go
err := repo.Create(ctx, secret)
require.NoError(t, err)
_, err = svc.GetSecret(ctx, secret.ID)
```
```go
for i := 0; i < 5; i++ {
	_ = svc.Log(ctx, userID, "action", "res", "id", nil)
}
```
Errors from Log operations are being silently ignored in this loop. These are prerequisite operations that set up test data for the limit testing. If any Log operation fails, the assertions about the number of logs returned will fail in confusing ways. According to codebase conventions, test setup operations should use require.NoError to ensure test failures are clear and debuggable.
```go
_ = svc.Log(ctx, userID, "user1-action", "res", "1", nil)
_ = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)
```
Errors from Log operations are being silently ignored. These are prerequisite operations that set up test data for the user isolation test. If any Log operation fails, the subsequent assertions about user isolation will fail in confusing ways. According to codebase conventions, test setup operations should use require.NoError to ensure test failures are clear and debuggable.
```go
_ = svc.RecordEvent(ctx, "A1", "r1", "T1", nil)
_ = svc.RecordEvent(ctx, "A2", "r2", "T2", nil)
for i := 0; i < 5; i++ {
	_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
```
Errors from RecordEvent operations are being silently ignored in this loop. These are prerequisite operations that set up test data for the limit testing. If any RecordEvent operation fails, the assertions about the number of events returned will fail in confusing ways. According to codebase conventions, test setup operations should use require.NoError to ensure test failures are clear and debuggable.
```go
err := svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
require.NoError(t, err)
```
```go
t.Run("ListEvents_Limits", func(t *testing.T) {
	// Fresh setup for limits
	db := setupDB(t)
	cleanDB(t, db)
	subCtx := setupTestUser(t, db)

	_ = svc.RecordEvent(ctx, "A1", "r1", "T1", nil)
	_ = svc.RecordEvent(ctx, "A2", "r2", "T2", nil)
	for i := 0; i < 5; i++ {
		_ = svc.RecordEvent(subCtx, "ACTION", "res", "TYPE", nil)
	}
```
The ListEvents_Limits subtest creates a fresh database setup (setupDB, cleanDB, setupTestUser) but uses the parent test's svc which is still connected to the parent's DB and context. This creates a mismatch: the service is bound to one database connection while subCtx references a different user/tenant context. This will likely cause incorrect behavior or test failures. Either use the parent test's context consistently across all subtests, or create a completely independent test function with its own service instance.
Actionable comments posted: 3
🧹 Nitpick comments (1)
internal/core/services/setup_test.go (1)
142-150: Prefer a parameterized query for `table_schema` filtering. The `information_schema` query can avoid string interpolation and quoting edge-cases by binding `schema` as an argument.
💡 Suggested refactor
```diff
-	query := fmt.Sprintf(`
-		SELECT table_name
-		FROM information_schema.tables
-		WHERE table_schema = '%s'
-		AND table_type = 'BASE TABLE'
-		AND table_name != 'schema_migrations'
-	`, schema)
-
-	rows, err := db.Query(ctx, query)
+	rows, err := db.Query(ctx, `
+		SELECT table_name
+		FROM information_schema.tables
+		WHERE table_schema = $1
+		AND table_type = 'BASE TABLE'
+		AND table_name != 'schema_migrations'
+	`, schema)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/setup_test.go` around lines 142 - 150, The SQL built with fmt.Sprintf into the variable query risks injection/quoting issues; change it to a parameterized query (e.g. use a placeholder like $1 or ? depending on the DB driver) instead of fmt.Sprintf and pass schema as a bound argument to db.Query (replace the current db.Query(ctx, query) call with db.Query(ctx, query, schema)), updating the query string and call sites in this test (variables: query, schema, db.Query) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/core/services/setup_test.go`:
- Around line 132-137: The code currently defaults schema = "public" when
db.QueryRow(ctx, "SHOW search_path").Scan(&schema) fails; instead, change
behavior so that the error is propagated/handled and the caller returns early
(do not assume "public"). Update the logic in the setup_test.go routine that
runs the query (the variable schema and its QueryRow/Scan call) to log the error
with context and return the error up to the test setup (or fail the test) so
cleanDB (or any cleanup routine that uses schema) is not run with a guessed
schema; do not provide a fallback value of "public".
- Around line 71-74: The test cleanup currently ignores errors from pgxpool.New
and cleanupDB.Exec which can hide failures and leak test schemas; update the
cleanup in setup_test.go to check and handle errors returned by pgxpool.New(ctx,
dbURL) and by cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
(and handle cleanupDB.Close() error if any), logging or failing the test when
these return non-nil errors; specifically, replace the blank identifier usage
around pgxpool.New and Exec with proper error variables and call t.Fatalf/t.Logf
or test helper cleanup functions when errors occur, referencing the cleanupDB
variable, pgxpool.New, Exec, Close, ctx, and schema to locate the code to
change.
In `@internal/core/services/shared_test.go`:
- Line 11: Remove the unused import
"github.com/poyrazk/thecloud/internal/repositories/postgres" from the imports in
shared_test.go to fix the failing go test/lint; locate the import block in
internal/core/services/shared_test.go and delete the postgres import line (or
run goimports/gofmt to automatically clean unused imports) so the file only
imports packages actually referenced by the tests.
---
Nitpick comments:
In `@internal/core/services/setup_test.go`:
- Around line 142-150: The SQL built with fmt.Sprintf into the variable query
risks injection/quoting issues; change it to a parameterized query (e.g. use a
placeholder like $1 or ? depending on the DB driver) instead of fmt.Sprintf and
pass schema as a bound argument to db.Query (replace the current db.Query(ctx,
query) call with db.Query(ctx, query, schema)), updating the query string and
call sites in this test (variables: query, schema, db.Query) accordingly.
cleanupDB, _ := pgxpool.New(ctx, dbURL)
if cleanupDB != nil {
	_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
	cleanupDB.Close()
Handle cleanup errors explicitly instead of discarding them.
Lines 71-74 ignore both pool creation and schema-drop errors, so cleanup failures are invisible and can leak test schemas.
💡 Suggested fix
 t.Cleanup(func() {
 	db.Close()
 	// Clean up the schema
-	cleanupDB, _ := pgxpool.New(ctx, dbURL)
-	if cleanupDB != nil {
-		_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
-		cleanupDB.Close()
-	}
+	cleanupDB, err := pgxpool.New(ctx, dbURL)
+	if err != nil {
+		t.Logf("Warning: failed to create cleanup DB pool: %v", err)
+		return
+	}
+	defer cleanupDB.Close()
+	if _, err := cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)); err != nil {
+		t.Logf("Warning: failed to drop schema %s: %v", schema, err)
+	}
 })

As per coding guidelines, "Do not use silent failures - avoid blank identifier assignment like _ = someFunc()."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
t.Cleanup(func() {
	db.Close()
	// Clean up the schema
	cleanupDB, err := pgxpool.New(ctx, dbURL)
	if err != nil {
		t.Logf("Warning: failed to create cleanup DB pool: %v", err)
		return
	}
	defer cleanupDB.Close()
	if _, err := cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)); err != nil {
		t.Logf("Warning: failed to drop schema %s: %v", schema, err)
	}
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/setup_test.go` around lines 71 - 74, The test cleanup
currently ignores errors from pgxpool.New and cleanupDB.Exec which can hide
failures and leak test schemas; update the cleanup in setup_test.go to check and
handle errors returned by pgxpool.New(ctx, dbURL) and by cleanupDB.Exec(ctx,
fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)) (and handle cleanupDB.Close()
error if any), logging or failing the test when these return non-nil errors;
specifically, replace the blank identifier usage around pgxpool.New and Exec
with proper error variables and call t.Fatalf/t.Logf or test helper cleanup
functions when errors occur, referencing the cleanupDB variable, pgxpool.New,
Exec, Close, ctx, and schema to locate the code to change.
// Get current schema from search_path
var schema string
err := db.QueryRow(ctx, "SHOW search_path").Scan(&schema)
if err != nil {
	schema = "public"
}
Do not default to public when schema discovery fails.
If SHOW search_path fails, lines 132-137 fall back to public, which can truncate unintended tables in cleanDB. Safer behavior is to log and return early.
💡 Suggested fix

 var schema string
 err := db.QueryRow(ctx, "SHOW search_path").Scan(&schema)
 if err != nil {
-	schema = "public"
+	t.Logf("Warning: failed to detect search_path, skipping cleanup: %v", err)
+	return
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Get current schema from search_path
var schema string
err := db.QueryRow(ctx, "SHOW search_path").Scan(&schema)
if err != nil {
	t.Logf("Warning: failed to detect search_path, skipping cleanup: %v", err)
	return
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/core/services/setup_test.go` around lines 132 - 137, The code
currently defaults schema = "public" when db.QueryRow(ctx, "SHOW
search_path").Scan(&schema) fails; instead, change behavior so that the error is
propagated/handled and the caller returns early (do not assume "public"). Update
the logic in the setup_test.go routine that runs the query (the variable schema
and its QueryRow/Scan call) to log the error with context and return the error
up to the test setup (or fail the test) so cleanDB (or any cleanup routine that
uses schema) is not run with a guessed schema; do not provide a fallback value
of "public".
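Worth noting for anyone acting on this prompt: `SHOW search_path` usually returns a comma-separated list such as `"$user", public`, so even a successful scan needs parsing before the value can safely drive cleanup. A minimal sketch, assuming nothing beyond the standard library — the helper name `firstConcreteSchema` is invented here and is not part of the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// firstConcreteSchema picks the first entry of a search_path value that is
// not the "$user" placeholder, trimming quotes and whitespace. It returns
// ok=false when nothing usable is found, so callers can abort cleanup
// instead of guessing "public".
func firstConcreteSchema(searchPath string) (string, bool) {
	for _, part := range strings.Split(searchPath, ",") {
		s := strings.Trim(strings.TrimSpace(part), `"`)
		if s == "" || s == "$user" {
			continue
		}
		return s, true
	}
	return "", false
}

func main() {
	if s, ok := firstConcreteSchema(`"$user", test_9f2c`); ok {
		fmt.Println("cleaning schema:", s)
	}
}
```

With this shape, a failed or empty lookup naturally maps to "skip cleanup" rather than a guessed default.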
Pull request overview
Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.
// Clean up the schema
cleanupDB, _ := pgxpool.New(ctx, dbURL)
if cleanupDB != nil {
	_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
SQL injection vulnerability: the schema name is not properly escaped before being interpolated into the SQL statement. Use pgx's identifier quoting mechanism instead. Replace fmt.Sprintf("DROP SCHEMA %s CASCADE", schema) with proper parameterization using pgx.Identifier or QuoteIdentifier to safely handle SQL identifiers.
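For readers unfamiliar with the mechanism this comment refers to: DDL like `DROP SCHEMA` cannot take bind parameters, so the identifier itself must be quoted. The helper below is a hand-rolled sketch of the quoting rule that `pgx.Identifier{...}.Sanitize()` applies (wrap in double quotes, double any embedded quote); `quoteIdent` is an illustrative name, not a pgx API:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent quotes a single SQL identifier (schema or table name) so it is
// safe to splice into DDL, which cannot take a $1 bind parameter. Embedded
// double quotes are doubled per the SQL standard.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	schema := "test_abc123"
	fmt.Println("DROP SCHEMA " + quoteIdent(schema) + " CASCADE")
	// A malicious name stays inert inside the quoted identifier.
	fmt.Println(quoteIdent(`evil"; DROP TABLE users; --`))
}
```

In the actual fix, preferring the library's own sanitizer over a hand-rolled helper is the safer choice; the sketch only shows why quoting, not parameterization, is the right tool for identifiers.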
 query := fmt.Sprintf(`
 	SELECT table_name
 	FROM information_schema.tables
-	WHERE table_schema = 'public'
+	WHERE table_schema = '%s'
 	AND table_type = 'BASE TABLE'
 	AND table_name != 'schema_migrations'
-`
+`, schema)
SQL injection vulnerability: the schema name is not properly escaped before being interpolated into the SQL statement. Replace string formatting with parameterized queries. Use $1 placeholder and pass schema as a parameter: db.Query(ctx, "SELECT table_name FROM information_schema.tables WHERE table_schema = $1 AND table_type = 'BASE TABLE' AND table_name != 'schema_migrations'", schema)
require.NoError(t, err)
defer baseDB.Close()

_, err = baseDB.Exec(ctx, fmt.Sprintf("CREATE SCHEMA %s", schema))
SQL injection vulnerability: the schema name is not properly escaped before being interpolated into the SQL statement. While the schema name is generated from a UUID which is safe, this pattern establishes a dangerous precedent. Use pgx's identifier quoting mechanism instead. Replace fmt.Sprintf("CREATE SCHEMA %s", schema) with proper parameterization using pgx.Identifier or QuoteIdentifier to safely handle SQL identifiers.
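One way to keep the pattern defensible, sketched under stated assumptions: validate the generated name against the only shape the tests ever produce before interpolating it. `newTestSchema` and the `crypto/rand` suffix below are illustrative stand-ins for the UUID-based naming the tests actually use:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"regexp"
)

// safeIdent matches the only shape of schema name we ever generate:
// a fixed prefix plus lowercase hex. Checking it before any fmt.Sprintf
// interpolation makes injection impossible by construction.
var safeIdent = regexp.MustCompile(`^test_[0-9a-f]{16}$`)

// newTestSchema returns a per-test schema name like "test_9f2c...".
func newTestSchema() (string, error) {
	var b [8]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", fmt.Errorf("generate schema suffix: %w", err)
	}
	name := "test_" + hex.EncodeToString(b[:])
	if !safeIdent.MatchString(name) {
		return "", fmt.Errorf("generated unsafe schema name %q", name)
	}
	return name, nil
}

func main() {
	name, err := newTestSchema()
	if err != nil {
		panic(err)
	}
	fmt.Println("CREATE SCHEMA " + name)
}
```

The regexp check is redundant today, but it turns "currently safe" into "safe even if the naming logic changes", which is the precedent concern the comment raises.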
Actionable comments posted: 2
♻️ Duplicate comments (1)
internal/core/services/event_test.go (1)
65-67: ⚠️ Potential issue | 🟡 Minor

Don't ignore RecordEvent errors in the setup loop.

Line 66 discards failures (_ = svc.RecordEvent(...)), which can mask setup issues and produce misleading assertions. As per coding guidelines, "Do not use silent failures - avoid blank identifier assignment like _ = someFunc()".

Suggested fix

 for i := 0; i < 5; i++ {
-	_ = svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)
+	err := svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)
+	require.NoError(t, err)
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/event_test.go` around lines 65 - 67, The test currently ignores errors from svc.RecordEvent in the setup loop; update the loop to capture the error (e.g., err := svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)) and fail the test on any error (use t.Fatalf or t.Fatal with a clear message including err) so setup failures are not silenced; ensure you reference svc.RecordEvent, ctx and uniqueType when adding the error check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/core/services/event_test.go`:
- Line 5: Remove the unused import "fmt" from the imports block in event_test.go
to fix the compiler error; locate the imports at the top of the file (where
"fmt" is listed alongside other packages used by tests in Test... functions) and
delete the "fmt" entry so only actually used packages remain imported.
In `@internal/repositories/libvirt/adapter.go`:
- Around line 992-1001: getLogPath currently ignores errors from os.MkdirAll;
change its signature to getLogPath(id string) (string, error), call os.MkdirAll
without assigning to blank, check and return a wrapped error if it fails, and
only return the path on success; then update the call site(s) that use
getLogPath (including the caller at the noted location) to handle the returned
error (propagate or log and abort domain creation) so directory creation
failures are surfaced instead of being silently ignored.
---
Duplicate comments:
In `@internal/core/services/event_test.go`:
- Around line 65-67: The test currently ignores errors from svc.RecordEvent in
the setup loop; update the loop to capture the error (e.g., err :=
svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)) and fail the test on any
error (use t.Fatalf or t.Fatal with a clear message including err) so setup
failures are not silenced; ensure you reference svc.RecordEvent, ctx and
uniqueType when adding the error check.
func (a *LibvirtAdapter) getLogPath(id string) string {
	// Use home directory for logs to avoid permission issues in CI /tmp
	home := os.Getenv("HOME")
	if home == "" {
		home = "/tmp"
	}
	logDir := filepath.Join(home, ".cache", "thecloud", "logs")
	_ = os.MkdirAll(logDir, 0755)
	return filepath.Join(logDir, id+"-console.log")
}
Silent failure on directory creation violates coding guidelines.
Line 999 uses _ = os.MkdirAll(...) which silently ignores errors. If directory creation fails (e.g., permission denied, disk full), the function returns a path that may not exist, causing confusing errors later during domain creation.
As per coding guidelines: "Do not use silent failures - avoid blank identifier assignment like _ = someFunc()".
Proposed fix: return error or log the failure
-func (a *LibvirtAdapter) getLogPath(id string) string {
+func (a *LibvirtAdapter) getLogPath(id string) (string, error) {
// Use home directory for logs to avoid permission issues in CI /tmp
home := os.Getenv("HOME")
if home == "" {
home = "/tmp"
}
logDir := filepath.Join(home, ".cache", "thecloud", "logs")
- _ = os.MkdirAll(logDir, 0755)
- return filepath.Join(logDir, id+"-console.log")
+ if err := os.MkdirAll(logDir, 0755); err != nil {
+ return "", fmt.Errorf("failed to create log directory: %w", err)
+ }
+ return filepath.Join(logDir, id+"-console.log"), nil
 }

Then update the call site at line 229:
- consoleLog := a.getLogPath(name)
+ consoleLog, err := a.getLogPath(name)
+ if err != nil {
+ a.logger.Warn("failed to get log path, proceeding without console log", "error", err)
+ consoleLog = ""
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/repositories/libvirt/adapter.go` around lines 992 - 1001, getLogPath
currently ignores errors from os.MkdirAll; change its signature to getLogPath(id
string) (string, error), call os.MkdirAll without assigning to blank, check and
return a wrapped error if it fails, and only return the path on success; then
update the call site(s) that use getLogPath (including the caller at the noted
location) to handle the returned error (propagate or log and abort domain
creation) so directory creation failures are surfaced instead of being silently
ignored.
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 6 comments.
require.NoError(t, err)
defer baseDB.Close()

_, err = baseDB.Exec(ctx, fmt.Sprintf("CREATE SCHEMA %s", schema))
The schema name is directly interpolated into the SQL query using fmt.Sprintf without proper quoting. While currently safe (UUID-based), use pq.QuoteIdentifier() to properly quote SQL identifiers and prevent potential SQL injection if the naming logic changes.
t.Run("SpecificLimit", func(t *testing.T) {
	result, err := svc.ListEvents(ctx, 2)
	require.NoError(t, err)
	assert.Len(t, result, 2)
})

t.Run("DefaultLimit", func(t *testing.T) {
	// Implementation uses 50 as default if limit <= 0
	result, err := svc.ListEvents(ctx, 0)
	require.NoError(t, err)

	// Count only our uniqueType events
	count := 0
	for _, e := range result {
		if e.ResourceType == uniqueType {
			count++
		}
	}
	assert.Equal(t, 5, count)
})
Nested t.Run subtests ("SpecificLimit" and "DefaultLimit") are creating their own test scopes within "ListEvents_Limits", which could lead to test isolation issues. According to codebase conventions (internal/core/services/autoscaling_test.go:35-162, container_test.go:37-105), integration test subtests should share the parent test's DB connection and services rather than creating nested test hierarchies. Consider flattening these to be sibling subtests like "ListEvents_SpecificLimit" and "ListEvents_DefaultLimit" at the same level as "RecordEventSuccess".
t.Run("ListLogs_Limits", func(t *testing.T) {
	for i := 0; i < 5; i++ {
		_ = svc.Log(ctx, userID, "action", "res", "id", nil)
	}

	t.Run("SpecificLimit", func(t *testing.T) {
		logs, err := svc.ListLogs(ctx, userID, 2)
		require.NoError(t, err)
		assert.Len(t, logs, 2)
	})

	t.Run("DefaultLimit", func(t *testing.T) {
		// 0 should trigger default limit (50)
		logs, err := svc.ListLogs(ctx, userID, 0)
		require.NoError(t, err)
		assert.GreaterOrEqual(t, len(logs), 5)
	})
Nested t.Run subtests ("SpecificLimit" and "DefaultLimit") create unnecessary test hierarchy. According to codebase conventions (internal/core/services/autoscaling_test.go:35-162, container_test.go:37-105), integration test subtests should be siblings sharing the parent test's context. Consider flattening these to "ListLogs_SpecificLimit" and "ListLogs_DefaultLimit" at the same level as the other test cases.
t.Run("ListLogs_SpecificLimit", func(t *testing.T) {
	for i := 0; i < 5; i++ {
		_ = svc.Log(ctx, userID, "action", "res", "id", nil)
	}
	logs, err := svc.ListLogs(ctx, userID, 2)
	require.NoError(t, err)
	assert.Len(t, logs, 2)
})

t.Run("ListLogs_DefaultLimit", func(t *testing.T) {
	for i := 0; i < 5; i++ {
		_ = svc.Log(ctx, userID, "action", "res", "id", nil)
	}
	// 0 should trigger default limit (50)
	logs, err := svc.ListLogs(ctx, userID, 0)
	require.NoError(t, err)
	assert.GreaterOrEqual(t, len(logs), 5)
	home = "/tmp"
}
logDir := filepath.Join(home, ".cache", "thecloud", "logs")
_ = os.MkdirAll(logDir, 0755)
The error from os.MkdirAll is silently ignored. If directory creation fails (e.g., due to permissions), the function will return a path that cannot be written to, potentially causing silent failures later. Consider logging the error at minimum, or returning the error to the caller so it can be handled appropriately.
func (a *LibvirtAdapter) getLogPath(id string) string {
	// Use home directory for logs to avoid permission issues in CI /tmp
	home := os.Getenv("HOME")
	if home == "" {
		home = "/tmp"
	}
	logDir := filepath.Join(home, ".cache", "thecloud", "logs")
	_ = os.MkdirAll(logDir, 0755)
	return filepath.Join(logDir, id+"-console.log")
}
The libvirt adapter changes (adding console log functionality) are not mentioned in the PR description, which focuses solely on test consolidation for SecretService, EventService, and AuditService. These infrastructure changes should either be moved to a separate PR or explicitly documented in the PR description to explain why they're included.
consoleLog := a.getLogPath(name)
domainXML := generateDomainXML(name, diskPath, networkID, isoPath, memMB, vcpu, additionalDisks, allocatedPorts, consoleLog)
The consoleLog variable is being passed as the last parameter to generateDomainXML, but this parameter is actually named arch in the function definition (see templates.go:35) and is used to determine CPU architecture (x86_64 vs aarch64). This means the console log path will be incorrectly interpreted as an architecture string, and the actual console logging won't work as intended since templates.go:106-108 still hardcodes the log path to /tmp/%s-console.log.
Either the generateDomainXML function signature needs to be updated to accept a consoleLog parameter, or this code should pass an empty string for arch (like line 592 does) and remove the unused getLogPath call.
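This kind of positional-argument mix-up (a log path consumed as an architecture string) is easy to rule out with a config struct instead of a long parameter list. A hedged sketch — the names and XML shape below are invented for illustration, not the project's actual API:

```go
package main

import "fmt"

// domainConfig replaces a long positional parameter list, so a console-log
// path can never be silently consumed as an architecture string.
type domainConfig struct {
	Name       string
	Arch       string // e.g. "x86_64" or "aarch64"; empty means default
	ConsoleLog string // empty means no console log device
}

// generateDomainXML builds a toy domain definition from named fields.
func generateDomainXML(cfg domainConfig) string {
	arch := cfg.Arch
	if arch == "" {
		arch = "x86_64"
	}
	xml := fmt.Sprintf("<domain><name>%s</name><arch>%s</arch>", cfg.Name, arch)
	if cfg.ConsoleLog != "" {
		xml += fmt.Sprintf("<log file=%q/>", cfg.ConsoleLog)
	}
	return xml + "</domain>"
}

func main() {
	// Field names make it impossible to confuse ConsoleLog with Arch.
	fmt.Println(generateDomainXML(domainConfig{
		Name:       "vm-1",
		ConsoleLog: "/tmp/vm-1-console.log",
	}))
}
```

With named fields, the compiler no longer accepts two same-typed strings in swapped positions without complaint from a reader.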
-consoleLog := a.getLogPath(name)
-domainXML := generateDomainXML(name, diskPath, networkID, isoPath, memMB, vcpu, additionalDisks, allocatedPorts, consoleLog)
+domainXML := generateDomainXML(name, diskPath, networkID, isoPath, memMB, vcpu, additionalDisks, allocatedPorts, "")
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.
_, err = baseDB.Exec(ctx, fmt.Sprintf("CREATE SCHEMA %s", schema))
require.NoError(t, err)

// New connection pool scoped to this schema
scopedURL := dbURL
if strings.Contains(dbURL, "?") {
	scopedURL += "&search_path=" + schema
} else {
	scopedURL += "?search_path=" + schema
}

db, err := pgxpool.New(ctx, scopedURL)
require.NoError(t, err)

err = db.Ping(ctx)
if err != nil {
	t.Skipf("Skipping integration test: database not available: %v", err)
}

// Ensure the search_path is set for migrations
_, err = db.Exec(ctx, fmt.Sprintf("SET search_path TO %s", schema))
require.NoError(t, err)

// Run migrations with Discard logger to keep stdout clean for benchmarks
discardLogger := slog.New(slog.NewTextHandler(io.Discard, nil))
err = postgres.RunMigrations(ctx, db, discardLogger)
require.NoError(t, err, "Failed to run migrations")

t.Cleanup(func() {
	db.Close()
	// Clean up the schema
	cleanupDB, _ := pgxpool.New(ctx, dbURL)
	if cleanupDB != nil {
		_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
SQL injection vulnerability: The schema name is directly interpolated into the SQL statements using fmt.Sprintf without proper escaping or parameterization. Although the schema name is currently derived from a UUID, this is a dangerous pattern. Use parameterized queries or properly quote the identifier using pgx's Identifier type or pq.QuoteIdentifier.
_, err := svc.CreateSecret(ctx, "DUP", "v1", "")
require.NoError(t, err)
_, err = svc.CreateSecret(ctx, "DUP", "v2", "")
require.Error(t, err)
Weak error assertion: The test only checks that an error is returned but doesn't verify the specific error message or type. Consider using assert.ErrorContains or checking for a specific error type to ensure the correct error is returned for duplicate name scenarios. This will make the test more robust and catch regressions where a different error is returned.
-require.Error(t, err)
+require.ErrorContains(t, err, "already")
func TestNewSecretService_Variants(t *testing.T) {
	logger := slog.New(slog.NewTextHandler(io.Discard, nil))

	t.Run("DefaultKeyDevelopment", func(t *testing.T) {
		svc := services.NewSecretService(nil, nil, nil, logger, "", "development")
		assert.NotNil(t, svc)
	})

	t.Run("ShortKeyWarning", func(t *testing.T) {
		svc := services.NewSecretService(nil, nil, nil, logger, "short", "development")
		assert.NotNil(t, svc)
	})
Missing test assertions: These tests only verify that the service is not nil, but don't validate the actual behavior (e.g., that a default key is used in development mode, or that a warning is logged for short keys). Consider adding assertions to verify the expected logging behavior or test that encryption/decryption works with the generated key to ensure these code paths function correctly.
 query := fmt.Sprintf(`
 	SELECT table_name
 	FROM information_schema.tables
-	WHERE table_schema = 'public'
+	WHERE table_schema = '%s'
 	AND table_type = 'BASE TABLE'
 	AND table_name != 'schema_migrations'
-`
+`, schema)
SQL injection vulnerability: The schema name is directly interpolated into the SQL query using fmt.Sprintf without proper escaping or parameterization. While the schema name is currently derived from a UUID (which is safe), this pattern is dangerous and could lead to SQL injection if the schema derivation logic changes in the future. Use parameterized queries or properly quote the identifier using pq.QuoteIdentifier or pgx's Identifier type.
♻️ Duplicate comments (3)
internal/core/services/setup_test.go (2)
75-79: ⚠️ Potential issue | 🟠 Major

Handle cleanup errors explicitly instead of discarding them.
Lines 75-78 silently ignore errors from
pgxpool.New and cleanupDB.Exec, masking cleanup failures and potentially leaking test schemas.

 t.Cleanup(func() {
 	db.Close()
 	// Clean up the schema
-	cleanupDB, _ := pgxpool.New(ctx, dbURL)
-	if cleanupDB != nil {
-		_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
-		cleanupDB.Close()
-	}
+	cleanupDB, err := pgxpool.New(ctx, dbURL)
+	if err != nil {
+		t.Logf("Warning: failed to create cleanup DB pool: %v", err)
+		return
+	}
+	defer cleanupDB.Close()
+	if _, err := cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)); err != nil {
+		t.Logf("Warning: failed to drop schema %s: %v", schema, err)
+	}
 })

As per coding guidelines, "Do not use silent failures - avoid blank identifier assignment like _ = someFunc()."

🤖 Prompt for AI Agents
_ = someFunc()."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/setup_test.go` around lines 75 - 79, The cleanup currently discards errors from pgxpool.New and cleanupDB.Exec which hides failures; change the code around pgxpool.New(ctx, dbURL) to capture the error (err) instead of assigning to _, check err and fail the test (use t.Fatalf or t.Errorf) or log it, then when calling cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)) capture its returned error and handle it explicitly (fail the test or t.Logf/t.Errorf), and ensure cleanupDB.Close() is still called in a defer; reference the pgxpool.New call, the cleanupDB variable, and cleanupDB.Exec to locate and fix the code.
136-141:⚠️ Potential issue | 🟠 MajorDo not default to
publicwhen schema discovery fails.If
SHOW search_pathfails, falling back topubliccould truncate tables in an unintended schema. Safer to log and return early.var schema string err := db.QueryRow(ctx, "SHOW search_path").Scan(&schema) if err != nil { - schema = "public" + t.Logf("Warning: failed to detect search_path, skipping cleanup: %v", err) + return }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/setup_test.go` around lines 136 - 141, Do not default to "public" when SHOW search_path fails: remove the schema = "public" fallback after err != nil; instead log the error (including context such as the fact that schema discovery failed) and return the error to the caller (or propagate a wrapped error) so the setup routine aborts rather than continuing with an unsafe default; locate the db.QueryRow(ctx, "SHOW search_path").Scan(&schema) call and update its error handling to log and return early.internal/core/services/event_test.go (1)
64-66:⚠️ Potential issue | 🟡 MinorSilent failure: error from
RecordEventis discarded.Line 65 uses
_ = svc.RecordEvent(...)which hides potential failures during test setup, leading to misleading test results.for i := 0; i < 5; i++ { - _ = svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil) + err := svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil) + require.NoError(t, err) }As per coding guidelines, "Do not use silent failures - avoid blank identifier assignment like
_ = someFunc()."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/core/services/event_test.go` around lines 64 - 66, The test silently discards errors from svc.RecordEvent; change the loop to capture the returned error from RecordEvent (e.g., err := svc.RecordEvent(...)) and fail the test on error (t.Fatalf or require.NoError/require.NoErrorf) so any setup failures surface; update the loop that calls svc.RecordEvent to handle and assert the error instead of using the blank identifier.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@internal/core/services/event_test.go`:
- Around line 64-66: The test silently discards errors from svc.RecordEvent;
change the loop to capture the returned error from RecordEvent (e.g., err :=
svc.RecordEvent(...)) and fail the test on error (t.Fatalf or
require.NoError/require.NoErrorf) so any setup failures surface; update the loop
that calls svc.RecordEvent to handle and assert the error instead of using the
blank identifier.
In `@internal/core/services/setup_test.go`:
- Around line 75-79: The cleanup currently discards errors from pgxpool.New and
cleanupDB.Exec which hides failures; change the code around pgxpool.New(ctx,
dbURL) to capture the error (err) instead of assigning to _, check err and fail
the test (use t.Fatalf or t.Errorf) or log it, then when calling
cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)) capture its
returned error and handle it explicitly (fail the test or t.Logf/t.Errorf), and
ensure cleanupDB.Close() is still called in a defer; reference the pgxpool.New
call, the cleanupDB variable, and cleanupDB.Exec to locate and fix the code.
- Around line 136-141: Do not default to "public" when SHOW search_path fails:
remove the schema = "public" fallback after err != nil; instead log the error
(including context such as the fact that schema discovery failed) and return the
error to the caller (or propagate a wrapped error) so the setup routine aborts
rather than continuing with an unsafe default; locate the db.QueryRow(ctx, "SHOW
search_path").Scan(&schema) call and update its error handling to log and return
early.
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated no new comments.
…and robust event isolation

Force-pushed from 1f47be1 to e4a9411 (Compare)
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 6 comments.
Comments suppressed due to low confidence (1)
internal/core/services/secret_test.go:83
- This test ignores errors from
CreateSecret calls. If secret creation fails, the test may still pass and hide a regression. Capture and require.NoError on each create so failures are surfaced deterministically.
t.Run("List", func(t *testing.T) {
_, _ = svc.CreateSecret(ctx, "S1", "V1", "")
_, _ = svc.CreateSecret(ctx, "S2", "V2", "")
consoleLog := a.getLogPath(name)
domainXML := generateDomainXML(name, diskPath, networkID, isoPath, memMB, vcpu, additionalDisks, allocatedPorts, consoleLog)
a.getLogPath is called here but no such method exists on LibvirtAdapter in this package, which will break compilation. Either add the missing helper on the adapter (and its tests), or compute the console log path inline using existing utilities in this file/package.
uniqueType := "LIMIT_TEST_" + uuid.New().String()[:8]

for i := 0; i < 5; i++ {
	_ = svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)
Errors returned from svc.RecordEvent are discarded in this loop. If event recording starts failing, the limit assertions can become misleading. Please require.NoError (or at least assert.NoError) for each call inside the loop.
-	_ = svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)
+	err := svc.RecordEvent(ctx, "ACTION", "res", uniqueType, nil)
+	require.NoError(t, err)
for i := 0; i < 5; i++ {
	_ = svc.Log(ctx, userID, "action", "res", "id", nil)
}
Errors returned from svc.Log are discarded in this loop. If auditing fails, the list-limit assertions could still pass (e.g., due to prior data). Please assert the error is nil for each Log call.
otherUserID := uuid.New()
_ = svc.Log(ctx, userID, "user1-action", "res", "1", nil)
_ = svc.Log(ctx, otherUserID, "user2-action", "res", "2", nil)
In the user isolation test, both svc.Log calls ignore returned errors. If the second insert fails, the test could still pass trivially. Please check errors for both calls so the isolation assertion is meaningful.
cleanupDB, _ := pgxpool.New(ctx, dbURL)
if cleanupDB != nil {
	_, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema))
	cleanupDB.Close()
}
In t.Cleanup, errors from creating cleanupDB and dropping the schema are silently ignored. On shared DATABASE_URL runs this can leak many test_* schemas over time. Please handle the error (e.g., require.NoError when opening the cleanup pool, and at least t.Logf/require.NoError on DROP SCHEMA).
| cleanupDB, _ := pgxpool.New(ctx, dbURL) | |
| if cleanupDB != nil { | |
| _, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)) | |
| cleanupDB.Close() | |
| } | |
| cleanupDB, err := pgxpool.New(ctx, dbURL) | |
| if err != nil { | |
| t.Logf("failed to create cleanup DB pool for schema %s: %v", schema, err) | |
| return | |
| } | |
| defer cleanupDB.Close() | |
| if _, err := cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)); err != nil { | |
| t.Logf("failed to drop schema %s: %v", schema, err) | |
| } |
| // Ensure redaction | ||
| assert.Equal(t, "[REDACTED]", secrets[0].EncryptedValue) |
The redaction assertion uses secrets[0], but list ordering is typically not guaranteed and can make this test flaky. Prefer asserting that all returned secrets have EncryptedValue == "[REDACTED]", or locate the specific secrets you created by name before asserting on them.
| // Ensure redaction | |
| assert.Equal(t, "[REDACTED]", secrets[0].EncryptedValue) | |
| // Ensure redaction for all returned secrets | |
| for _, s := range secrets { | |
| assert.Equal(t, "[REDACTED]", s.EncryptedValue) | |
| } |
Pull request overview
Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.
| query := fmt.Sprintf(` | ||
| SELECT table_name | ||
| FROM information_schema.tables | ||
| WHERE table_schema = '%s' | ||
| AND table_type = 'BASE TABLE' | ||
| AND table_name != 'schema_migrations' | ||
| `, schema) | ||
|
|
||
| rows, err := db.Query(ctx, query) |
The schema name is directly interpolated into the SQL query using string formatting instead of parameterized queries. While the schema name is generated from a UUID (which is safe), this pattern is still vulnerable to SQL injection if the schema variable is ever derived from external input in the future. Use parameterized queries with $1 placeholder instead, as demonstrated in line 92 of internal/core/services/setup_test.go.
| query := fmt.Sprintf(` | |
| SELECT table_name | |
| FROM information_schema.tables | |
| WHERE table_schema = '%s' | |
| AND table_type = 'BASE TABLE' | |
| AND table_name != 'schema_migrations' | |
| `, schema) | |
| rows, err := db.Query(ctx, query) | |
| query := ` | |
| SELECT table_name | |
| FROM information_schema.tables | |
| WHERE table_schema = $1 | |
| AND table_type = 'BASE TABLE' | |
| AND table_name != 'schema_migrations' | |
| ` | |
| rows, err := db.Query(ctx, query, schema) |
| query := fmt.Sprintf(` | ||
| SELECT table_name | ||
| FROM information_schema.tables | ||
| WHERE table_schema = 'public' | ||
| WHERE table_schema = '%s' | ||
| AND table_type = 'BASE TABLE' | ||
| AND table_name != 'schema_migrations' | ||
| ` | ||
| `, schema) |
The schema name is directly interpolated into the SQL query using string formatting instead of parameterized queries. While the schema name is generated from a UUID (which is safe), this pattern is still vulnerable to SQL injection if the schema variable is ever derived from external input in the future. Use parameterized queries with $1 placeholder instead, as demonstrated in line 92 of this same file.
| // This should NOT match (letters instead of numbers) | ||
| req, _ = http.NewRequest("GET", fmt.Sprintf("%s/gw/status-%d/abc", testutil.TestBaseURL, ts), nil) | ||
| resp, err = client.Do(req) | ||
| req, _ := http.NewRequest("GET", fmt.Sprintf("%s/gw/status-%d/abc", testutil.TestBaseURL, ts), nil) |
The URL pattern used for the negative test case ("/gw/status-%d/abc") doesn't match the pattern that was just registered ("/gw/gw-status-%d/{code:[0-9]+}"). The test URL should use "/gw/gw-status-%d/abc" to properly verify that non-numeric codes are rejected by the registered route. Currently, this test is checking a completely different route pattern that was never registered.
| req, _ := http.NewRequest("GET", fmt.Sprintf("%s/gw/status-%d/abc", testutil.TestBaseURL, ts), nil) | |
| req, _ := http.NewRequest("GET", fmt.Sprintf("%s/gw/gw-status-%d/abc", testutil.TestBaseURL, ts), nil) |
| targetURL := "https://httpbin.org" // Use base URL | ||
|
|
||
| payload := map[string]interface{}{ | ||
| "name": routeName, | ||
| "path_prefix": pattern, | ||
| "target_url": targetURL, | ||
| "strip_prefix": true, | ||
| "rate_limit": 100, | ||
| "strip_prefix": false, // Don't strip, let it pass /status-ts/201 to target if we use prefix | ||
| // Actually, if we use pattern matching, the whole path matched is usually handled. | ||
| // Let's use a simpler one: | ||
| } | ||
| // Redefine for clarity | ||
| pattern = fmt.Sprintf("/gw-status-%d/{code:[0-9]+}", ts) | ||
| payload["path_prefix"] = pattern | ||
| payload["target_url"] = "https://httpbin.org/status" | ||
| payload["strip_prefix"] = true |
The payload is initially constructed on lines 120-127, then key fields (path_prefix, target_url, strip_prefix) are redefined on lines 128-132. The initial values on lines 118, 123-124 are never used. Consider removing lines 118 and 123-124, or restructuring to define the payload once with the correct values to improve code clarity.
Pull request overview
Copilot reviewed 19 out of 19 changed files in this pull request and generated 6 comments.
| ALTER TABLE object_versions DROP TABLE IF EXISTS object_versions; | ||
| ALTER TABLE objects DROP COLUMN IF EXISTS current_version_id; | ||
| ALTER TABLE objects DROP COLUMN IF EXISTS is_versioned; | ||
|
|
||
| -- Restore unique constraint if it was removed (it shouldn't be if we just drop columns, but for completeness) | ||
| DO $$ | ||
| BEGIN | ||
| IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints WHERE table_schema = current_schema() AND constraint_name = 'objects_bucket_key_key') THEN | ||
| ALTER TABLE objects ADD CONSTRAINT objects_bucket_key_key UNIQUE (bucket_id, key); | ||
| END IF; | ||
| END $$; |
The DOWN migration is currently invalid and doesn’t reverse the UP migration: ALTER TABLE object_versions DROP TABLE is not valid SQL; the UP migration adds version_id and is_latest on objects (not current_version_id/is_versioned), and it changes the unique constraint on (bucket, key) (not bucket_id). Please rewrite the DOWN to drop idx_objects_latest, restore objects_bucket_key_key on (bucket, key), drop objects_bucket_key_version_unique, and drop the version_id/is_latest columns.
| ALTER TABLE object_versions DROP TABLE IF EXISTS object_versions; | |
| ALTER TABLE objects DROP COLUMN IF EXISTS current_version_id; | |
| ALTER TABLE objects DROP COLUMN IF EXISTS is_versioned; | |
| -- Restore unique constraint if it was removed (it shouldn't be if we just drop columns, but for completeness) | |
| DO $$ | |
| BEGIN | |
| IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints WHERE table_schema = current_schema() AND constraint_name = 'objects_bucket_key_key') THEN | |
| ALTER TABLE objects ADD CONSTRAINT objects_bucket_key_key UNIQUE (bucket_id, key); | |
| END IF; | |
| END $$; | |
| -- Reverse creation of object_versions table | |
| DROP TABLE IF EXISTS object_versions; | |
| -- Remove versioning columns added to objects | |
| ALTER TABLE objects | |
| DROP COLUMN IF EXISTS version_id, | |
| DROP COLUMN IF EXISTS is_latest; | |
| -- Drop index on latest objects if it was created | |
| DROP INDEX IF EXISTS idx_objects_latest; | |
| -- Drop versioned unique constraint on (bucket, key, version_id) if it exists | |
| ALTER TABLE objects | |
| DROP CONSTRAINT IF EXISTS objects_bucket_key_version_unique; | |
| -- Restore original unique constraint on (bucket, key) if it was removed | |
| DO $$ | |
| BEGIN | |
| IF NOT EXISTS ( | |
| SELECT 1 | |
| FROM information_schema.table_constraints | |
| WHERE table_schema = current_schema() | |
| AND table_name = 'objects' | |
| AND constraint_name = 'objects_bucket_key_key' | |
| ) THEN | |
| ALTER TABLE objects | |
| ADD CONSTRAINT objects_bucket_key_key UNIQUE (bucket, key); | |
| END IF; | |
| END | |
| $$; |
| req, _ := http.NewRequest("GET", url, nil) | ||
| if token != "" { | ||
| req.Header.Set(testutil.TestHeaderAPIKey, token) | ||
| } | ||
| resp, err = client.Do(req) | ||
| if err == nil && resp.StatusCode >= 200 && resp.StatusCode < 300 { | ||
| return resp | ||
| } | ||
| if resp != nil { | ||
| resp.Body.Close() | ||
| } | ||
| time.Sleep(1 * time.Second) | ||
| } | ||
| require.NoError(t, err, "Timed out waiting for route: "+url) | ||
| require.True(t, resp.StatusCode >= 200 && resp.StatusCode < 300, "Route returned non-success status: %d for URL: %s", resp.StatusCode, url) |
http.NewRequest errors are ignored here, and the post-loop assertions can dereference resp even when all attempts failed before a response was returned. Please handle/require the NewRequest error, and guard against resp == nil before reading StatusCode (or fail with a clearer timeout error).
| if err == nil && resp.StatusCode == http.StatusOK { | ||
| var res struct { | ||
| URL string `json:"url"` | ||
| } | ||
| _ = json.NewDecoder(resp.Body).Decode(&res) | ||
| resp.Body.Close() | ||
| if finalURL = res.URL; finalURL != "" && (finalURL == httpbinAnything+"/specific" || finalURL == httpbinAnything+"/general") { | ||
| if finalURL == httpbinAnything+"/specific" { | ||
| break | ||
| } | ||
| } | ||
| } |
In this retry loop, response bodies are only closed on the 200 path. If the gateway returns a non-200 response (but no transport error), the body is left open and can leak connections across retries. Ensure resp.Body.Close() runs for any non-nil resp on every iteration (and check JSON decode errors rather than ignoring them).
| if err == nil && resp.StatusCode == http.StatusOK { | |
| var res struct { | |
| URL string `json:"url"` | |
| } | |
| _ = json.NewDecoder(resp.Body).Decode(&res) | |
| resp.Body.Close() | |
| if finalURL = res.URL; finalURL != "" && (finalURL == httpbinAnything+"/specific" || finalURL == httpbinAnything+"/general") { | |
| if finalURL == httpbinAnything+"/specific" { | |
| break | |
| } | |
| } | |
| } | |
| if resp != nil { | |
| func() { | |
| defer resp.Body.Close() | |
| if err != nil || resp.StatusCode != http.StatusOK { | |
| return | |
| } | |
| var res struct { | |
| URL string `json:"url"` | |
| } | |
| decErr := json.NewDecoder(resp.Body).Decode(&res) | |
| require.NoError(t, decErr) | |
| if finalURL = res.URL; finalURL != "" && (finalURL == httpbinAnything+"/specific" || finalURL == httpbinAnything+"/general") { | |
| // If we hit the specific target, we will break the loop below. | |
| return | |
| } | |
| }() | |
| } | |
| if finalURL == httpbinAnything+"/specific" { | |
| break | |
| } |
| // Use a unique schema for this test run to allow parallel execution in CI | ||
| schema := "test_repo_" + strings.ReplaceAll(uuid.New().String(), "-", "_") | ||
|
|
||
| // Create base connection to initialize schema | ||
| baseDB, err := pgxpool.New(ctx, dbURL) | ||
| require.NoError(t, err) | ||
| defer baseDB.Close() | ||
|
|
||
| _, err = baseDB.Exec(ctx, fmt.Sprintf("CREATE SCHEMA %s", schema)) | ||
| require.NoError(t, err) | ||
|
|
||
| // Configure pool to use the schema for ALL connections | ||
| // We include 'public' for extensions like uuid-ossp | ||
| config, err := pgxpool.ParseConfig(dbURL) | ||
| require.NoError(t, err) | ||
|
|
||
| if config.ConnConfig.RuntimeParams == nil { | ||
| config.ConnConfig.RuntimeParams = make(map[string]string) | ||
| } | ||
| config.ConnConfig.RuntimeParams["search_path"] = fmt.Sprintf("%s, public", schema) | ||
|
|
||
| db, err := pgxpool.NewWithConfig(ctx, config) | ||
| require.NoError(t, err) | ||
|
|
||
| err = db.Ping(ctx) | ||
| if err != nil { | ||
| t.Skipf("Skipping integration test: database not available: %v", err) | ||
| } |
Schema cleanup is registered only after Ping + migrations succeed. If Ping triggers t.Skip (or a require.NoError fails before the cleanup is registered), the newly created schema will be left behind. Register cleanup immediately after creating the schema/pool (and drop the schema on skip/failure), so test runs don't leak schemas in developer/CI databases.
| -- +goose Up | ||
| ALTER TABLE gateway_routes ADD COLUMN IF NOT EXISTS methods TEXT[] DEFAULT '{GET,POST,PUT,DELETE,PATCH,OPTIONS}'; | ||
|
|
||
| -- Remove the unique constraint on path_prefix to allow multiple methods on the same path | ||
| ALTER TABLE gateway_routes DROP CONSTRAINT IF EXISTS gateway_routes_path_prefix_key; | ||
|
|
||
| -- Create a composite unique constraint (path_pattern, methods) | ||
| DO $$ | ||
| BEGIN | ||
| IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints WHERE constraint_name = 'gateway_routes_pattern_methods_key') THEN | ||
| IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints WHERE table_schema = current_schema() AND constraint_name = 'gateway_routes_pattern_methods_key') THEN | ||
| ALTER TABLE gateway_routes ADD CONSTRAINT gateway_routes_pattern_methods_key UNIQUE (path_pattern, methods); | ||
| END IF; | ||
| END $$; |
This migration no longer drops the existing UNIQUE(path_prefix) constraint from the original gateway_routes table. As a result, you still cannot create multiple routes for the same path with different methods, which defeats the purpose of adding methods + the composite unique constraint. Reintroduce a guarded DROP CONSTRAINT IF EXISTS gateway_routes_path_prefix_key (or equivalent) before adding the new constraint.
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name='clusters' AND column_name='kubeconfig') THEN | ||
| UPDATE clusters SET kubeconfig_encrypted = kubeconfig WHERE kubeconfig_encrypted IS NULL; | ||
|
|
||
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_schema = current_schema() AND table_name='clusters' AND column_name='kubeconfig') THEN |
This migration now drops the legacy kubeconfig column without migrating data into kubeconfig_encrypted, and it no longer adds the columns that the existing 081_fix_clusters_table.down.sql drops on rollback. Please either restore the data-migration/add-column steps or update the corresponding down migration so rollback doesn’t drop unrelated core columns (and doesn’t lose kubeconfig data for older schemas).
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_schema = current_schema() AND table_name='clusters' AND column_name='kubeconfig') THEN | |
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_schema = current_schema() AND table_name='clusters' AND column_name='kubeconfig') THEN | |
| -- Ensure the new kubeconfig_encrypted column exists so data can be migrated | |
| IF NOT EXISTS ( | |
| SELECT 1 | |
| FROM information_schema.columns | |
| WHERE table_schema = current_schema() | |
| AND table_name = 'clusters' | |
| AND column_name = 'kubeconfig_encrypted' | |
| ) THEN | |
| ALTER TABLE clusters ADD COLUMN kubeconfig_encrypted TEXT; | |
| END IF; | |
| -- Migrate any existing kubeconfig data into kubeconfig_encrypted, preserving existing encrypted values | |
| UPDATE clusters | |
| SET kubeconfig_encrypted = kubeconfig | |
| WHERE kubeconfig_encrypted IS NULL; | |
| -- Drop the legacy kubeconfig column after data migration |
Pull request overview
Copilot reviewed 32 out of 32 changed files in this pull request and generated 7 comments.
Comments suppressed due to low confidence (1)
internal/core/services/instance_test.go:72
`setupDB(t)` already creates a fresh per-test schema, so calling `cleanDB(t, db)` immediately afterwards is redundant work and adds time to every instance-service test. Consider removing this call (or only using it in tests that intentionally reuse a shared schema).
db := setupDB(t)
cleanDB(t, db)
ctx := setupTestUser(t, db)
| BEGIN | ||
| IF NOT EXISTS (SELECT 1 FROM information_schema.table_constraints WHERE constraint_name = 'objects_bucket_key_key') THEN | ||
| ALTER TABLE objects ADD CONSTRAINT objects_bucket_key_key UNIQUE (bucket, key); | ||
| ALTER TABLE object_versions DROP TABLE IF EXISTS object_versions; |
ALTER TABLE object_versions DROP TABLE IF EXISTS object_versions; is invalid SQL and will fail rollback. This should be a DROP TABLE IF EXISTS object_versions statement (and likely should also drop any indexes/constraints created in the up migration).
| ALTER TABLE object_versions DROP TABLE IF EXISTS object_versions; | |
| DROP TABLE IF EXISTS object_versions; |
| -- Add methods column to gateway_routes | ||
| ALTER TABLE gateway_routes ADD COLUMN IF NOT EXISTS methods TEXT[] DEFAULT '{}'; | ||
| -- +goose Up | ||
| ALTER TABLE gateway_routes ADD COLUMN IF NOT EXISTS methods TEXT[] DEFAULT '{GET,POST,PUT,DELETE,PATCH,OPTIONS}'; |
This migration no longer drops the existing UNIQUE(path_prefix) constraint created in 022_create_gateway.up.sql, so the DB will still prevent multiple routes sharing a path even with different methods. If the intent is to support method-specific routes on the same prefix, keep the DROP CONSTRAINT IF EXISTS gateway_routes_path_prefix_key (or equivalent) before adding the composite unique constraint.
| ALTER TABLE gateway_routes ADD COLUMN IF NOT EXISTS methods TEXT[] DEFAULT '{GET,POST,PUT,DELETE,PATCH,OPTIONS}'; | |
| ALTER TABLE gateway_routes ADD COLUMN IF NOT EXISTS methods TEXT[] DEFAULT '{GET,POST,PUT,DELETE,PATCH,OPTIONS}'; | |
| ALTER TABLE gateway_routes DROP CONSTRAINT IF EXISTS gateway_routes_path_prefix_key; |
| require.NoError(t, err, "Timed out waiting for route: "+url) | ||
| require.True(t, resp.StatusCode >= 200 && resp.StatusCode < 300, "Route returned non-success status: %d for URL: %s", resp.StatusCode, url) | ||
| return resp |
require.True is called with a printf-style format string and arguments, but require.True doesn't format messages. Use require.Truef (or pre-format the message) so failures show the actual status code and URL.
| req, _ := http.NewRequest("GET", fmt.Sprintf("%s/gw/status-%d/abc", testutil.TestBaseURL, ts), nil) | ||
| resp, err := client.Do(req) | ||
| require.NoError(t, err) | ||
| assert.Equal(t, http.StatusNotFound, resp.StatusCode) | ||
| _ = resp.Body.Close() |
The "should NOT match" request is hitting /gw/status-.../abc, but this test created the regex route under /gw-status-.... As written, the 404 can occur even if the regex route is broken, because the prefix doesn't match at all. Use the same /gw-status-.../abc path to validate the numeric-regex mismatch behavior.
| // It returns both the connection pool and the unique schema name created for the test. | ||
| func SetupDB(t *testing.T) (*pgxpool.Pool, string) { | ||
| t.Helper() |
SetupDB now returns (*pgxpool.Pool, string), but at least one integration test still calls it with a single assignment (internal/repositories/postgres/database_repo_test.go:18). This will fail to compile; please update remaining callers (e.g., dbPool, _ := SetupDB(t) or use the schema return value).
| err = db.Ping(ctx) | ||
| if err != nil { | ||
| t.Skipf("Skipping integration test: database not available: %v", err) | ||
| } |
A schema is created before db.Ping(), but if Ping fails the test calls t.Skipf before registering t.Cleanup, so the newly-created schema can be leaked in the target database. Consider registering cleanup immediately after CREATE SCHEMA (or explicitly dropping the schema before skipping).
| -- +goose Up | ||
| DO $$ | ||
| BEGIN | ||
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name='clusters' AND column_name='ssh_key') THEN | ||
| UPDATE clusters SET ssh_private_key_encrypted = ssh_key WHERE ssh_private_key_encrypted IS NULL; | ||
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_schema = current_schema() AND table_name='clusters' AND column_name='ssh_key') THEN | ||
| ALTER TABLE clusters RENAME COLUMN ssh_key TO ssh_key_id; | ||
| END IF; | ||
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name='clusters' AND column_name='kubeconfig') THEN | ||
| UPDATE clusters SET kubeconfig_encrypted = kubeconfig WHERE kubeconfig_encrypted IS NULL; | ||
|
|
||
| IF EXISTS (SELECT 1 FROM information_schema.columns WHERE table_schema = current_schema() AND table_name='clusters' AND column_name='kubeconfig') THEN | ||
| ALTER TABLE clusters DROP COLUMN kubeconfig; | ||
| END IF; | ||
| END $$; |
081_fix_clusters_table.up.sql now only renames/drops legacy columns, but the corresponding down migration (081_fix_clusters_table.down.sql) still drops core cluster columns (pod/service CIDRs, encrypted fields, etc.) that are created in the base 020_create_clusters.up.sql. This makes rollbacks destructive; the down migration should be updated to reverse only the rename/drop performed here (or the up migration should keep the original column-add behavior).
…dd_object_versioning.down.sql
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 17 comments.
Comments suppressed due to low confidence (1)
internal/repositories/postgres/migrator.go:78
`RunMigrations` logs and continues on any SQL execution error, but still returns `nil`. This can silently leave the DB partially migrated (including in production startup via internal/api/setup), making subsequent failures harder to diagnose. Consider returning an error for unexpected failures and only ignoring specific idempotency-related Postgres error codes (e.g., already-exists), using `pgconn.PgError` to check `Code`.
_, err = executor.Exec(ctx, sql)
if err != nil {
// Log but don't fail, as tables might already exist
// Ideally we should check specific error codes
logger.Warn("migration result", "migration", entry.Name(), "error", err)
} else {
logger.Info("applied migration", "migration", entry.Name())
}
| db, _ := SetupDB(t) | ||
| defer db.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| db, _ := SetupDB(t) | ||
| defer db.Close() | ||
| repo := NewSecretRepository(db) |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| func TestRBACRepository_Integration(t *testing.T) { | ||
| db := SetupDB(t) | ||
| db, _ := SetupDB(t) | ||
| defer db.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| defer db.Close() |
| func TestDatabaseRepository_Integration(t *testing.T) { | ||
| dbPool := SetupDB(t) | ||
| dbPool, _ := SetupDB(t) | ||
| defer dbPool.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer dbPool.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| defer dbPool.Close() |
| req, _ := http.NewRequest("GET", url, nil) | ||
| req.Header.Set(testutil.TestHeaderAPIKey, token) | ||
| resp, err := client.Do(req) | ||
| if err == nil && resp.StatusCode == http.StatusOK { | ||
| var res struct { | ||
| URL string `json:"url"` | ||
| } | ||
| _ = json.NewDecoder(resp.Body).Decode(&res) | ||
| resp.Body.Close() | ||
| if finalURL = res.URL; finalURL != "" && (finalURL == httpbinAnything+"/specific" || finalURL == httpbinAnything+"/general") { | ||
| if finalURL == httpbinAnything+"/specific" { | ||
| break | ||
| } | ||
| } | ||
| } | ||
| time.Sleep(1 * time.Second) |
In the retry loop, when client.Do succeeds but returns a non-200 status, the response body is never closed. That can leak connections and make later retries hang. Ensure resp.Body.Close() is called on all non-success paths (and also handle the http.NewRequest error).
| // Create base connection to initialize schema | ||
| baseDB, err := pgxpool.New(ctx, dbURL) | ||
| require.NoError(t, err) | ||
| defer baseDB.Close() | ||
|
|
||
| _, err = baseDB.Exec(ctx, fmt.Sprintf("CREATE SCHEMA %s", schema)) | ||
| require.NoError(t, err) | ||
|
|
||
| // Configure pool to use the schema for ALL connections | ||
| // We include 'public' for extensions like uuid-ossp | ||
| config, err := pgxpool.ParseConfig(dbURL) | ||
| require.NoError(t, err) | ||
|
|
||
| if config.ConnConfig.RuntimeParams == nil { | ||
| config.ConnConfig.RuntimeParams = make(map[string]string) | ||
| } | ||
| config.ConnConfig.RuntimeParams["search_path"] = fmt.Sprintf("%s, public", schema) | ||
|
|
||
| db, err := pgxpool.NewWithConfig(ctx, config) | ||
| require.NoError(t, err) | ||
|
|
||
| err = db.Ping(ctx) | ||
| if err != nil { | ||
| t.Skipf("Skipping integration test: database not available: %v", err) | ||
| } |
SetupDB now executes CREATE SCHEMA ... (and require.NoError) before the Ping/skip check. If DATABASE_URL points to an unavailable DB, tests will fail instead of skipping as intended. Suggest doing a connectivity check first (e.g., baseDB.Ping/Acquire) and only then creating the schema, or converting the schema creation errors into t.Skipf for the “DB not available” path.
| _ = repo.Create(ctx, secret) | ||
|
|
||
| _, err := svc.GetSecret(ctx, secret.ID) |
The result of repo.Create is ignored here. If the insert fails (schema mismatch, constraints, etc.), the test could still pass/fail later for the wrong reason. Please assert require.NoError(t, repo.Create(...)) before proceeding.
| _ = repo.Create(ctx, secret) | |
| _, err := svc.GetSecret(ctx, secret.ID) | |
| err := repo.Create(ctx, secret) | |
| require.NoError(t, err) | |
| _, err = svc.GetSecret(ctx, secret.ID) |
| func TestInstanceRepositoryIntegration(t *testing.T) { | ||
| db := SetupDB(t) | ||
| db, _ := SetupDB(t) | ||
| defer db.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| defer db.Close() |
| func TestIdentityRepository_Integration(t *testing.T) { | ||
| db := SetupDB(t) | ||
| db, _ := SetupDB(t) | ||
| defer db.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| defer db.Close() |
| func TestGlobalLBRepositoryIntegration(t *testing.T) { | ||
| db := SetupDB(t) | ||
| db, _ := SetupDB(t) | ||
| defer db.Close() |
SetupDB now registers a t.Cleanup that calls db.Close(), so this defer db.Close() is redundant and can lead to double-closing the pool. Prefer removing the defer (or remove the close from SetupDB’s cleanup and let callers own it), but avoid doing both.
| defer db.Close() |
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 5 comments.
| t.Cleanup(func() { | ||
| db.Close() | ||
| // Clean up the schema | ||
| cleanupDB, _ := pgxpool.New(ctx, dbURL) | ||
| if cleanupDB != nil { | ||
| _, _ = cleanupDB.Exec(ctx, fmt.Sprintf("DROP SCHEMA %s CASCADE", schema)) | ||
| cleanupDB.Close() | ||
| } | ||
| }) |
SetupDB registers a t.Cleanup that calls db.Close(), but most integration tests still defer db.Close() after calling SetupDB. That double-close can cause flakiness depending on pgxpool.Pool's close semantics. Either remove the pool close from this cleanup (and let callers own it) or update callers to stop deferring Close() and rely on t.Cleanup only.
| var res struct { | ||
| URL string `json:"url"` | ||
| } | ||
| _ = json.NewDecoder(resp.Body).Decode(&res) | ||
| resp.Body.Close() | ||
| if finalURL = res.URL; finalURL != "" && (finalURL == httpbinAnything+"/specific" || finalURL == httpbinAnything+"/general") { |
In VerifyPriorityMatching, JSON decode errors are ignored (_ = json.NewDecoder(...).Decode(...)). If decoding fails (partial response, HTML error page, etc.), finalURL stays empty and the test failure becomes harder to diagnose. Consider asserting require.NoError on decode and always closing the body via defer once you know resp != nil.
| urlPost := testutil.TestBaseURL + "/gw" + pattern + "/post" | ||
| reqPost, _ := http.NewRequest("POST", urlPost, nil) | ||
| reqPost.Header.Set(testutil.TestHeaderAPIKey, token) | ||
| respPost, err := client.Do(reqPost) | ||
|
|
||
| // Use a simple retry for POST since waitForRoute is GET only | ||
| var respPost *http.Response | ||
| for i := 0; i < 5; i++ { | ||
| respPost, err = client.Do(reqPost) | ||
| if err == nil && respPost.StatusCode == http.StatusOK { |
The POST retry loop reuses the same *http.Request across multiple client.Do calls. Reusing a request can be problematic once the transport mutates it (and it will break immediately if a body is added later). Prefer creating a new request inside the retry loop (mirroring waitForRoute).
| var pgErr *pgconn.PgError | ||
| if errors.As(err, &pgErr) { | ||
| // Specific error codes we can safely ignore during idempotent migrations: | ||
| // 42P07 - duplicate_table | ||
| // 42701 - duplicate_column | ||
| // 42P05 - duplicate_prepared_statement | ||
| // 23505 - unique_violation | ||
| // 42P16 - invalid_table_definition (duplicate constraint) | ||
| if pgErr.Code == "42P07" || pgErr.Code == "42701" || pgErr.Code == "42P05" || pgErr.Code == "23505" || pgErr.Code == "42P16" { | ||
| logger.Debug("ignoring idempotent migration error", "migration", entry.Name(), "code", pgErr.Code, "error", pgErr.Message) | ||
| continue | ||
| } | ||
| } | ||
| return fmt.Errorf("failed to execute migration %s: %w", entry.Name(), err) |
RunMigrations currently treats Postgres error code 23505 (unique_violation) as an ignorable "idempotent" error. This is unsafe: if a migration adds a UNIQUE constraint/index and it fails due to duplicates, ignoring the error will silently leave the DB without the intended constraint. Consider removing 23505 from the ignore list (or only ignoring errors that are guaranteed to be harmless, like duplicate table/column/constraint).
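One way to narrow the allow-list is a small classifier over SQLSTATE codes. In the real code the code string would come from `pgconn.PgError.Code` via `errors.As`; this standalone sketch uses plain strings, and the exact set of codes to keep is a judgment call:

```go
package main

import "fmt"

// ignorableMigrationCode keeps only "already exists" classes as idempotent
// no-ops. unique_violation (23505) is deliberately NOT ignorable: it means a
// new UNIQUE constraint could not actually be applied to existing data.
func ignorableMigrationCode(code string) bool {
	switch code {
	case "42P07", // duplicate_table
		"42701", // duplicate_column
		"42710": // duplicate_object (e.g., constraint already exists)
		return true
	}
	return false
}

func main() {
	fmt.Println(ignorableMigrationCode("42P07")) // prints: true
	fmt.Println(ignorableMigrationCode("23505")) // prints: false
}
```

With this shape, `RunMigrations` can `continue` on ignorable codes and return a wrapped error for everything else, so a genuinely failed migration stops startup instead of being logged and forgotten.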
| backend := os.Getenv("COMPUTE_BACKEND") | ||
| if backend == "libvirt" { | ||
| uri := os.Getenv("LIBVIRT_URI") | ||
| if uri == "" { | ||
| uri = "qemu:///system" | ||
| } | ||
| compute, err = libvirt.NewLibvirtAdapter(slog.Default(), uri) | ||
| if err != nil { | ||
| t.Logf("Warning: failed to initialize libvirt adapter: %v. Falling back to noop for logic testing.", err) | ||
| compute = noop.NewNoopComputeBackend() | ||
| } | ||
| } else { | ||
| compute, err = docker.NewDockerAdapter(slog.Default()) | ||
| if err != nil { | ||
| t.Logf("Warning: failed to initialize docker adapter: %v. Falling back to noop for logic testing.", err) | ||
| compute = noop.NewNoopComputeBackend() | ||
| } |
In setupInstanceServiceTest, if the chosen compute backend (docker/libvirt) fails to initialize, the test falls back to noop and continues. That can cause CI to report green while the real backend is broken/unavailable, reducing the value of these integration tests. Consider t.Skip (or require.NoError) when the requested backend can't be initialized, and reserve noop only for explicitly unit/logic-focused tests.
Consolidated the migration of SecretService, EventService, and AuditService to real-dependency integration tests, and removed the now-redundant mock-based unit tests.
Summary by CodeRabbit
Tests
Chores