feat: Improve test coverage to 75.2% with libvirt adapter refactor#22
feat: Improve test coverage to 75.2% with libvirt adapter refactor#22
Conversation
|
Note Other AI code review bot(s) detectedCodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review. 📝 WalkthroughWalkthroughBroad test additions and refactors across the codebase, SecurityGroup/SecurityRule validation added, Libvirt interactions refactored to a LibvirtClient abstraction with real and mock implementations, LB worker made testable via injectable dialer, and RBAC response handling standardized in SDK/handlers. Changes
Sequence Diagram(s)sequenceDiagram
participant Adapter as LibvirtAdapter
participant Client as LibvirtClient (Real/Mock)
participant Libvirt as libvirt binding
Adapter->>Client: DomainLookupByName / DomainCreate / NetworkGetDhcpLeases / ...
Client->>Libvirt: invoke underlying libvirt call
Libvirt-->>Client: return result / error
Client-->>Adapter: return result / error
Estimated code review effort🎯 4 (Complex) | ⏱️ ~75 minutes Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
🧹 Recent nitpick comments
📜 Recent review detailsConfiguration used: defaults Review profile: CHILL Plan: Pro 📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (1)
🧰 Additional context used📓 Path-based instructions (5)internal/core/services/*.go📄 CodeRabbit inference engine (GEMINI.md)
Files:
**/*.go📄 CodeRabbit inference engine (GEMINI.md)
Files:
**/*_test.go📄 CodeRabbit inference engine (GEMINI.md)
Files:
internal/core/services/**/*.go📄 CodeRabbit inference engine (GEMINI.md)
Files:
internal/core/services/**/*_test.go📄 CodeRabbit inference engine (GEMINI.md)
Files:
🧠 Learnings (4)📚 Learning: 2026-01-14T11:33:57.684ZApplied to files:
📚 Learning: 2026-01-14T11:33:57.684ZApplied to files:
📚 Learning: 2026-01-14T11:33:57.684ZApplied to files:
📚 Learning: 2026-01-14T11:33:57.684ZApplied to files:
🧬 Code graph analysis (1)internal/core/services/queue_test.go (4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
🔇 Additional comments (7)
✏️ Tip: You can disable this entire section by setting Comment |
There was a problem hiding this comment.
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark 'Go Benchmarks'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.50.
| Benchmark suite | Current: 52ed87e | Previous: 494d540 | Ratio |
|---|---|---|---|
BenchmarkFunctionServiceInvoke |
55720 ns/op 1120 B/op 20 allocs/op |
28412 ns/op 1120 B/op 20 allocs/op |
1.96 |
BenchmarkFunctionServiceInvoke - ns/op |
55720 ns/op |
28412 ns/op |
1.96 |
BenchmarkCacheServiceList |
5.956 ns/op 0 B/op 0 allocs/op |
3.809 ns/op 0 B/op 0 allocs/op |
1.56 |
BenchmarkCacheServiceList - ns/op |
5.956 ns/op |
3.809 ns/op |
1.56 |
This comment was automatically generated by workflow using github-action-benchmark.
There was a problem hiding this comment.
Pull request overview
This PR improves test coverage from 74.3% to 75.2% through significant refactoring for testability and comprehensive test additions. The main changes introduce dependency injection patterns for the libvirt adapter (via LibvirtClient interface) and LB worker (via PortDialer interface), enabling effective mocking. The PR also fixes RBAC handler response formatting issues and adds extensive test coverage across services, handlers, domain validation, platform components, and CLI commands.
Changes:
- Refactored libvirt adapter to use
LibvirtClientinterface withRealLibvirtClientandMockLibvirtClientimplementations for dependency injection - Injected
PortDialerinterface into LBWorker for testable network dialing - Fixed RBAC handler to use
httputil.Successwrapper and updated SDK tests to expect wrapped responses
Reviewed changes
Copilot reviewed 39 out of 39 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| pkg/sdk/rbac.go, rbac_test.go | Fixed response wrapping with Response generic type |
| internal/repositories/libvirt/*.go | Introduced LibvirtClient interface, RealLibvirtClient, and MockLibvirtClient for dependency injection |
| internal/core/services/lb_worker.go | Added PortDialer interface injection for testable network dialing |
| internal/core/services/*_test.go | Added comprehensive service test coverage (volume, stack, secret, snapshot, notify, queue, lb, instance, workers) |
| internal/handlers/*_test.go | Added handler tests for console URL and error cases |
| internal/core/domain/security_group.go | Added Validate methods for SecurityGroup and SecurityRule |
| internal/platform/*_test.go | Added platform component tests (Redis, metrics) |
| internal/api/setup/*_test.go | Added infrastructure and router tests |
| cmd/cloud/*_test.go, cmd/api/main_test.go, cmd/doccheck/main_test.go | Added comprehensive CLI and command tests |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| import ( | ||
| "context" | ||
| "errors" | ||
| "io/ioutil" | ||
| "log/slog" | ||
| "testing" | ||
| "time" | ||
|
|
||
| "github.com/digitalocean/go-libvirt" | ||
| "github.com/stretchr/testify/assert" | ||
| "github.com/stretchr/testify/mock" | ||
| ) |
There was a problem hiding this comment.
The package io/ioutil is deprecated since Go 1.16. Use io or os package functions instead. Replace ioutil.Discard with io.Discard.
| func (r *RealLibvirtClient) NetworkGetDhcpLeases(net libvirt.Network, mac libvirt.OptString, needResults uint32, flags uint32) ([]libvirt.NetworkDhcpLease, uint32, error) { | ||
| // The library seems to use different types in different versions or bindings? | ||
| // Based on error: cannot use needResults (variable of type uint32) as int32 value | ||
| // We cast it. | ||
| return r.conn.NetworkGetDhcpLeases(net, mac, int32(needResults), flags) | ||
| } |
There was a problem hiding this comment.
The comment mentions version differences but the implementation always casts to int32. This comment is misleading - either the library consistently expects int32 (in which case the interface parameter type should be int32), or this is a workaround for specific versions (which should be documented more clearly).
| func TestLBWorkerProcessHealthChecks(t *testing.T) { | ||
| lbRepo := new(mockLBRepo) | ||
| instRepo := new(mockInstanceRepo) | ||
| worker := NewLBWorker(lbRepo, instRepo, nil) | ||
| instRepo := new(mockInstRepo) | ||
| proxy := new(MockLBProxyAdapter) | ||
| worker := NewLBWorker(lbRepo, instRepo, proxy) | ||
|
|
||
| ctx := context.Background() | ||
| lbID := uuid.New() | ||
| userID := uuid.New() | ||
| lb := &domain.LoadBalancer{ID: lbID, UserID: userID, Status: domain.LBStatusActive} | ||
| instID := uuid.New() | ||
| targets := []*domain.LBTarget{{InstanceID: instID, Port: 80, Health: "unknown"}} | ||
| inst := &domain.Instance{ID: instID, Ports: "49151:80"} | ||
| lb := &domain.LoadBalancer{ | ||
| ID: lbID, | ||
| UserID: uuid.New(), | ||
| Status: domain.LBStatusActive, | ||
| } | ||
|
|
||
| target := &domain.LBTarget{ | ||
| InstanceID: instID, | ||
| Port: 80, | ||
| Health: "unhealthy", | ||
| } | ||
|
|
||
| inst := &domain.Instance{ | ||
| ID: instID, | ||
| Ports: "80:8080", | ||
| } | ||
|
|
||
| lbRepo.On("ListAll", ctx).Return([]*domain.LoadBalancer{lb}, nil) | ||
| lbRepo.On("ListTargets", mock.Anything, lbID).Return(targets, nil) | ||
| lbRepo.On("ListTargets", mock.Anything, lbID).Return([]*domain.LBTarget{target}, nil) | ||
| instRepo.On("GetByID", mock.Anything, instID).Return(inst, nil) | ||
|
|
||
| lbRepo.On("UpdateTargetHealth", mock.Anything, lbID, instID, mock.Anything).Return(nil) | ||
| // Since we are mocking, we cannot easily mock net.Dial from within isPortOpen since it's hardcoded. | ||
| // But we can test that it TRIES to update if health status changes. | ||
| // However, isPortOpen will likely fail (return false) since 8080 is not open locally. | ||
| // So status stays "unhealthy". UpdateTargetHealth won't be called. | ||
|
|
||
| // If we want to test "healthy" transition we need DI for dialer or logic separation. | ||
| // For now, let's test that it runs without error. | ||
|
|
||
| worker.processHealthChecks(ctx) | ||
| lbRepo.AssertExpectations(t) | ||
|
|
||
| // If we want to verify healthy check, we can maybe simulate it, but tough without refactoring. | ||
| } |
There was a problem hiding this comment.
This test doesn't verify any behavior - it just calls the function and lets it fail silently. Since the PortDialer interface was already injected in lb_worker.go, this test should mock the dialer to properly test both healthy and unhealthy scenarios.
There was a problem hiding this comment.
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/handlers/rbac_handler.go (2)
85-104: Inconsistent response format: name-based lookup usesc.JSONwhile ID-based useshttputil.Success.The
GetRolehandler has two code paths with different response formats:
- Line 94: ID-based lookup uses
httputil.Success(wrapped response with metadata)- Line 103: Name-based lookup uses
c.JSON(raw response without wrapper)This causes clients to receive different response structures for the same endpoint depending on input format, breaking API consistency.
🐛 Proposed fix
role, err := h.svc.GetRoleByName(c.Request.Context(), idParam) if err != nil { httputil.Error(c, err) return } - c.JSON(http.StatusOK, role) + httputil.Success(c, http.StatusOK, role) }
136-142:UpdateRoleshould also usehttputil.Successfor consistency.Per coding guidelines, handlers should use
httputil.Success(c, http.StatusOK, data)for successful responses. This handler still usesc.JSON.🐛 Proposed fix
if err := h.svc.UpdateRole(c.Request.Context(), role); err != nil { httputil.Error(c, err) return } - c.JSON(http.StatusOK, role) + httputil.Success(c, http.StatusOK, role) }
🤖 Fix all issues with AI agents
In `@cmd/cloud/cli_commands_test.go`:
- Around line 368-373: Capture and restore the original flag and global state
instead of discarding Set errors and hard-coding resets: before calling
sgListCmd.Flags().Set(flagVPCID, "vpc-1") read the current value (e.g., val, err
:= sgListCmd.Flags().GetString(flagVPCID) or use Lookup) and store it and the
current outputJSON value; on cleanup, call Set with the original value and reset
outputJSON to the saved original, and check/set errors (fail the test
immediately with t.Fatalf or t.Fatal if Set returns an error) rather than
assigning to the blank identifier; apply the same pattern for every test that
mutates sgListCmd.Flags().Set(flagVPCID) and outputJSON in this file (including
the other affected blocks).
- Around line 65-92: The mock handlers (e.g., handleSecurityGroupsMock and the
other handlers mentioned: handleVPCsMock, handleCachesMock, handleQueuesMock,
handleNotifyMock, handleRBACMock, handleDatabasesMock, handleSecretsMock)
currently discard json.Encoder errors with `_ = json.NewEncoder(w).Encode(...)`;
change each to capture the error and handle it (e.g., if err :=
json.NewEncoder(w).Encode(resp); err != nil { http.Error(w, "failed to encode
response", http.StatusInternalServerError); return true } or similar) so
encoding failures return a non-2xx response and do not mask malformed payloads;
apply this pattern consistently to every place Encoding is used in the test
handlers.
In `@cmd/cloud/instance_test.go`:
- Around line 39-46: The test currently ignores the error from
json.NewEncoder(w).Encode(payload) and only restores outputJSON, leaving apiURL
and apiKey mutated for other tests; change the handler to capture the encoder
error (err := json.NewEncoder(w).Encode(payload)) and fail the test if non-nil
(e.g., t.Fatalf("encode payload: %v", err)), and at the top of the test capture
prior values (oldAPIURL, oldAPIKey, oldOutputJSON := apiURL, apiKey, outputJSON)
then defer a restore of those globals (apiURL, apiKey, outputJSON = oldAPIURL,
oldAPIKey, oldOutputJSON) so the test cleans up all modified globals while
keeping the existing server.Close defer.
- Around line 79-98: The test currently discards errors and mutates globals
without restoring them; update the test to check and fail on any returned errors
from json.NewEncoder(w).Encode(...) and from launchCmd.Flags().Set(...) calls
(use t.Fatalf on failure), and capture and restore global state (apiURL, apiKey)
and flag values after the test (save originals before setting and defer their
restoration), ensuring the test server is closed via defer server.Close()
(already present) and avoid using the blank identifier for error returns;
reference launchCmd, launchCmd.Flags().Set, json.NewEncoder(w).Encode, apiURL,
and apiKey when locating the changes.
In `@cmd/doccheck/main_test.go`:
- Around line 92-96: The test TestRelPathFallback uses a hardcoded Unix path
"/tmp/file.go"; update it to create a temp directory via t.TempDir(), then build
the file path with filepath.Join(tempDir, "file.go") and pass that to relPath so
the test is portable on Windows; ensure you import "path/filepath" if not
already and keep the same assertion behavior.
In `@internal/core/services/autoscaling_worker_test.go`:
- Around line 334-343: The inline comment for the TargetValue on the
domain.ScalingPolicy instance (variable "policy" in autoscaling_worker_test.go)
is wrong: it says "Scale in if < 20.0" but TargetValue is 30.0 and the test uses
a mocked CPU average of 15.0 expecting a scale-in; update the comment to
reference the actual threshold (e.g., "Scale in if < 30.0") or remove the
misleading note so the comment matches the TargetValue field on the
domain.ScalingPolicy.
In `@internal/core/services/container_worker_test.go`:
- Around line 160-163: The test currently calls worker.Reconcile(ctx) and
asserts expectations only on instSvc; add repo.AssertExpectations(t) after
instSvc.AssertExpectations(t) so the repository mock (repo) is also
verified—this ensures calls like repo.UpdateDeployment(...) with
DeploymentStatusScaling were executed as expected.
In `@internal/core/services/cron_worker_test.go`:
- Around line 60-68: The test is using a hard-coded
"http://localhost:12345/unreachable" which may be occupied on CI or dev
machines; instead construct a deterministic unreachable URL by creating an
httptest.Server, capturing its URL and immediately closing it (or using port 0 /
an explicit closed listener) and assign that to the CronJob.TargetURL so
requests reliably fail. Update the test where the CronJob is created (variable
job / field TargetURL in the test) to use the closed httptest server URL rather
than the fixed localhost:12345.
- Around line 50-52: Replace the non-deterministic time.Sleep waits with
explicit synchronization: in each test that spawns runJob (or otherwise invokes
the goroutine and currently uses time.Sleep at lines ~50, ~83, ~117), add a
sync.WaitGroup or a channel that the mock callback (the function you pass into
the job runner / runJob) signals when it has been invoked; increment the
WaitGroup (or prepare the channel) before starting the goroutine, call wg.Done()
or send to the channel from inside the mock callback, and then wait for that
signal (with a short timeout and t.Fatal on timeout) instead of sleeping to
ensure deterministic test completion and avoid flakes.
In `@internal/core/services/lb_worker_test.go`:
- Around line 3-11: Fix the port mapping string from "80:8080" to "8080:80" and
inject a mocked PortDialer into the worker to avoid real network calls: create a
mockDialer that embeds mock.Mock, set worker.dialer = mockDialer after
NewWorker(...) is called, stub mockDialer.DialTimeout to return a deterministic
successful result (a dummy net.Conn and nil error) so isPortOpen(...) returns
true, and then assert UpdateTargetHealth on the worker is invoked with the
"healthy" status.
In `@internal/core/services/queue_test.go`:
- Around line 200-218: The tests (e.g., TestQueueService_DeleteMessage) set
expectations on mock objects (repo, eventSvc, auditSvc) but never assert them;
add a t.Cleanup that calls AssertExpectations(t) on each mock (or use
mock.AssertExpectationsForObjects) to ensure repo.AssertExpectations(t),
eventSvc.AssertExpectations(t) and auditSvc.AssertExpectations(t) are executed
after the test so missing or extra mock calls fail the test.
In `@internal/core/services/rbac_cached_test.go`:
- Around line 133-141: The test currently ignores the error returned by
cache.Set which can hide setup failures; update the test around cache.Set(ctx,
key, "1", time.Minute) to capture its returned error and fail the test if
non-nil (e.g., t.Fatalf or require.NoError) so the setup always validates the
Redis/miniredis write before calling svc.HasPermission; keep the rest of the
assertions (allowed/err and mockSvc.AssertNotCalled) unchanged.
In `@internal/handlers/security_group_handler_test.go`:
- Around line 267-317: Add mock expectation assertions to the error-path tests
so the mock service call is verified: at the end of
TestSecurityGroupHandlerAttachError (which sets expectation for
AttachToInstance) and TestSecurityGroupHandlerDetachError (which sets
expectation for DetachFromInstance) call svc.AssertExpectations(t) to ensure the
handlers actually invoked the mocked methods.
In `@internal/repositories/libvirt/adapter_lifecycle_test.go`:
- Around line 21-28: In newTestAdapter, replace the deprecated ioutil.Discard
usage with io.Discard in the logger initialization: update the import to include
"io" instead of "io/ioutil" (or add io alongside existing imports) and change
slog.NewTextHandler(ioutil.Discard, nil) to slog.NewTextHandler(io.Discard, nil)
within the LibvirtAdapter construction so the logger uses io.Discard correctly.
- Around line 3-14: The test file adapter_lifecycle_test.go currently imports
the deprecated package io/ioutil; replace that import with the standard io
package and update any uses of ioutil.Discard (or other ioutil symbols) to use
io.Discard (or the corresponding io replacement) so the import block removes
"io/ioutil" and adds "io", and references in tests (e.g., places that used
ioutil.Discard) are changed to io.Discard.
In `@internal/repositories/libvirt/libvirt_client.go`:
- Around line 9-38: The LibvirtClient interface currently exposes blocking
libvirt calls without context or close semantics; update the interface
(LibvirtClient) so every libvirt operation method (e.g., DomainLookupByName,
DomainDefineXML, DomainCreate, DomainDestroy, DomainUndefine, DomainGetState,
DomainGetXMLDesc, DomainAttachDevice, DomainDetachDevice, DomainMemoryStats,
NetworkLookupByName, NetworkDefineXML, NetworkCreate, NetworkDestroy,
NetworkUndefine, NetworkGetDhcpLeases, StoragePoolLookupByName,
StoragePoolRefresh, StorageVolLookupByName, StorageVolCreateXML,
StorageVolDelete, StorageVolGetPath and Connect) accepts context.Context as the
first parameter to enable cancellation, and add a Close (or Disconnect) method
to the interface to allow releasing the underlying libvirt connection; then
update RealLibvirtClient (and all callers/tests) to match the new signatures and
implement Close/Disconnect to properly close the libvirt connection and handle
context cancellation for each operation.
In `@internal/repositories/libvirt/real_client.go`:
- Around line 18-56: The Libvirt client methods (e.g., DomainLookupByName,
DomainDefineXML, DomainCreate, DomainDestroy, DomainUndefine, DomainGetState,
DomainGetXMLDesc, DomainAttachDevice, DomainDetachDevice, DomainMemoryStats)
must accept context.Context as the first parameter to support
cancellation/timeouts: update the LibvirtClient interface signatures and the
RealLibvirtClient receiver methods to add ctx context.Context as the first param
and propagate it where possible; then update the LibvirtAdapter caller sites to
enforce timeouts/cancellation (for example, wrap calls to the RealLibvirtClient
in a goroutine and select on ctx.Done() or apply socket-level deadlines) so
operations respect context cancellation before delegating to the go-libvirt
calls. Ensure all call sites and tests are updated to pass the context through.
♻️ Duplicate comments (1)
cmd/doccheck/main_test.go (1)
59-90: Covered by the table‑driven test refactor suggestion above.
🧹 Nitpick comments (29)
internal/repositories/libvirt/templates_test.go (1)
5-5: Preferassert.Containsoverassert.True(strings.Contains(...)).Using
assert.Containsdirectly is more idiomatic with testify and provides better failure messages (shows expected vs actual content rather than just "expected true, got false"). This would also eliminate the need for thestringsimport.♻️ Suggested refactor
import ( "fmt" - "strings" "testing" "github.com/poyrazk/thecloud/pkg/testutil" "github.com/stretchr/testify/assert" )- assert.True(t, strings.Contains(xml, "<network>")) + assert.Contains(t, xml, "<network>")Also applies to: 56-56
internal/core/services/snapshot_test.go (1)
131-160: Convert the new tests to table‑driven format.Both new tests are single‑case and not table‑driven. The repo guidelines require table‑driven tests in
*_test.gofiles. As per coding guidelines, please refactor these into table‑driven tests for consistency and easier extension. Based on coding guidelines, use table-driven tests.♻️ Example refactor (table‑driven structure)
func TestListSnapshots(t *testing.T) { - repo, _, _, _, _, svc := setupSnapshotServiceTest(t) - defer repo.AssertExpectations(t) - - userID := uuid.New() - ctx := appcontext.WithUserID(context.Background(), userID) - - items := []*domain.Snapshot{{ID: uuid.New(), UserID: userID}} - repo.On("ListByUserID", ctx, userID).Return(items, nil).Once() - - res, err := svc.ListSnapshots(ctx) - assert.NoError(t, err) - assert.Len(t, res, 1) + cases := []struct { + name string + userID uuid.UUID + items []*domain.Snapshot + wantLen int + }{ + { + name: "single snapshot", + userID: uuid.New(), + items: []*domain.Snapshot{{ID: uuid.New()}}, + wantLen: 1, + }, + } + + for _, tc := range cases { + t.Run(tc.name, func(t *testing.T) { + repo, _, _, _, _, svc := setupSnapshotServiceTest(t) + defer repo.AssertExpectations(t) + + for _, it := range tc.items { + it.UserID = tc.userID + } + ctx := appcontext.WithUserID(context.Background(), tc.userID) + repo.On("ListByUserID", ctx, tc.userID).Return(tc.items, nil).Once() + + res, err := svc.ListSnapshots(ctx) + assert.NoError(t, err) + assert.Len(t, res, tc.wantLen) + }) + } } func TestGetSnapshot(t *testing.T) { - repo, _, _, _, _, svc := setupSnapshotServiceTest(t) - defer repo.AssertExpectations(t) - - userID := uuid.New() - ctx := appcontext.WithUserID(context.Background(), userID) - - snapID := uuid.New() - snap := &domain.Snapshot{ID: snapID, UserID: userID} - repo.On("GetByID", ctx, snapID).Return(snap, nil).Once() - - res, err := svc.GetSnapshot(ctx, snapID) - assert.NoError(t, err) - assert.Equal(t, snapID, res.ID) + cases := []struct { + name string + userID uuid.UUID + snapID uuid.UUID + }{ + { + name: "snapshot exists", + userID: uuid.New(), + snapID: uuid.New(), + }, + } + + for _, tc := range cases { + t.Run(tc.name, func(t *testing.T) { + repo, _, _, _, _, svc := setupSnapshotServiceTest(t) + defer repo.AssertExpectations(t) + + ctx := appcontext.WithUserID(context.Background(), tc.userID) + snap := &domain.Snapshot{ID: tc.snapID, UserID: tc.userID} + repo.On("GetByID", ctx, tc.snapID).Return(snap, nil).Once() + + res, err := svc.GetSnapshot(ctx, tc.snapID) + assert.NoError(t, err) + assert.Equal(t, tc.snapID, res.ID) + }) + } }internal/core/services/container_worker_test.go (1)
165-178: Test provides minimal behavioral coverage.This test only verifies that
Runexits gracefully on context cancellation without deadlocking. The mocks are instantiated but have no expectations set or verified.Consider adding a test that allows at least one reconciliation loop iteration before cancellation to verify
Runactually invokesReconcile. Otherwise, this only tests the immediate cancellation path.cmd/cloud/instance_test.go (1)
56-98: Test doesn't verify volume parsing.The test name suggests it validates volume parsing, but it only checks for a success message. Consider verifying the request body to ensure the volume was correctly parsed and sent.
Suggested enhancement
func TestLaunchInstance_VolumeParsing(t *testing.T) { + var receivedBody map[string]interface{} server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if r.URL.Path != "/instances" || r.Method != http.MethodPost { w.WriteHeader(http.StatusNotFound) return } + if err := json.NewDecoder(r.Body).Decode(&receivedBody); err != nil { + t.Errorf("failed to decode request body: %v", err) + } w.Header().Set("Content-Type", "application/json") // ... rest of handler })) // ... setup code ... out := captureStdout(t, func() { launchCmd.Run(launchCmd, nil) }) if !strings.Contains(out, "Instance launched successfully") { t.Fatalf("expected success message, got: %s", out) } + // Verify volume was parsed correctly + if volumes, ok := receivedBody["volumes"].([]interface{}); !ok || len(volumes) == 0 { + t.Error("expected volumes to be parsed and sent in request") + } }cmd/doccheck/main_test.go (1)
9-57: Prefer table‑driven tests for scan scenarios.These two scan tests can be consolidated into a table‑driven test to align with repo testing guidelines and reduce duplication.
♻️ Proposed refactor (table‑driven scan tests)
-func TestScanFindsMissingDocs(t *testing.T) { - root := t.TempDir() - ... -} - -func TestScanIncludesTestsWhenEnabled(t *testing.T) { - root := t.TempDir() - ... -} +func TestScan(t *testing.T) { + type testCase struct { + name string + includeTests bool + writeFiles func(root string) (badTestFile string) + expectTestFile bool + } + + cases := []testCase{ + { + name: "exclude_tests", + includeTests: false, + writeFiles: func(root string) string { + // write bad.go, badtype.go, ignored_test.go + // return ignored_test.go path + // (reuse existing helpers/inline writes from current test) + return filepath.Join(root, "ignored_test.go") + }, + expectTestFile: false, + }, + { + name: "include_tests", + includeTests: true, + writeFiles: func(root string) string { + // write included_test.go + return filepath.Join(root, "included_test.go") + }, + expectTestFile: true, + }, + } + + for _, tc := range cases { + t.Run(tc.name, func(t *testing.T) { + root := t.TempDir() + badTestFile := tc.writeFiles(root) + + findings, err := scan(root, tc.includeTests) + if err != nil { + t.Fatalf("scan: %v", err) + } + if len(findings) == 0 { + t.Fatalf("expected findings, got none") + } + + foundTest := false + for _, f := range findings { + if filepath.Base(f.file) == filepath.Base(badTestFile) { + foundTest = true + break + } + } + if foundTest != tc.expectTestFile { + t.Fatalf("unexpected test-file findings: %v", foundTest) + } + }) + } +}As per coding guidelines and learnings, please use table‑driven tests.
internal/handlers/instance_handler_test.go (1)
361-398: Consider consolidating GetConsole tests into a table-driven case.These two tests are structurally identical; a table-driven test would reduce duplication and keep success/error cases aligned. As per coding guidelines, please use table-driven tests in *_test.go.
internal/core/services/instance_test.go (2)
231-263: Prefer a table-driven test for the container-ID vs instance-ID fallback.These two cases differ only in input/setup; a table-driven test will be cleaner and aligns with the test guidelines. As per coding guidelines, please use table-driven tests in *_test.go.
437-547: Consider table-driven coverage for the provisioning error cases.The three error-path tests share the same shape; a table-driven structure would reduce duplication and keep assertions consistent (the success case can remain separate). As per coding guidelines, please use table-driven tests in *_test.go.
internal/core/services/cron_worker_test.go (1)
17-120: Consider table-driven tests for the three ProcessJobs variants.The success, request-failure, and HTTP error tests share nearly identical scaffolding. A table-driven structure will reduce duplication and align with the testing guidelines. Based on learnings, prefer table-driven tests in
*_test.go.[spec]
If you agree, I can draft a table-driven version with per-case server setup and expected assertions.
[/spec]internal/core/services/loadbalancer_service_test.go (1)
14-29: Adopt table-driven style for LBService tests.These tests are single-case and repeat setup. The guidelines call for table-driven tests in *_test.go files; consider using a
[]struct{ name string; setup func(...) }plust.Runto consolidate cases and reduce duplication across the file.As per coding guidelines, please use table-driven tests.
internal/core/services/queue_test.go (1)
200-218: Prefer table-driven tests for service cases.These tests are single-case; consider consolidating success/error variants into table-driven subtests to align with the test standards (e.g., unauthorized, not found, repo error). As per coding guidelines, use table-driven tests.
Also applies to: 220-237
cmd/cloud/cache_test.go (1)
5-15: Consider using table-driven tests for better maintainability.The current implementation uses separate
ifstatements for each case. Per coding guidelines, table-driven tests are preferred. This pattern also makes it easier to add more edge cases (e.g., GB/TB ranges, boundary values).♻️ Suggested table-driven refactor
func TestFormatBytes(t *testing.T) { - if got := formatBytes(0); got != "0 B" { - t.Fatalf("expected 0 B, got %q", got) - } - if got := formatBytes(1024); got != "1.0 KB" { - t.Fatalf("expected 1.0 KB, got %q", got) - } - if got := formatBytes(1024 * 1024); got != "1.0 MB" { - t.Fatalf("expected 1.0 MB, got %q", got) - } + tests := []struct { + name string + input uint64 + expected string + }{ + {"zero bytes", 0, "0 B"}, + {"one KB", 1024, "1.0 KB"}, + {"one MB", 1024 * 1024, "1.0 MB"}, + {"one GB", 1024 * 1024 * 1024, "1.0 GB"}, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + if got := formatBytes(tt.input); got != tt.expected { + t.Errorf("formatBytes(%d) = %q, want %q", tt.input, got, tt.expected) + } + }) + } }internal/platform/metrics_test.go (1)
5-14: Consider adding assertions or a comment clarifying the test intent.This test verifies that metric collectors don't panic when used, which is valuable. However, without assertions, it's not immediately clear what the test validates. Consider either adding minimal assertions or a comment explaining this is a smoke test for metric registration.
💡 Suggested improvement
+// TestMetrics_CollectorsAreUsable verifies that all metric collectors are properly +// registered and can be used without panicking. func TestMetrics_CollectorsAreUsable(t *testing.T) { HTTPRequestsTotal.WithLabelValues("GET", "/health", "200").Inc() HTTPRequestDuration.WithLabelValues("GET", "/health").Observe(0.1) LBRequestsTotal.WithLabelValues("lb-1").Inc() InstancesTotal.WithLabelValues("running", "docker").Set(1) AutoScalingCurrentInstances.WithLabelValues("group-1").Set(2) AuthAttemptsTotal.WithLabelValues("success").Inc() VolumesTotal.WithLabelValues("available").Set(3) QueueMessagesTotal.WithLabelValues("queue-1", "send").Inc() + // If we reach here without panic, collectors are properly registered }internal/core/services/notify_test.go (2)
238-243: Flaky test: Replacetime.Sleepwith synchronization primitives.Using
time.Sleep(100 * time.Millisecond)to wait for goroutines is unreliable and can cause flaky tests on slower CI environments. Consider using a channel,sync.WaitGroup, or a mock with a done callback to signal completion.♻️ Suggested approach using a channel
// In test setup, create a channel to signal when SendMessage is called done := make(chan struct{}) queueSvc.On("SendMessage", mock.Anything, qID, body).Run(func(args mock.Arguments) { close(done) }).Return(&domain.Message{}, nil).Once() // ... err := svc.Publish(ctx, topicID, body) require.NoError(t, err) // Wait with timeout instead of sleep select { case <-done: // Goroutine completed case <-time.After(1 * time.Second): t.Fatal("timed out waiting for SendMessage") }
277-280: Same flaky test concern applies here.This test also uses
time.Sleepto wait for the webhook delivery goroutine. Apply the same synchronization pattern suggested above.pkg/sdk/rbac.go (1)
62-65: Consider using a path constant for bindings endpoint.The
/rbac/bindingspath is used in bothBindRole(line 59) andListRoleBindings(line 64) but is hardcoded while roles paths are centralized. For consistency, consider addingpathBindingsconstant.♻️ Suggested improvement
const ( pathRoles = "/rbac/roles" pathRolesFmt = "/rbac/roles/%s" + pathBindings = "/rbac/bindings" )pkg/sdk/rbac_test.go (1)
33-35: Consider checkingjson.Encodeerror in tests.While unlikely to fail in tests, ignoring the error from
json.NewEncoder(w).Encode(...)silently swallows potential issues. This pattern is repeated in other test handlers.♻️ Suggested fix
w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(Response[domain.Role]{Data: expectedRole}) + if err := json.NewEncoder(w).Encode(Response[domain.Role]{Data: expectedRole}); err != nil { + t.Errorf("failed to encode response: %v", err) + }cmd/cloud/cli_commands_test.go (1)
363-1046: Consider table‑driven tests to reduce duplication.Many CLI command tests share the same arrange/act/assert shape; a table‑driven structure with shared setup helpers would reduce repetition and simplify adding new cases. Based on learnings, prefer table‑driven tests in *_test.go.
internal/core/services/stack_test.go (1)
88-90: Replace fixed sleeps with deterministic sync for async stack processing.The fixed
time.Sleep(150ms)waits can be flaky on slower CI or under load. Prefer a deterministic signal (channel/WaitGroup or test hook) to wait for background completion.Also applies to: 118-119, 184-185, 300-300
internal/core/domain/security_group.go (1)
76-97: Extract protocol set and port bounds into constants.
validateProtocolallocates a map each call andvalidatePortshardcodes1/65535. Define package‑level constants and a protocol set to avoid magic numbers and repeated allocations. As per coding guidelines, avoid magic numbers.♻️ Suggested refactor
+const ( + minPort = 1 + maxPort = 65535 +) + +var validProtocols = map[string]struct{}{ + "tcp": {}, + "udp": {}, + "icmp": {}, + "all": {}, +} + func (sr *SecurityRule) validateProtocol() error { - validProtocols := map[string]bool{"tcp": true, "udp": true, "icmp": true, "all": true} - if !validProtocols[sr.Protocol] { + if _, ok := validProtocols[sr.Protocol]; !ok { return fmt.Errorf("invalid protocol: %s", sr.Protocol) } return nil } func (sr *SecurityRule) validatePorts() error { if sr.Protocol == "all" || sr.Protocol == "icmp" { return nil } - if sr.PortMin < 1 || sr.PortMin > 65535 { + if sr.PortMin < minPort || sr.PortMin > maxPort { return fmt.Errorf("invalid port_min: %d", sr.PortMin) } - if sr.PortMax < 1 || sr.PortMax > 65535 { + if sr.PortMax < minPort || sr.PortMax > maxPort { return fmt.Errorf("invalid port_max: %d", sr.PortMax) } if sr.PortMin > sr.PortMax { return fmt.Errorf("port_min (%d) cannot be greater than port_max (%d)", sr.PortMin, sr.PortMax) } return nil }internal/core/services/rbac_cached_test.go (1)
84-92: Close the Redis client in test setup to avoid leaks.
redis.Clientstarts background goroutines; it should be closed to prevent test-time leaks. Consider closing viat.Cleanupin the helper.♻️ Proposed fix
func setupCachedRBACTest(t *testing.T) (*mockRBACService, *redis.Client, *miniredis.Miniredis) { t.Helper() mr, err := miniredis.Run() if err != nil { t.Fatalf("miniredis: %v", err) } client := redis.NewClient(&redis.Options{Addr: mr.Addr()}) + t.Cleanup(func() { + _ = client.Close() + }) return new(mockRBACService), client, mr }internal/api/setup/router_test.go (1)
18-109: Consider table-driven tests and testify/mock for the backend stub.The four health checks are great coverage, but they’re repetitive. A table-driven test with a testify/mock-based backend would align with the test guidelines and reduce duplication. As per coding guidelines, use table-driven tests and testify/mock for mocks.
internal/api/setup/infrastructure_test.go (1)
11-78: Convert Init*Backend tests to table-driven form.These cases can be consolidated into a single table-driven test to cut repetition and match Go testing conventions. As per coding guidelines, use table-driven tests.
internal/repositories/libvirt/adapter_unit_test.go (1)
428-451: Silent failure in test cleanup.Line 449 uses a blank identifier for
os.Removewhich silently ignores any error. Per coding guidelines, avoid silent failures.For test cleanup, consider using
t.Cleanupor at minimum logging if removal fails:Suggested fix
// Cleanup the generated ISO file if it exists (though it might not if true succeeded) if isoPath != "" { - _ = os.Remove(isoPath) + t.Cleanup(func() { + if err := os.Remove(isoPath); err != nil && !os.IsNotExist(err) { + t.Logf("failed to cleanup test ISO: %v", err) + } + }) }internal/repositories/libvirt/mock_client_test.go (1)
18-21: Type assertion may panic on nil return values.The type assertion
args.Get(0).(libvirt.Domain)will panic if the mock is configured to returnnilfor the domain. This is a common pattern with testify/mock, but could be made safer.This is acceptable for test code since improper mock setup would be a test bug, but for robustness you could add nil checks:
Optional safer pattern
func (m *MockLibvirtClient) DomainLookupByName(name string) (libvirt.Domain, error) { args := m.Called(name) dom, _ := args.Get(0).(libvirt.Domain) return dom, args.Error(1) }internal/repositories/libvirt/real_client.go (1)
3-14: Guard against nil libvirt connections.If
RealLibvirtClientis constructed with a nilconn(e.g., failed libvirt setup), any method call will panic. Consider adding a small constructor to validate input and return an error early.💡 Suggested constructor guard
import ( + "errors" "github.com/digitalocean/go-libvirt" ) // RealLibvirtClient implements LibvirtClient using the actual libvirt library. type RealLibvirtClient struct { conn *libvirt.Libvirt } +func NewRealLibvirtClient(conn *libvirt.Libvirt) (*RealLibvirtClient, error) { + if conn == nil { + return nil, errors.New("libvirt connection is nil") + } + return &RealLibvirtClient{conn: conn}, nil +} + func (r *RealLibvirtClient) Connect() error { return r.conn.Connect() }internal/core/services/lb_worker.go (2)
19-24: Propagate context through the dialer and avoid a hard-coded timeout.The health-check dial currently ignores cancellation because the dialer API has no
context.Context, and the timeout is a literal. Consider a context-aware dialer and a named timeout constant. As per coding guidelines, propagate context to blocking calls and avoid magic numbers.♻️ Proposed refactor
+const healthCheckDialTimeout = 2 * time.Second + // PortDialer defines an interface for dialing network connections. type PortDialer interface { - DialTimeout(network, address string, timeout time.Duration) (net.Conn, error) + DialContext(ctx context.Context, network, address string, timeout time.Duration) (net.Conn, error) } type realDialer struct{} -func (d *realDialer) DialTimeout(network, address string, timeout time.Duration) (net.Conn, error) { - return net.DialTimeout(network, address, timeout) +func (d *realDialer) DialContext(ctx context.Context, network, address string, timeout time.Duration) (net.Conn, error) { + nd := net.Dialer{Timeout: timeout} + return nd.DialContext(ctx, network, address) } @@ - if w.isPortOpen(hostPort) { + if w.isPortOpen(ctx, hostPort) { status = "healthy" } @@ -func (w *LBWorker) isPortOpen(port string) bool { - conn, err := w.dialer.DialTimeout("tcp", "localhost:"+port, 2*time.Second) +func (w *LBWorker) isPortOpen(ctx context.Context, port string) bool { + conn, err := w.dialer.DialContext(ctx, "tcp", "localhost:"+port, healthCheckDialTimeout)Also applies to: 26-35, 197-198, 222-224
37-44: Prefer a params struct for the LB worker constructor and allow dialer injection.This service already has 3 dependencies; a params struct keeps the signature stable and lets tests inject a dialer without touching internal fields. As per coding guidelines, use a params struct for services with 3+ dependencies.
♻️ Proposed refactor
-func NewLBWorker(lbRepo ports.LBRepository, instanceRepo ports.InstanceRepository, proxyAdapter ports.LBProxyAdapter) *LBWorker { - return &LBWorker{ - lbRepo: lbRepo, - instanceRepo: instanceRepo, - proxyAdapter: proxyAdapter, - dialer: &realDialer{}, - } -} +type LBWorkerParams struct { + LBRepo ports.LBRepository + InstanceRepo ports.InstanceRepository + ProxyAdapter ports.LBProxyAdapter + Dialer PortDialer +} + +func NewLBWorker(p LBWorkerParams) *LBWorker { + dialer := p.Dialer + if dialer == nil { + dialer = &realDialer{} + } + return &LBWorker{ + lbRepo: p.LBRepo, + instanceRepo: p.InstanceRepo, + proxyAdapter: p.ProxyAdapter, + dialer: dialer, + } +}internal/core/services/lb_worker_test.go (1)
118-191: Consider a table-driven layout for the lifecycle tests.The create/delete/active tests share a common shape; a table-driven setup would reduce duplication and ease adding more scenarios. As per coding guidelines, prefer table-driven tests in unit suites.
| func handleSecurityGroupsMock(w http.ResponseWriter, r *http.Request, method, path string) bool { | ||
| switch { | ||
| case method == http.MethodGet && path == "/security-groups": | ||
| resp := sdk.Response[[]sdk.SecurityGroup]{ | ||
| Data: []sdk.SecurityGroup{{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}}, | ||
| } | ||
| _ = json.NewEncoder(w).Encode(resp) | ||
| return true | ||
| case method == http.MethodPost && path == "/security-groups": | ||
| resp := sdk.Response[sdk.SecurityGroup]{ | ||
| Data: sdk.SecurityGroup{ID: "sg-2", VPCID: "vpc-2", Name: "web", Description: "web group", ARN: "arn:sg:2", CreatedAt: time.Now().UTC()}, | ||
| } | ||
| _ = json.NewEncoder(w).Encode(resp) | ||
| return true | ||
| case method == http.MethodGet && path == pathSG+testSGID: | ||
| resp := sdk.Response[sdk.SecurityGroup]{ | ||
| Data: sdk.SecurityGroup{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}, | ||
| } | ||
| _ = json.NewEncoder(w).Encode(resp) | ||
| return true | ||
| case method == http.MethodDelete && path == pathSG+testSGID: | ||
| return respondNoContent(w) | ||
| case method == http.MethodPost && strings.HasPrefix(path, pathSG) && strings.HasSuffix(path, pathRules): | ||
| resp := sdk.Response[sdk.SecurityRule]{ | ||
| Data: sdk.SecurityRule{ID: "rule-1", Direction: "ingress", Protocol: "tcp", PortMin: 80, PortMax: 80, CIDR: "0.0.0.0/0", Priority: 100}, | ||
| } | ||
| _ = json.NewEncoder(w).Encode(resp) | ||
| return true |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's see the entire cli_commands_test.go file to understand context and find all instances
wc -l cmd/cloud/cli_commands_test.goRepository: PoyrazK/thecloud
Length of output: 95
🏁 Script executed:
# Check if the file exists and get its structure
git ls-files cmd/cloud/cli_commands_test.goRepository: PoyrazK/thecloud
Length of output: 90
🏁 Script executed:
# Read the test file to understand the context around lines 65-92 and find all instances of the pattern
cat -n cmd/cloud/cli_commands_test.go | head -150Repository: PoyrazK/thecloud
Length of output: 6836
🏁 Script executed:
# Search for all instances of silent JSON encoding failures in the test file
rg '\_ = json\.NewEncoder' cmd/cloud/cli_commands_test.go -nRepository: PoyrazK/thecloud
Length of output: 147
🏁 Script executed:
# Search for all instances of silent JSON encoding failures in the test file (fixed regex)
rg '_.*json\.NewEncoder' cmd/cloud/cli_commands_test.go -nRepository: PoyrazK/thecloud
Length of output: 1399
🏁 Script executed:
# Let's read more of the file to see all mock handlers
sed -n '100,250p' cmd/cloud/cli_commands_test.go | cat -nRepository: PoyrazK/thecloud
Length of output: 7742
🏁 Script executed:
# Let's check if there's a respondNoContent function that might show the error handling pattern
rg 'func respondNoContent' cmd/cloud/cli_commands_test.go -A 5Repository: PoyrazK/thecloud
Length of output: 163
🏁 Script executed:
# Check if there are any other error handling patterns in the test file
rg 'http\.Error|WriteHeader.*Status' cmd/cloud/cli_commands_test.go | head -20Repository: PoyrazK/thecloud
Length of output: 234
🏁 Script executed:
# Count how many handler functions use the problematic pattern vs proper error handling
sed -n '65,1050p' cmd/cloud/cli_commands_test.go | grep -c 'func handle'Repository: PoyrazK/thecloud
Length of output: 61
🏁 Script executed:
# Let's see the respondNoContent function to understand the error handling pattern better
sed -n '340,360p' cmd/cloud/cli_commands_test.go | cat -nRepository: PoyrazK/thecloud
Length of output: 534
🏁 Script executed:
# Check if there's a helper function that might already handle JSON encoding errors
rg 'func.*json|func.*respond' cmd/cloud/cli_commands_test.go -B 2 -A 3Repository: PoyrazK/thecloud
Length of output: 166
Handle JSON encode errors across all mock handlers.
Discarding json.Encoder errors with _ masks malformed responses and lets tests pass with invalid payloads. This pattern appears 32 times across the entire test file (handleSecurityGroupsMock, handleVPCsMock, handleCachesMock, handleQueuesMock, handleNotifyMock, handleRBACMock, handleDatabasesMock, handleSecretsMock). Apply the fix systematically to all handlers:
🔧 Suggested fix pattern
- _ = json.NewEncoder(w).Encode(resp)
- return true
+ if err := json.NewEncoder(w).Encode(resp); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+ return true📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| func handleSecurityGroupsMock(w http.ResponseWriter, r *http.Request, method, path string) bool { | |
| switch { | |
| case method == http.MethodGet && path == "/security-groups": | |
| resp := sdk.Response[[]sdk.SecurityGroup]{ | |
| Data: []sdk.SecurityGroup{{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}}, | |
| } | |
| _ = json.NewEncoder(w).Encode(resp) | |
| return true | |
| case method == http.MethodPost && path == "/security-groups": | |
| resp := sdk.Response[sdk.SecurityGroup]{ | |
| Data: sdk.SecurityGroup{ID: "sg-2", VPCID: "vpc-2", Name: "web", Description: "web group", ARN: "arn:sg:2", CreatedAt: time.Now().UTC()}, | |
| } | |
| _ = json.NewEncoder(w).Encode(resp) | |
| return true | |
| case method == http.MethodGet && path == pathSG+testSGID: | |
| resp := sdk.Response[sdk.SecurityGroup]{ | |
| Data: sdk.SecurityGroup{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}, | |
| } | |
| _ = json.NewEncoder(w).Encode(resp) | |
| return true | |
| case method == http.MethodDelete && path == pathSG+testSGID: | |
| return respondNoContent(w) | |
| case method == http.MethodPost && strings.HasPrefix(path, pathSG) && strings.HasSuffix(path, pathRules): | |
| resp := sdk.Response[sdk.SecurityRule]{ | |
| Data: sdk.SecurityRule{ID: "rule-1", Direction: "ingress", Protocol: "tcp", PortMin: 80, PortMax: 80, CIDR: "0.0.0.0/0", Priority: 100}, | |
| } | |
| _ = json.NewEncoder(w).Encode(resp) | |
| return true | |
| func handleSecurityGroupsMock(w http.ResponseWriter, r *http.Request, method, path string) bool { | |
| switch { | |
| case method == http.MethodGet && path == "/security-groups": | |
| resp := sdk.Response[[]sdk.SecurityGroup]{ | |
| Data: []sdk.SecurityGroup{{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}}, | |
| } | |
| if err := json.NewEncoder(w).Encode(resp); err != nil { | |
| http.Error(w, err.Error(), http.StatusInternalServerError) | |
| } | |
| return true | |
| case method == http.MethodPost && path == "/security-groups": | |
| resp := sdk.Response[sdk.SecurityGroup]{ | |
| Data: sdk.SecurityGroup{ID: "sg-2", VPCID: "vpc-2", Name: "web", Description: "web group", ARN: "arn:sg:2", CreatedAt: time.Now().UTC()}, | |
| } | |
| if err := json.NewEncoder(w).Encode(resp); err != nil { | |
| http.Error(w, err.Error(), http.StatusInternalServerError) | |
| } | |
| return true | |
| case method == http.MethodGet && path == pathSG+testSGID: | |
| resp := sdk.Response[sdk.SecurityGroup]{ | |
| Data: sdk.SecurityGroup{ID: testSGID, VPCID: testVPCID, Name: "default", Description: "default group", ARN: "arn:sg:1", CreatedAt: time.Now().UTC()}, | |
| } | |
| if err := json.NewEncoder(w).Encode(resp); err != nil { | |
| http.Error(w, err.Error(), http.StatusInternalServerError) | |
| } | |
| return true | |
| case method == http.MethodDelete && path == pathSG+testSGID: | |
| return respondNoContent(w) | |
| case method == http.MethodPost && strings.HasPrefix(path, pathSG) && strings.HasSuffix(path, pathRules): | |
| resp := sdk.Response[sdk.SecurityRule]{ | |
| Data: sdk.SecurityRule{ID: "rule-1", Direction: "ingress", Protocol: "tcp", PortMin: 80, PortMax: 80, CIDR: "0.0.0.0/0", Priority: 100}, | |
| } | |
| if err := json.NewEncoder(w).Encode(resp); err != nil { | |
| http.Error(w, err.Error(), http.StatusInternalServerError) | |
| } | |
| return true |
🤖 Prompt for AI Agents
In `@cmd/cloud/cli_commands_test.go` around lines 65 - 92, The mock handlers
(e.g., handleSecurityGroupsMock and the other handlers mentioned:
handleVPCsMock, handleCachesMock, handleQueuesMock, handleNotifyMock,
handleRBACMock, handleDatabasesMock, handleSecretsMock) currently discard
json.Encoder errors with `_ = json.NewEncoder(w).Encode(...)`; change each to
capture the error and handle it (e.g., if err :=
json.NewEncoder(w).Encode(resp); err != nil { http.Error(w, "failed to encode
response", http.StatusInternalServerError); return true } or similar) so
encoding failures return a non-2xx response and do not mask malformed payloads;
apply this pattern consistently to every place Encoding is used in the test
handlers.
| _ = sgListCmd.Flags().Set(flagVPCID, "vpc-1") | ||
| outputJSON = true | ||
| t.Cleanup(func() { | ||
| _ = sgListCmd.Flags().Set(flagVPCID, "") | ||
| outputJSON = false | ||
| }) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
wc -l cmd/cloud/cli_commands_test.goRepository: PoyrazK/thecloud
Length of output: 95
🏁 Script executed:
cat -n cmd/cloud/cli_commands_test.go | sed -n '365,375p'Repository: PoyrazK/thecloud
Length of output: 374
🏁 Script executed:
cat -n cmd/cloud/cli_commands_test.go | sed -n '697,715p'Repository: PoyrazK/thecloud
Length of output: 833
🏁 Script executed:
cat -n cmd/cloud/cli_commands_test.go | sed -n '861,872p'Repository: PoyrazK/thecloud
Length of output: 456
🏁 Script executed:
cat -n cmd/cloud/cli_commands_test.go | sed -n '991,1002p'Repository: PoyrazK/thecloud
Length of output: 532
Preserve command flag state and surface flag-set errors.
Tests mutate global Cobra flags and ignore errors using silent failures (blank identifier), while resetting to hard-coded values instead of preserving originals. This violates the coding guideline against blank identifier assignments and can leak state between tests (lines 368, 700, 864, 994). Capture original values before mutation and restore them in cleanup, failing fast on Set errors.
🔧 Suggested pattern (example from Line 368)
- _ = sgListCmd.Flags().Set(flagVPCID, "vpc-1")
+ oldVPCID, err := sgListCmd.Flags().GetString(flagVPCID)
+ if err != nil {
+ t.Fatalf("get %s: %v", flagVPCID, err)
+ }
+ if err := sgListCmd.Flags().Set(flagVPCID, "vpc-1"); err != nil {
+ t.Fatalf("set %s: %v", flagVPCID, err)
+ }
outputJSON = true
t.Cleanup(func() {
- _ = sgListCmd.Flags().Set(flagVPCID, "")
+ _ = sgListCmd.Flags().Set(flagVPCID, oldVPCID)
outputJSON = false
})Also applies to: 700–711, 864–869, 994–999
🤖 Prompt for AI Agents
In `@cmd/cloud/cli_commands_test.go` around lines 368 - 373, Capture and restore
the original flag and global state instead of discarding Set errors and
hard-coding resets: before calling sgListCmd.Flags().Set(flagVPCID, "vpc-1")
read the current value (e.g., val, err := sgListCmd.Flags().GetString(flagVPCID)
or use Lookup) and store it and the current outputJSON value; on cleanup, call
Set with the original value and reset outputJSON to the saved original, and
check/set errors (fail the test immediately with t.Fatalf or t.Fatal if Set
returns an error) rather than assigning to the blank identifier; apply the same
pattern for every test that mutates sgListCmd.Flags().Set(flagVPCID) and
outputJSON in this file (including the other affected blocks).
| _ = json.NewEncoder(w).Encode(payload) | ||
| })) | ||
| defer server.Close() | ||
|
|
||
| apiURL = server.URL | ||
| apiKey = "test-key" | ||
| outputJSON = true | ||
| defer func() { outputJSON = false }() |
There was a problem hiding this comment.
Handle the encoding error and clean up all modified globals.
Line 39 silently discards the encoding error. Also, cleanup only restores outputJSON but leaves apiURL and apiKey modified, which could affect subsequent tests.
Proposed fix
- _ = json.NewEncoder(w).Encode(payload)
+ if err := json.NewEncoder(w).Encode(payload); err != nil {
+ t.Errorf("failed to encode response: %v", err)
+ }
}))
defer server.Close()
+ oldAPIURL := apiURL
+ oldAPIKey := apiKey
+ oldOutputJSON := outputJSON
+ defer func() {
+ apiURL = oldAPIURL
+ apiKey = oldAPIKey
+ outputJSON = oldOutputJSON
+ }()
+
apiURL = server.URL
apiKey = "test-key"
outputJSON = true
- defer func() { outputJSON = false }()🤖 Prompt for AI Agents
In `@cmd/cloud/instance_test.go` around lines 39 - 46, The test currently ignores
the error from json.NewEncoder(w).Encode(payload) and only restores outputJSON,
leaving apiURL and apiKey mutated for other tests; change the handler to capture
the encoder error (err := json.NewEncoder(w).Encode(payload)) and fail the test
if non-nil (e.g., t.Fatalf("encode payload: %v", err)), and at the top of the
test capture prior values (oldAPIURL, oldAPIKey, oldOutputJSON := apiURL,
apiKey, outputJSON) then defer a restore of those globals (apiURL, apiKey,
outputJSON = oldAPIURL, oldAPIKey, oldOutputJSON) so the test cleans up all
modified globals while keeping the existing server.Close defer.
| _ = json.NewEncoder(w).Encode(payload) | ||
| })) | ||
| defer server.Close() | ||
|
|
||
| apiURL = server.URL | ||
| apiKey = "test-key" | ||
|
|
||
| _ = launchCmd.Flags().Set("name", "app") | ||
| _ = launchCmd.Flags().Set("image", "alpine") | ||
| _ = launchCmd.Flags().Set("port", "80:8080") | ||
| _ = launchCmd.Flags().Set("vpc", "vpc-1") | ||
| _ = launchCmd.Flags().Set("subnet", "subnet-1") | ||
| _ = launchCmd.Flags().Set("volume", "vol-1:/data") | ||
|
|
||
| out := captureStdout(t, func() { | ||
| launchCmd.Run(launchCmd, nil) | ||
| }) | ||
| if !strings.Contains(out, "Instance launched successfully") { | ||
| t.Fatalf("expected success message, got: %s", out) | ||
| } |
There was a problem hiding this comment.
Avoid silent failures and reset state after test.
Multiple blank identifier assignments suppress errors that could indicate test setup failures. Additionally, flag values and global variables (apiURL, apiKey) are not restored, potentially polluting other tests.
Proposed fix
- _ = json.NewEncoder(w).Encode(payload)
+ if err := json.NewEncoder(w).Encode(payload); err != nil {
+ t.Errorf("failed to encode response: %v", err)
+ }
}))
defer server.Close()
+ oldAPIURL := apiURL
+ oldAPIKey := apiKey
+ defer func() {
+ apiURL = oldAPIURL
+ apiKey = oldAPIKey
+ // Reset flags to defaults
+ _ = launchCmd.Flags().Set("name", "")
+ _ = launchCmd.Flags().Set("image", "")
+ _ = launchCmd.Flags().Set("port", "")
+ _ = launchCmd.Flags().Set("vpc", "")
+ _ = launchCmd.Flags().Set("subnet", "")
+ _ = launchCmd.Flags().Set("volume", "")
+ }()
+
apiURL = server.URL
apiKey = "test-key"
- _ = launchCmd.Flags().Set("name", "app")
- _ = launchCmd.Flags().Set("image", "alpine")
- _ = launchCmd.Flags().Set("port", "80:8080")
- _ = launchCmd.Flags().Set("vpc", "vpc-1")
- _ = launchCmd.Flags().Set("subnet", "subnet-1")
- _ = launchCmd.Flags().Set("volume", "vol-1:/data")
+ if err := launchCmd.Flags().Set("name", "app"); err != nil {
+ t.Fatalf("failed to set name flag: %v", err)
+ }
+ if err := launchCmd.Flags().Set("image", "alpine"); err != nil {
+ t.Fatalf("failed to set image flag: %v", err)
+ }
+ if err := launchCmd.Flags().Set("port", "80:8080"); err != nil {
+ t.Fatalf("failed to set port flag: %v", err)
+ }
+ if err := launchCmd.Flags().Set("vpc", "vpc-1"); err != nil {
+ t.Fatalf("failed to set vpc flag: %v", err)
+ }
+ if err := launchCmd.Flags().Set("subnet", "subnet-1"); err != nil {
+ t.Fatalf("failed to set subnet flag: %v", err)
+ }
+ if err := launchCmd.Flags().Set("volume", "vol-1:/data"); err != nil {
+ t.Fatalf("failed to set volume flag: %v", err)
+ }🤖 Prompt for AI Agents
In `@cmd/cloud/instance_test.go` around lines 79 - 98, The test currently discards
errors and mutates globals without restoring them; update the test to check and
fail on any returned errors from json.NewEncoder(w).Encode(...) and from
launchCmd.Flags().Set(...) calls (use t.Fatalf on failure), and capture and
restore global state (apiURL, apiKey) and flag values after the test (save
originals before setting and defer their restoration), ensuring the test server
is closed via defer server.Close() (already present) and avoid using the blank
identifier for error returns; reference launchCmd, launchCmd.Flags().Set,
json.NewEncoder(w).Encode, apiURL, and apiKey when locating the changes.
| func TestRelPathFallback(t *testing.T) { | ||
| path := relPath("", "/tmp/file.go") | ||
| if path == "" { | ||
| t.Fatal("expected non-empty path") | ||
| } |
There was a problem hiding this comment.
Avoid hardcoded “/tmp” to keep tests portable.
/tmp is Unix‑specific. Use t.TempDir() and filepath.Join so the test runs on Windows too.
✅ Suggested fix
-func TestRelPathFallback(t *testing.T) {
- path := relPath("", "/tmp/file.go")
+func TestRelPathFallback(t *testing.T) {
+ root := t.TempDir()
+ path := relPath("", filepath.Join(root, "file.go"))
if path == "" {
t.Fatal("expected non-empty path")
}
}🤖 Prompt for AI Agents
In `@cmd/doccheck/main_test.go` around lines 92 - 96, The test TestRelPathFallback
uses a hardcoded Unix path "/tmp/file.go"; update it to create a temp directory
via t.TempDir(), then build the file path with filepath.Join(tempDir, "file.go")
and pass that to relPath so the test is portable on Windows; ensure you import
"path/filepath" if not already and keep the same assertion behavior.
| type LibvirtClient interface { | ||
| Connect() error | ||
|
|
||
| // Domain | ||
| DomainLookupByName(name string) (libvirt.Domain, error) | ||
| DomainDefineXML(xml string) (libvirt.Domain, error) | ||
| DomainCreate(dom libvirt.Domain) error | ||
| DomainDestroy(dom libvirt.Domain) error | ||
| DomainUndefine(dom libvirt.Domain) error | ||
| DomainGetState(dom libvirt.Domain, flags uint32) (int32, int32, error) | ||
| DomainGetXMLDesc(dom libvirt.Domain, flags libvirt.DomainXMLFlags) (string, error) | ||
| DomainAttachDevice(dom libvirt.Domain, xml string) error | ||
| DomainDetachDevice(dom libvirt.Domain, xml string) error | ||
| DomainMemoryStats(dom libvirt.Domain, maxStats uint32, flags uint32) ([]libvirt.DomainMemoryStat, error) | ||
|
|
||
| // Network | ||
| NetworkLookupByName(name string) (libvirt.Network, error) | ||
| NetworkDefineXML(xml string) (libvirt.Network, error) | ||
| NetworkCreate(net libvirt.Network) error | ||
| NetworkDestroy(net libvirt.Network) error | ||
| NetworkUndefine(net libvirt.Network) error | ||
| NetworkGetDhcpLeases(net libvirt.Network, mac libvirt.OptString, needResults uint32, flags uint32) ([]libvirt.NetworkDhcpLease, uint32, error) | ||
|
|
||
| // Storage | ||
| StoragePoolLookupByName(name string) (libvirt.StoragePool, error) | ||
| StoragePoolRefresh(pool libvirt.StoragePool, flags uint32) error | ||
| StorageVolLookupByName(pool libvirt.StoragePool, name string) (libvirt.StorageVol, error) | ||
| StorageVolCreateXML(pool libvirt.StoragePool, xml string, flags uint32) (libvirt.StorageVol, error) | ||
| StorageVolDelete(vol libvirt.StorageVol, flags uint32) error | ||
| StorageVolGetPath(vol libvirt.StorageVol) (string, error) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's see the full LibvirtClient interface definition
cat -n internal/repositories/libvirt/libvirt_client.go | head -60Repository: PoyrazK/thecloud
Length of output: 2064
🏁 Script executed:
# Check if there are more methods in the interface beyond line 38
cat -n internal/repositories/libvirt/libvirt_client.goRepository: PoyrazK/thecloud
Length of output: 2064
🏁 Script executed:
# Search for the actual implementation of LibvirtClient
fd -t f "\.go$" -path "*/libvirt/*" | head -20Repository: PoyrazK/thecloud
Length of output: 228
🏁 Script executed:
# Check how LibvirtClient is used in the codebase
rg "LibvirtClient" --type go -B 2 -A 2Repository: PoyrazK/thecloud
Length of output: 22543
Add context-aware methods (and close semantics) to the LibvirtClient interface.
All libvirt operations are blocking I/O calls that should accept context.Context as the first parameter to enable cancellation and deadline propagation, per coding guidelines. Additionally, the interface lacks a Close or Disconnect method, preventing proper resource cleanup. The underlying RealLibvirtClient holds an active libvirt connection that has no way to be released through the interface.
🤖 Prompt for AI Agents
In `@internal/repositories/libvirt/libvirt_client.go` around lines 9 - 38, The
LibvirtClient interface currently exposes blocking libvirt calls without context
or close semantics; update the interface (LibvirtClient) so every libvirt
operation method (e.g., DomainLookupByName, DomainDefineXML, DomainCreate,
DomainDestroy, DomainUndefine, DomainGetState, DomainGetXMLDesc,
DomainAttachDevice, DomainDetachDevice, DomainMemoryStats, NetworkLookupByName,
NetworkDefineXML, NetworkCreate, NetworkDestroy, NetworkUndefine,
NetworkGetDhcpLeases, StoragePoolLookupByName, StoragePoolRefresh,
StorageVolLookupByName, StorageVolCreateXML, StorageVolDelete, StorageVolGetPath
and Connect) accepts context.Context as the first parameter to enable
cancellation, and add a Close (or Disconnect) method to the interface to allow
releasing the underlying libvirt connection; then update RealLibvirtClient (and
all callers/tests) to match the new signatures and implement Close/Disconnect to
properly close the libvirt connection and handle context cancellation for each
operation.
| func (r *RealLibvirtClient) DomainLookupByName(name string) (libvirt.Domain, error) { | ||
| return r.conn.DomainLookupByName(name) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainDefineXML(xml string) (libvirt.Domain, error) { | ||
| return r.conn.DomainDefineXML(xml) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainCreate(dom libvirt.Domain) error { | ||
| return r.conn.DomainCreate(dom) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainDestroy(dom libvirt.Domain) error { | ||
| return r.conn.DomainDestroy(dom) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainUndefine(dom libvirt.Domain) error { | ||
| return r.conn.DomainUndefine(dom) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainGetState(dom libvirt.Domain, flags uint32) (int32, int32, error) { | ||
| return r.conn.DomainGetState(dom, flags) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainGetXMLDesc(dom libvirt.Domain, flags libvirt.DomainXMLFlags) (string, error) { | ||
| return r.conn.DomainGetXMLDesc(dom, flags) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainAttachDevice(dom libvirt.Domain, xml string) error { | ||
| return r.conn.DomainAttachDevice(dom, xml) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainDetachDevice(dom libvirt.Domain, xml string) error { | ||
| return r.conn.DomainDetachDevice(dom, xml) | ||
| } | ||
|
|
||
| func (r *RealLibvirtClient) DomainMemoryStats(dom libvirt.Domain, maxStats uint32, flags uint32) ([]libvirt.DomainMemoryStat, error) { | ||
| return r.conn.DomainMemoryStats(dom, maxStats, flags) | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file structure around the libvirt client
find . -type f -name "*.go" | grep -i libvirt | head -20Repository: PoyrazK/thecloud
Length of output: 575
🏁 Script executed:
# Let's check the LibvirtClient interface definition
find . -type f -name "*.go" | xargs grep -l "type LibvirtClient" 2>/dev/null | head -5Repository: PoyrazK/thecloud
Length of output: 109
🏁 Script executed:
# Check the go.mod to see the libvirt library version
cat go.mod | grep libvirtRepository: PoyrazK/thecloud
Length of output: 130
🏁 Script executed:
# Read the LibvirtClient interface definition
cat -n ./internal/repositories/libvirt/libvirt_client.goRepository: PoyrazK/thecloud
Length of output: 2064
🏁 Script executed:
# Read the real_client.go file to see the complete context
cat -n ./internal/repositories/libvirt/real_client.goRepository: PoyrazK/thecloud
Length of output: 4479
🏁 Script executed:
# Check how these LibvirtClient methods are used in adapter
head -150 ./internal/repositories/libvirt/adapter.go | cat -nRepository: PoyrazK/thecloud
Length of output: 5086
🏁 Script executed:
# Look for context usage in adapter
grep -n "context\|Context" ./internal/repositories/libvirt/adapter.go | head -20Repository: PoyrazK/thecloud
Length of output: 1882
🏁 Script executed:
# Check if there are any existing patterns with blocking calls and context in the codebase
find . -name "*.go" -type f | xargs grep -l "ctx context.Context" | head -10Repository: PoyrazK/thecloud
Length of output: 413
🏁 Script executed:
# Check if go-libvirt module is vendored or check what context methods might exist
find . -name "*.go" -type f | xargs grep -h "libvirt\." | grep -i "context\|ctx" | sort -u | head -20Repository: PoyrazK/thecloud
Length of output: 42
🏁 Script executed:
# Search for any context-aware methods in usage of libvirt
grep -r "DomainCreate\|DomainLookup\|DomainDefine" . --include="*.go" | grep -i "ctx\|context" | head -10Repository: PoyrazK/thecloud
Length of output: 42
🌐 Web query:
digitalocean go-libvirt context support blocking calls
💡 Result:
Short answer: No — digitalocean/go-libvirt’s public API does not take context.Context and its calls are synchronous/blocking; it doesn’t provide built‑in context cancellation for RPC calls. Use your own goroutines/timeouts, deadlines on the underlying connection, or wrap calls yourself to support cancellation. References: project README and package docs. [1][2]
Sources
- digitalocean/go-libvirt README (project description, no context-based API). [1]
- go-libvirt package docs on pkg.go.dev (API surface — no context.Context parameters). [2]
Thread context.Context through the LibvirtClient interface and implementations.
The interface and RealLibvirtClient methods lack context.Context as the first parameter on all blocking operations (domain, network, storage calls). Although the underlying digitalocean/go-libvirt library does not natively support context, the wrapper interface should still accept and propagate context to enable timeout handling, cancellation support, and better composability at the adapter level.
Update the LibvirtClient interface and RealLibvirtClient methods to accept ctx context.Context as the first parameter. Implement timeout/cancellation logic in LibvirtAdapter before calling these methods (e.g., using context deadlines on socket operations or wrapping calls with goroutines respecting context cancellation). This ensures compliance with the coding guideline that context must be the first parameter in all functions.
🤖 Prompt for AI Agents
In `@internal/repositories/libvirt/real_client.go` around lines 18 - 56, The
Libvirt client methods (e.g., DomainLookupByName, DomainDefineXML, DomainCreate,
DomainDestroy, DomainUndefine, DomainGetState, DomainGetXMLDesc,
DomainAttachDevice, DomainDetachDevice, DomainMemoryStats) must accept
context.Context as the first parameter to support cancellation/timeouts: update
the LibvirtClient interface signatures and the RealLibvirtClient receiver
methods to add ctx context.Context as the first param and propagate it where
possible; then update the LibvirtAdapter caller sites to enforce
timeouts/cancellation (for example, wrap calls to the RealLibvirtClient in a
goroutine and select on ctx.Done() or apply socket-level deadlines) so
operations respect context cancellation before delegating to the go-libvirt
calls. Ensure all call sites and tests are updated to pass the context through.
|
There was a problem hiding this comment.
Pull request overview
Copilot reviewed 39 out of 39 changed files in this pull request and generated 4 comments.
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| ID: uuid.New(), | ||
| Port: 80, | ||
| Algorithm: "round-robin", | ||
| Algorithm: algoRoundRobin, |
There was a problem hiding this comment.
The constant algoRoundRobin is defined locally in this test file but the actual production code uses the string literal "round-robin" directly. Consider defining this constant in the domain package or the adapter code itself for consistency between tests and production.
| ID: uuid.New(), | ||
| Port: 80, | ||
| Algorithm: "round-robin", | ||
| Algorithm: algoRoundRobin, |
There was a problem hiding this comment.
The constant algoRoundRobin is defined locally in this test file but the actual production code uses the string literal "round-robin" directly. Consider defining this constant in the domain package or the adapter code itself for consistency between tests and production.
| // Since we are mocking, we cannot easily mock net.Dial from within isPortOpen since it's hardcoded. | ||
| // But we can test that it TRIES to update if health status changes. | ||
| // However, isPortOpen will likely fail (return false) since 8080 is not open locally. | ||
| // So status stays "unhealthy". UpdateTargetHealth won't be called. | ||
|
|
||
| // If we want to test "healthy" transition we need DI for dialer or logic separation. | ||
| // For now, let's test that it runs without error. |
There was a problem hiding this comment.
The test doesn't verify the health check behavior properly because the dialer isn't mocked. Consider adding a test that injects a mock PortDialer to verify both healthy and unhealthy status transitions are handled correctly.
| assert.Contains(t, xml, "<bridge name='test-br' stp='on' delay='0'/>") | ||
| assert.Contains(t, xml, fmt.Sprintf("<ip address='%s'", testutil.TestIPHost)) | ||
| assert.Contains(t, xml, fmt.Sprintf("<range start='%s' end='%s'/>", testutil.TestLibvirtDHCPStart, testutil.TestLibvirtDHCPEnd)) | ||
| assert.True(t, strings.Contains(xml, "<network>")) |
There was a problem hiding this comment.
Use assert.Contains(t, xml, "<network>") instead of assert.True(t, strings.Contains(...)) for better error messages when the assertion fails.
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@internal/repositories/libvirt/adapter_lifecycle_test.go`:
- Around line 195-213: In TestGetInstanceStatsSuccess replace the silent close
("_ = rc.Close()") with an assertion (assert.NoError(t, rc.Close())) and remove
the magic numeric Tag by introducing/using a named constant (e.g., MemoryStatRSS
= 6) for the DomainMemoryStat Tag value; update the test to use MemoryStatRSS in
the stats slice and ensure the constant is declared at package scope so the
DomainMemoryStat{Tag: MemoryStatRSS, Val: 1048576} is used and the close error
is properly asserted in the TestGetInstanceStatsSuccess function.
🧹 Nitpick comments (1)
internal/repositories/libvirt/adapter_lifecycle_test.go (1)
30-56: Consolidate StopInstance cases into a table-driven test.The success and not-found cases are structurally identical except for setup/expectations. A table-driven test will reduce duplication and align with Go test conventions. As per coding guidelines, please prefer table-driven tests in *_test.go files.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/repositories/libvirt/adapter_lifecycle_test.go
🧰 Additional context used
📓 Path-based instructions (3)
internal/repositories/**/*.go
📄 CodeRabbit inference engine (GEMINI.md)
Repositories (
internal/repositories/) must implement Ports interfaces
Files:
internal/repositories/libvirt/adapter_lifecycle_test.go
**/*.go
📄 CodeRabbit inference engine (GEMINI.md)
**/*.go: Do not use global variables (e.g.,var DB *sql.DB)
Do not panic in production code - return errors instead
Do not use silent failures - avoid blank identifier assignment like_ = someFunc()
Do not use magic numbers - use named constants instead
Do not skipcontext.Contextas the first parameter in functions
Do not create circular dependencies between packages
Use constructor injection for dependencies instead of global initialization
Propagatecontext.Contextto all blocking calls
Use PascalCase for exported types and structs (e.g.,InstanceService,LaunchRequest)
Use PascalCase for exported interfaces (e.g.,InstanceRepository,ComputeBackend)
Use PascalCase for exported functions and camelCase for unexported functions
Use SCREAMING_SNAKE_CASE for status constants and PascalCase for limit constants (e.g.,StatusRunning,MaxPortsPerInstance)
Use camelCase for variable names (e.g.,instanceRepo,vpcID)
Includejsontags on struct fields withomitemptyfor optional fields
Files:
internal/repositories/libvirt/adapter_lifecycle_test.go
**/*_test.go
📄 CodeRabbit inference engine (GEMINI.md)
**/*_test.go: Do not test with real external dependencies in unit tests - use mocks instead
Use table-driven tests in test files
Usetestify/mockfor creating mock objects in tests
Files:
internal/repositories/libvirt/adapter_lifecycle_test.go
🧠 Learnings (1)
📚 Learning: 2026-01-14T11:33:57.684Z
Learnt from: CR
Repo: PoyrazK/thecloud PR: 0
File: GEMINI.md:0-0
Timestamp: 2026-01-14T11:33:57.684Z
Learning: Applies to **/*_test.go : Do not test with real external dependencies in unit tests - use mocks instead
Applied to files:
internal/repositories/libvirt/adapter_lifecycle_test.go
🧬 Code graph analysis (1)
internal/repositories/libvirt/adapter_lifecycle_test.go (3)
internal/repositories/libvirt/mock_client_test.go (1)
MockLibvirtClient(8-10)internal/repositories/libvirt/adapter.go (1)
LibvirtAdapter(40-53)internal/errors/errors.go (1)
New(61-63)
🔇 Additional comments (7)
internal/repositories/libvirt/adapter_lifecycle_test.go (7)
21-27: Clean test adapter setup.Isolated logger and short wait interval keep tests fast and deterministic.
58-85: Thorough delete lifecycle coverage.The mocked call sequence exercises lookup, stop, undefine, and storage cleanup in a clear success path.
87-111: Solid GetInstanceIP success-path test.Mocks cover XML parsing and DHCP lease resolution cleanly.
113-142: waitInitialIP test is well scoped.The mocks cover MAC extraction, network lookup, and lease retrieval accurately.
144-159: AttachVolume test looks good.Expectations on device attach are correctly asserted.
161-176: DetachVolume test looks good.Domain lookup plus detach expectations are verified cleanly.
178-193: Console URL parsing test is clear.The XML fixture and expected URL make the intent obvious.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
| func TestGetInstanceStatsSuccess(t *testing.T) { | ||
| m := new(MockLibvirtClient) | ||
| a := newTestAdapter(m) | ||
| ctx := context.Background() | ||
|
|
||
| dom := libvirt.Domain{Name: testInstanceName, ID: 1, UUID: [16]byte{1}} | ||
| stats := []libvirt.DomainMemoryStat{ | ||
| {Tag: 6, Val: 1048576}, // rss | ||
| } | ||
|
|
||
| m.On("DomainLookupByName", testInstanceName).Return(dom, nil) | ||
| m.On("DomainMemoryStats", dom, uint32(10), uint32(0)).Return(stats, nil) | ||
|
|
||
| rc, err := a.GetInstanceStats(ctx, testInstanceName) | ||
| assert.NoError(t, err) | ||
| assert.NotNil(t, rc) | ||
| _ = rc.Close() | ||
| m.AssertExpectations(t) | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the test file to understand the context
cd $(git rev-parse --show-toplevel) 2>/dev/null || pwd
fd adapter_lifecycle_test.goRepository: PoyrazK/thecloud
Length of output: 115
🏁 Script executed:
# Let's examine the full test file to understand the context
cat -n internal/repositories/libvirt/adapter_lifecycle_test.go | head -250Repository: PoyrazK/thecloud
Length of output: 7881
🏁 Script executed:
# Let's check the libvirt package to understand DomainMemoryStat and its tag constants
fd -e go . internal/repositories/libvirt/ | head -20Repository: PoyrazK/thecloud
Length of output: 529
🏁 Script executed:
# Search for how Close() errors are handled elsewhere in the test files
rg "\.Close\(\)" --type go internal/repositories/libvirt/ -A 2 -B 2Repository: PoyrazK/thecloud
Length of output: 428
🏁 Script executed:
# Search for memory stat tag usage to see if there are constants defined
rg "Tag.*6|memory.*stat|DomainMemoryStat" --type go -A 2 -B 2Repository: PoyrazK/thecloud
Length of output: 8881
🏁 Script executed:
# Check the actual implementation of GetInstanceStats to see what it returns
sed -n '1,500p' internal/repositories/libvirt/adapter.go | grep -A 20 "func.*GetInstanceStats"Repository: PoyrazK/thecloud
Length of output: 690
🏁 Script executed:
# Check if the go-libvirt package defines constants for DomainMemoryStat tags
rg "DomainMemoryStat|MemoryStat" --type go -B 5 -A 10 | head -100Repository: PoyrazK/thecloud
Length of output: 7735
🏁 Script executed:
# Check the imported libvirt package - search for tag constants or enums
rg "const.*[0-9]|iota" internal/repositories/libvirt/ --type go | grep -i "stat\|tag\|memory"Repository: PoyrazK/thecloud
Length of output: 42
🏁 Script executed:
# Check if io.NopCloser's Close() can return errors
python3 << 'EOF'
# io.NopCloser is a standard library wrapper that makes Close() a no-op
# It implements io.ReadCloser
# The Close() method of NopCloser always returns nil
# However, even if it can't error in this specific case, the coding guideline
# says not to use _ = someFunc() for silent failures
# Check what the actual behavior should be
print("io.NopCloser Close() always returns nil (no error)")
print("However, coding guidelines state: 'Do not use silent failures - avoid blank identifier assignment'")
print("Even though it won't error, the pattern _ = rc.Close() violates the guideline")
EOFRepository: PoyrazK/thecloud
Length of output: 146
🏁 Script executed:
# Search for how other tests handle Close() errors
rg "Close\(\)" internal/ --type go -B 2 -A 2 | grep -v "Binary\|node_modules" | head -60Repository: PoyrazK/thecloud
Length of output: 3612
Handle Close() error and replace magic number with a constant.
The test violates two coding guidelines:
-
_ = rc.Close()uses silent failure pattern - should useassert.NoError(t, rc.Close())per the "avoid blank identifier assignment" guideline. -
Tag: 6is a magic number - should be defined as a named constant (e.g.,MemoryStatRSS = 6).
Suggested fix
+const MemoryStatRSS = 6
func TestGetInstanceStatsSuccess(t *testing.T) {
m := new(MockLibvirtClient)
a := newTestAdapter(m)
ctx := context.Background()
dom := libvirt.Domain{Name: testInstanceName, ID: 1, UUID: [16]byte{1}}
stats := []libvirt.DomainMemoryStat{
- {Tag: 6, Val: 1048576}, // rss
+ {Tag: MemoryStatRSS, Val: 1048576}, // rss
}
m.On("DomainLookupByName", testInstanceName).Return(dom, nil)
m.On("DomainMemoryStats", dom, uint32(10), uint32(0)).Return(stats, nil)
rc, err := a.GetInstanceStats(ctx, testInstanceName)
assert.NoError(t, err)
assert.NotNil(t, rc)
- _ = rc.Close()
+ assert.NoError(t, rc.Close())
m.AssertExpectations(t)
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| func TestGetInstanceStatsSuccess(t *testing.T) { | |
| m := new(MockLibvirtClient) | |
| a := newTestAdapter(m) | |
| ctx := context.Background() | |
| dom := libvirt.Domain{Name: testInstanceName, ID: 1, UUID: [16]byte{1}} | |
| stats := []libvirt.DomainMemoryStat{ | |
| {Tag: 6, Val: 1048576}, // rss | |
| } | |
| m.On("DomainLookupByName", testInstanceName).Return(dom, nil) | |
| m.On("DomainMemoryStats", dom, uint32(10), uint32(0)).Return(stats, nil) | |
| rc, err := a.GetInstanceStats(ctx, testInstanceName) | |
| assert.NoError(t, err) | |
| assert.NotNil(t, rc) | |
| _ = rc.Close() | |
| m.AssertExpectations(t) | |
| } | |
| const MemoryStatRSS = 6 | |
| func TestGetInstanceStatsSuccess(t *testing.T) { | |
| m := new(MockLibvirtClient) | |
| a := newTestAdapter(m) | |
| ctx := context.Background() | |
| dom := libvirt.Domain{Name: testInstanceName, ID: 1, UUID: [16]byte{1}} | |
| stats := []libvirt.DomainMemoryStat{ | |
| {Tag: MemoryStatRSS, Val: 1048576}, // rss | |
| } | |
| m.On("DomainLookupByName", testInstanceName).Return(dom, nil) | |
| m.On("DomainMemoryStats", dom, uint32(10), uint32(0)).Return(stats, nil) | |
| rc, err := a.GetInstanceStats(ctx, testInstanceName) | |
| assert.NoError(t, err) | |
| assert.NotNil(t, rc) | |
| assert.NoError(t, rc.Close()) | |
| m.AssertExpectations(t) | |
| } |
🤖 Prompt for AI Agents
In `@internal/repositories/libvirt/adapter_lifecycle_test.go` around lines 195 -
213, In TestGetInstanceStatsSuccess replace the silent close ("_ = rc.Close()")
with an assertion (assert.NoError(t, rc.Close())) and remove the magic numeric
Tag by introducing/using a named constant (e.g., MemoryStatRSS = 6) for the
DomainMemoryStat Tag value; update the test to use MemoryStatRSS in the stats
slice and ensure the constant is declared at package scope so the
DomainMemoryStat{Tag: MemoryStatRSS, Val: 1048576} is used and the close error
is properly asserted in the TestGetInstanceStatsSuccess function.


Summary
This PR significantly improves test coverage from 74.3% to 75.2% and introduces major refactoring for better testability.
Key Changes
🔧 Refactoring
LibvirtClientinterface to decouple from concrete*libvirt.LibvirtdependencyRealLibvirtClientfor production useMockLibvirtClientfor testingPortDialerinterface to make network dialing testable✅ Test Coverage Improvements
🐛 Fixes
Coverage Stats
Testing
All tests passing:
go test ./...Linting clean:
Summary by CodeRabbit
Bug Fixes
New Features
Chores
✏️ Tip: You can customize this high-level summary in your review settings.