Refactor add V2 E2E testing for more maintainability #2836
Conversation
📝 Walkthrough
The changes introduce a comprehensive end-to-end testing framework focused on CRUD operations using ann-benchmarks datasets. New test suites (unary, multi, stream) and helper functions are added to manage client initialization, dataset loading, and error logging. In addition, a configuration package is provided to support various testing parameters. Enhancements in Kubernetes integration include a client interface, rollout restart, resource management, port-forwarding functionality, and status monitoring. Improvements to internal error handling, HTTP client transport initialization, and GitHub workflow configurations for credentials further strengthen the overall testing infrastructure.
Sequence Diagram(s)
sequenceDiagram
participant TestSuite as E2E Test Suite
participant Client as NewClient
participant DB as Vector Database
participant Logger as Logger/Monitor
TestSuite->>Client: Initialize client with configuration flags
Client->>DB: Establish connection and load dataset
TestSuite->>DB: Execute CRUD operations (Insert, Search, Update, Remove, Flush)
DB-->>Client: Return responses/errors
Client->>TestSuite: Propagate operation results
TestSuite->>Logger: Log success and error details
sequenceDiagram
participant Controller as Rollout Controller
participant K8sClient as Kubernetes Client
participant Resource as K8s Resource
Controller->>K8sClient: Retrieve resource by name
K8sClient-->>Controller: Return current resource state
Controller->>Resource: Update rollout annotation (with timestamp)
Resource-->>Controller: Confirm update or trigger conflict
Controller->>Controller: Retry update on conflict if necessary
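The conflict-retry flow in the second diagram maps naturally onto client-go's retry helper. A minimal sketch, assuming the rollout is triggered by bumping the kubectl.kubernetes.io/restartedAt annotation on a Deployment (the function name and arguments are illustrative, not the PR's actual implementation):

package k8s

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// rolloutRestart re-triggers a rollout by updating the restartedAt annotation
// on the pod template; RetryOnConflict re-runs the Get/Update cycle whenever
// the API server rejects the update with a Conflict error.
func rolloutRestart(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, err := cs.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if dep.Spec.Template.Annotations == nil {
			dep.Spec.Template.Annotations = map[string]string{}
		}
		dep.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
		_, err = cs.AppsV1().Deployments(namespace).Update(ctx, dep, metav1.UpdateOptions{})
		return err
	})
}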
Actionable comments posted: 5
🧹 Nitpick comments (2)
tests/v2/e2e/crud/crud_test.go (2)
539-539: Correct the spelling of 'updated'. Change “updateed” to “updated.”
- t.Logf("vector %v id %s updateed to %s", vec, id, res.String())
+ t.Logf("vector %v id %s updated to %s", vec, id, res.String())
319-617: Split the monolithic test into sub-tests for improved readability.
This test function covers many operations (insert, search, update, remove, etc.) in a single pass, which can make debugging difficult. Breaking it down into multiple sub-tests using t.Run can improve maintainability and clarity.
Example approach (a sketch follows this list):
- Organize each group of operations (insert, search, update, etc.) into separate sub-tests.
- Use t.Run("Insert", func(t *testing.T) { ... }) constructs for logical separation.
- Keep shared setup in a parent function or in init to reduce duplication.
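A minimal sketch of that layout; the *All helpers are hypothetical stand-ins for the existing operation loops:

func TestE2EUnaryCRUD(t *testing.T) {
	// Shared setup stays in the parent so every sub-test reuses one client.
	client, ctx := newClient(t, context.Background())

	t.Run("Insert", func(t *testing.T) { insertAll(t, ctx, client) })
	t.Run("Search", func(t *testing.T) { searchAll(t, ctx, client) })
	t.Run("Update", func(t *testing.T) { updateAll(t, ctx, client) })
	t.Run("Remove", func(t *testing.T) { removeAll(t, ctx, client) })
}

With this shape, go test -run TestE2EUnaryCRUD/Search can re-run a single failing stage without replaying the whole CRUD sequence.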
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/client/client.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/v2/e2e/client/client.go
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: CodeQL
- GitHub Check: runner / go build
- GitHub Check: Cloudflare Pages
tests/v2/e2e/crud/crud_test.go
Outdated
if err != nil {
	t.Errorf("failed to get timestamp: %v", err)
}
t.Lgof("id %s got timestamp: %v", id, res.String())
Correct misspelling in log method name.
It appears “t.Lgof” is a typo and won’t compile. It should be “t.Logf.”
- t.Lgof("id %s got timestamp: %v", id, res.String())
+ t.Logf("id %s got timestamp: %v", id, res.String())
tests/v2/e2e/crud/crud_test.go
Outdated
return res, rid, err
}

func linearSearch(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
Fix missing comma in function return parameters.
Again, the return parameter list needs a comma.
-func linearSearch(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
+func linearSearch(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, err error) {
tests/v2/e2e/crud/crud_test.go
Outdated
return res, rid, err
}

func linearSearchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
Fix missing comma in function return parameters.
Same compilation issue persists here.
-func linearSearchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
+func linearSearchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, err error) {
tests/v2/e2e/crud/crud_test.go
Outdated
return res, rid, err
}

func searchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
Fix missing comma in function return parameters.
Same issue as above: a missing comma between "rid string" and "err error."
-func searchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
+func searchByID(t *testing.T, ctx context.Context, client vald.Client, id string, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, err error) {
tests/v2/e2e/crud/crud_test.go
Outdated
}
}

func search(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
Fix missing comma in function return parameters.
There's a missing comma between the return variables "rid string" and "err error," causing a compilation error.
Apply this diff to fix the function signature:
-func search(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string err error) {
+func search(t *testing.T, ctx context.Context, client vald.Client, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, err error) {
Force-pushed from ad76b22 to d0826ae (Compare)
Deploying vald with Cloudflare Pages
| Latest commit: | d4f681c |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://7b13bb67.vald.pages.dev |
| Branch Preview URL: | https://refactor-test-e2e-add-v2-e2e.vald.pages.dev |
Actionable comments posted: 0
🧹 Nitpick comments (4)
tests/v2/e2e/crud/crud_test.go (4)
104-104: Simplify searchK assignment.
The current assignment is unnecessarily complex:
searchK = uint32(*flag.Uint64("search-k", 100, "top-k number for searching"))
Simplify it to:
-searchK = uint32(*flag.Uint64("search-k", 100, "top-k number for searching"))
+var searchKFlag = flag.Uint("search-k", 100, "top-k number for searching")
+searchK = uint32(*searchKFlag)
270-332: Reduce code duplication in search functions.
The search-related functions (search, searchByID, linearSearch, linearSearchByID) share significant code duplication in their configuration setup.
Consider extracting the common configuration setup into a helper function:
+func createSearchConfig(rid string) *payload.Search_Config {
+	return &payload.Search_Config{
+		RequestId: rid,
+		Num:       searchK,
+		Radius:    searchRadius,
+		Epsilon:   searchEpsilon,
+		Timeout:   searchTimeout,
+	}
+}
Then use it in all search functions:
-	Config: &payload.Search_Config{
-		RequestId: rid,
-		Num:       searchK,
-		Radius:    searchRadius,
-		Epsilon:   searchEpsilon,
-		Timeout:   searchTimeout,
-	},
+	Config: createSearchConfig(rid),
557-557: Fix typo in log message.
The log message contains a typo: "updateed" should be "updated".
- t.Logf("vector %v id %s updateed to %s", vec, id, res.String())
+ t.Logf("vector %v id %s updated to %s", vec, id, res.String())
334-637: Consider test parallelization for performance.
The test function performs multiple independent operations that could potentially run in parallel to reduce test execution time.
Consider using subtests with t.Run() and t.Parallel() for independent operations:
 func TestE2EUnaryCRUD(t *testing.T) {
 	t.Cleanup(teardown)
 	client, ctx := newClient(t, context.Background())
+
+	t.Run("group", func(t *testing.T) {
+		t.Run("search", func(t *testing.T) {
+			t.Parallel()
+			// search operations
+		})
+
+		t.Run("searchByID", func(t *testing.T) {
+			t.Parallel()
+			// searchByID operations
+		})
+
+		// ... other operations
+	})
 }
Note: Ensure operations that modify shared state (like insert/update/delete) are not parallelized.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/client/client.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/v2/e2e/client/client.go
🧰 Additional context used
🪛 GitHub Actions: Run formatter
tests/v2/e2e/crud/crud_test.go
[warning] 267-267: Code formatting changes detected. Please execute make format locally.
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: runner / go build
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
tests/v2/e2e/crud/crud_test.go (3)
1-42: LGTM!
The file structure follows Go best practices with proper build constraints, license header, package documentation, and organized imports.
44-80: LGTM!
While global variables are generally discouraged, their use here is appropriate for test configuration and parameters.
267-267: Run code formatter.
The CI pipeline detected formatting issues. Please run make format locally to fix the formatting.
✅ Verification successful
🏁 Script executed:
#!/bin/bash
# Description: Check if make format command exists and suggest running it
if [ -f "Makefile" ] && grep -q "^format:" "Makefile"; then
  echo "Found format target in Makefile. Please run 'make format' locally."
else
  echo "No format target found in Makefile. Please check the project's formatting guidelines."
fi

Length of output: 200
Formatting Issue: Please Run 'make format' Locally
- The Makefile contains the "format" target.
- Running make format locally should resolve the formatting issues detected in the CI pipeline.
🧰 Tools
🪛 GitHub Actions: Run formatter
[warning] 267-267: Code formatting changes detected. Please execute make format locally.
Force-pushed from bca1041 to 84845b2 (Compare)
Actionable comments posted: 3
🧹 Nitpick comments (3)
tests/v2/e2e/config/config.go (1)
117-131: Avoid mutating the map while iterating over it.
Inside the loop that processes metadata, the code manipulates the map (delete/add) while iterating. Although Go won't panic in single-threaded usage, it can lead to unexpected behavior in more complex concurrency scenarios. Consider building an auxiliary map (or storing deferred changes) for clarity and robust handling.
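A minimal sketch of the deferred-changes variant, assuming the metadata is a map[string]string whose keys need environment-variable expansion (both the field shape and os.ExpandEnv are illustrative):

// Collect mutations first, then apply them after iteration completes,
// so the map is never modified while being ranged over.
toDelete := make([]string, 0, len(d.Metadata))
toAdd := make(map[string]string, len(d.Metadata))
for k, v := range d.Metadata {
	if expanded := os.ExpandEnv(k); expanded != k {
		toDelete = append(toDelete, k)
		toAdd[expanded] = v
	}
}
for _, k := range toDelete {
	delete(d.Metadata, k)
}
for k, v := range toAdd {
	d.Metadata[k] = v
}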
tests/v2/e2e/crud/crud_test.go (2)
343-384: Reduce repetitive search loops for multiple algorithms.
You’re invoking the same search or searchByID in loops for each algorithm. This makes the code verbose. Consider extracting a helper function that iterates over all algorithms and performs searches in one place (a sketch follows below), to simplify your tests and make them more maintainable.
Also applies to: 386-417, 419-450
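A minimal sketch of such a helper, relying only on the generated payload.Search_AggregationAlgorithm_name map already used in these tests (the forEachAlgorithm name is hypothetical):

// forEachAlgorithm invokes fn once per aggregation algorithm, so each test
// states the per-algorithm work exactly once instead of repeating the loop.
func forEachAlgorithm(t *testing.T, fn func(t *testing.T, algo payload.Search_AggregationAlgorithm)) {
	t.Helper()
	for value, name := range payload.Search_AggregationAlgorithm_name {
		t.Logf("running with aggregation algorithm %s", name)
		fn(t, payload.Search_AggregationAlgorithm(value))
	}
}

Each repeated block then reduces to a single call such as forEachAlgorithm(t, func(t *testing.T, algo payload.Search_AggregationAlgorithm) { search(t, ctx, id, vec, algo) }).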
497-497: Correct the spelling in the log message. “updateed” should be “updated.”
- t.Logf("vector %v id %s updateed to %s", vec, id, res.String())
+ t.Logf("vector %v id %s updated to %s", vec, id, res.String())
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: CodeQL
- GitHub Check: check-format-diff
- GitHub Check: runner / go build
- GitHub Check: Analyze (go)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
tests/v2/e2e/config/config.go (1)
1-90: All good here.
No major concerns found in these sections. The initialization logic, default configuration, and binding functions are clean and appear to match the intended usage for your e2e testing framework.
Also applies to: 97-116, 132-293
tests/v2/e2e/crud/crud_test.go (2)
1-177: Initial setup logic looks solid.
TestMain and global variable setup appear correct for e2e testing. The usage of HDF5 dataset loading, port forwarding, and client initialization are properly structured.
229-229: No major concerns in the remaining code segments.
Overall, the CRUD tests for insert, update, remove, and upsert operations look coherent. Logging and error checks are handled reasonably.
Also applies to: 251-342, 385-385, 418-418, 451-496, 498-578
tests/v2/e2e/config/config.go
Outdated
Enabled bool `yaml:"enabled"`
PodName string `yaml:"pod_name"`
PodPort uint16 `yaml:"pod_port"`
LocalPort uint16 `yaml:"pod_port"`
Namespace string `yaml:"namespace"`
}
Fix YAML tag mismatch for LocalPort.
Currently, “LocalPort” reuses the same YAML tag as “PodPort,” which likely causes confusion or misconfiguration.
Apply this diff to correct the YAML tag:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
tests/v2/e2e/crud/crud_test.go
Outdated
func search(t *testing.T, ctx context.Context, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, rc float32, err error) {
	t.Helper()
	id := strconv.Itoa(idx)
	rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(algo)]
	res, err = client.Search(ctx, &payload.Search_Request{
		Vector: vec,
		Config: &payload.Search_Config{
			RequestId: rid,
			Num:       searchK,
			Radius:    searchRadius,
			Epsilon:   searchEpsilon,
			Timeout:   searchTimeout,
		},
	})
	if err != nil {
		t.Errorf("failed to search vector: %v", err)
	}
	rc = calculateRecall(t, res, idx)
	t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, rc, res.String())
	return res, rc, rid, err
}
Fix undefined variable and return-value ordering.
- The statement “id := strconv.Itoa(idx)” references an undeclared “idx”.
- The return values appear out of order (the function signature is (res, rid, rc, err), but the return is (res, rc, rid, err)).
Below is one possible fix, assuming you want to pass “idx int” and compute the ID inside the function:
-func search(t *testing.T, ctx context.Context, id string, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, rc float32, err error) {
- id := strconv.Itoa(idx)
- rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(algo)]
- ...
- return res, rc, rid, err
+func search(t *testing.T, ctx context.Context, idx int, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, rc float32, err error) {
+ generatedID := strconv.Itoa(idx)
+ rid = generatedID + "-" + payload.Search_AggregationAlgorithm_name[int32(algo)]
+ ...
+ return res, rid, rc, err
}
📝 Committable suggestion (review carefully before committing):

func search(t *testing.T, ctx context.Context, idx int, vec []float32, algo payload.Search_AggregationAlgorithm) (res *payload.Search_Response, rid string, rc float32, err error) {
	t.Helper()
	generatedID := strconv.Itoa(idx)
	rid = generatedID + "-" + payload.Search_AggregationAlgorithm_name[int32(algo)]
	res, err = client.Search(ctx, &payload.Search_Request{
		Vector: vec,
		Config: &payload.Search_Config{
			RequestId: rid,
			Num:       searchK,
			Radius:    searchRadius,
			Epsilon:   searchEpsilon,
			Timeout:   searchTimeout,
		},
	})
	if err != nil {
		t.Errorf("failed to search vector: %v", err)
	}
	rc = calculateRecall(t, res, idx)
	t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, rc, res.String())
	return res, rid, rc, err
}
Actionable comments posted: 2
🔭 Outside diff range comments (1)
tests/v2/e2e/crud/crud_test.go (1)
234-523: Guard against nil responses in the test logic.
Throughout TestE2EUnaryCRUD, if an error occurs (err != nil), you print the error but then call res.String() immediately afterward. If res is nil, this will panic. Consider returning early or skipping res.String() if err != nil.
Example fix:
 res, err := client.Search(ctx, &payload.Search_Request{ ... })
 if err != nil {
 	t.Errorf("failed to search vector: %v", err)
+	continue
 }
 t.Logf("... %s", res.String())
🧹 Nitpick comments (3)
tests/v2/e2e/crud/crud_test.go (2)
22-52: Consider avoiding global state for clarity.
Using global variables (e.g., client, ctx, cfg, ds) can sometimes introduce complexity and make test behavior harder to isolate. It can be clearer to pass these as parameters to functions where needed, which helps avoid unintended interactions across multiple tests.
126-143: Optional: Document metadata usage more explicitly.
newClient sets a metadata context only if the incoming meta map is non-nil. Consider documenting how or when meta is expected to be populated, to avoid confusion if tests rely on it.
tests/v2/e2e/config/config.go (1)
200-251: Caution with queries iteration in SearchConfig.Bind.
When sc.Queries is empty, no iteration occurs. That’s fine, but if sc.Queries unexpectedly contains nil elements, the loop would skip them safely, though it might hamper search coverage. Consider logging a warning for nil queries to aid debugging.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: check-format-diff
- GitHub Check: CodeQL
- GitHub Check: runner / go build
- GitHub Check: Analyze (go)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (10)
tests/v2/e2e/crud/crud_test.go (2)
1-20: Good inclusion of license and build constraints.
These lines provide standard licensing and build instructions for e2e tests. No issues noted.
53-124: Potential indefinite block on the error channel goroutine.
In TestMain, you launch a goroutine that listens for errors on ech (line 106+). If the client’s internal processes never send an error and ctx never cancels, the goroutine will remain blocked. This is typically acceptable in tests that run to completion, but if you anticipate partial teardown or abrupt test stops, ensure that ech is closed or that the code explicitly handles context cancellation.
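A minimal sketch of a cancellation-aware drain loop, assuming ech is the client's error channel and log is the project's logger (a sketch of the pattern, not the file's actual wiring):

go func() {
	for {
		select {
		case <-ctx.Done():
			return // stop draining once the test context is cancelled
		case err, ok := <-ech:
			if !ok {
				return // producer closed ech; nothing left to drain
			}
			if err != nil {
				log.Errorf("client error: %v", err)
			}
		}
	}
}()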
tests/v2/e2e/config/config.go (8)
1-20: Appropriate licensing and build constraints.
These lines correctly declare the e2e build tag and include licensing details.
22-56: Clear high-level config structure.
The Data struct captures a diverse set of configurations in a straightforward manner. No immediate concerns here.
58-79: Validate indexing for SearchQuery defaults.
When assigning defaults in sq.Timeout = timeutil.ParseWithDefault(...), you rely on Default.Search.Queries[0].Timeout. Ensure that Default.Search.Queries is non-empty; otherwise, you risk an index out of range error if queries are not defined.
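A minimal guard for that lookup, as a sketch (the warning text and the early return are illustrative and depend on the Bind method's actual shape):

// Avoid indexing Default.Search.Queries[0] when no default queries exist.
if len(Default.Search.Queries) == 0 {
	log.Warn("no default search queries configured; skipping timeout defaulting")
	return sq
}
// Default.Search.Queries[0] is safe to reference from here on.
sq.Timeout = timeutil.ParseWithDefault(sq.Timeout, Default.Search.Queries[0].Timeout)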
80-110: Duplicate YAML tag for LocalPort.
LocalPort is declared with yaml:"pod_port" (line 108), the same as PodPort. This inadvertently prevents separate configuration of local vs. pod port.
As previously suggested, fix with:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
116-198: Good modular Bind approach on Data struct.
Chaining the Bind calls for nested structs is neat and ensures config is post-processed thoroughly. The approach is consistent and maintainable.
253-289: Robust binding for WaitAfterInsert and Setting.
Using timeutil.ParseWithDefault and verifying numeric fields is a solid approach. The warnings for zero or negative values are helpful for diagnosing misconfigurations.
291-331: PortForward default logic is correct, aside from YAML duplication.
Falling back to localPort if 0 is assigned fosters a frictionless experience, but ensure you fix the YAML tag duplication to allow distinct overrides.
333-445: Load merges config with defaults gracefully.
Merging partial configs with Default is beneficial for simplifying user-provided YAML. The final Bind call ensures data consistency. No concerns.
func sleep(t *testing.T, dur time.Duration) {
	t.Logf("%v sleep for %s.", time.Now(), dur)
	time.Sleep(dur)
	t.Logf("%v sleep finished.", time.Now())
}

func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
	t.Helper()
	ns := map[string]struct{}{}
	for _, n := range neighbors {
		ns[strconv.Itoa(n)] = struct{}{}
	}

	for _, r := range resultIDs {
		if _, ok := ns[r]; ok {
			recall++
		}
	}

	return recall / float64(len(neighbors))
}

func calculateRecall(t *testing.T, res *payload.Search_Response, idx int) (rc float64) {
	t.Helper()
	topKIDs := make([]string, 0, len(res.GetResults()))
	for _, d := range res.GetResults() {
		topKIDs = append(topKIDs, d.GetId())
	}

	if len(topKIDs) == 0 {
		t.Errorf("empty result is returned for test ID %s: %#v", res.GetRequestId(), topKIDs)
		return
	}
	rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
	return rc
}
Prevent potential division by zero in recall calculation.
If neighbors is an empty slice, recall(t, resultIDs, neighbors) will do a division by zero on line 164. Add a check to avoid dividing by float64(len(neighbors)) if len(neighbors) == 0.
Example fix:
- return recall / float64(len(neighbors))
+ if len(neighbors) == 0 {
+ t.Errorf("neighbors is empty, cannot compute recall")
+ return 0.0
+ }
+ return recall / float64(len(neighbors))
func indexStatus(t *testing.T, ctx context.Context) {
	t.Helper()
	{
		res, err := client.IndexInfo(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexInfo %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexInfo %v", err)
			}
		}
		t.Logf("IndexInfo: %v", res.String())
	}
	{
		res, err := client.IndexDetail(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexDetail %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexDetail %v", err)
			}
		}
		t.Logf("IndexDetail: %v", res.String())
	}
	{
		res, err := client.IndexStatistics(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexStatistics %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexStatistics %v", err)
			}
		}
		t.Logf("IndexStatistics: %v", res.String())
	}
	{
		res, err := client.IndexStatisticsDetail(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexStatisticsDetail %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexStatisticsDetail %v", err)
			}
		}
		t.Logf("IndexStatisticsDetail: %v", res.String())
	}
}
Avoid nil pointer dereferences in indexStatus.
If any of the client.* calls fail, res can be nil but is still passed to res.String(). Return early or skip logging if err != nil to prevent nil pointer panics.
Example fix:
res, err := client.IndexInfo(ctx, &payload.Empty{})
if err != nil {
...
+ return
}
 t.Logf("IndexInfo: %v", res.String())

Force-pushed from e2bf032 to d400d67 (Compare)
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/v2/e2e/crud/crud_test.go (1)
280-280: Consider reducing verbosity of vector logs
These tests log the entire vector for each operation, potentially generating very large logs.
To keep logs more concise without losing necessary info, consider truncating or summarizing vectors:
- t.Logf("vector %v id %s ...", vec, id, ...)
+ t.Logf("vector of length %d id %s ...", len(vec), id, ...)
Also applies to: 314-314, 346-346, 377-377, 407-407
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: runner / textlint
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
tests/v2/e2e/config/config.go (2)
109-109: Duplicate YAML tag for LocalPort
LocalPort currently shares the same YAML tag ("pod_port") as PodPort, which likely results in misconfigurations.
To fix this, replace the YAML tag for LocalPort so it's distinct, for example:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
1-466: Overall Implementation Looks Good
The rest of the configuration code is well-structured and consistent with best practices for config binding and environment variable expansion. Great job!
tests/v2/e2e/crud/crud_test.go (1)
1-579: Overall Test Code Assessment
The tests are thoughtfully structured, covering all CRUD operations comprehensively with concurrency support. Nice work!
tests/v2/e2e/config/config.go
Outdated
}

// Log the entire configuration as raw YAML for debugging purposes.
log.Debug(config.ToRawYaml(d))
Potential PII exposure in debug log
Logging the entire configuration (including Metadata) can inadvertently expose sensitive or private information in logs.
Consider removing or sanitizing the debug log, for example:
- log.Debug(config.ToRawYaml(d))
+ // log.Debug(config.ToRawYaml(d)) // Remove or sanitize to prevent sensitive data leakage
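If the debug line is worth keeping, an alternative to deleting it is to log a redacted copy. A minimal sketch, assuming Metadata is a map[string]string (the helper name and field type are assumptions):

// redactedForLog returns a shallow copy of the config whose metadata values
// are masked, keeping the YAML dump useful without leaking secrets.
func redactedForLog(d *Data) *Data {
	cp := *d
	cp.Metadata = make(map[string]string, len(d.Metadata))
	for k := range d.Metadata {
		cp.Metadata[k] = "[REDACTED]"
	}
	return &cp
}

// Usage: log.Debug(config.ToRawYaml(redactedForLog(d)))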
Force-pushed from d400d67 to c5340d3 (Compare)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/v2/e2e/crud/crud_test.go
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: check-format-diff
- GitHub Check: CodeQL
- GitHub Check: runner / go build
- GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
tests/v2/e2e/config/config.go (4)
1-16: Overall license and header
Everything looks good here. No issues found with the file headers and license comment.
110-111: Fix YAML tag mismatch for LocalPort.
Currently, “LocalPort” reuses the same YAML tag as “PodPort,” which likely causes confusion or misconfiguration.
Apply the fix from the past comment:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
200-200: Potential PII exposure in debug log.
Logging the entire configuration, including metadata, could inadvertently expose sensitive or private information in logs.
Consider removing or sanitizing the debug log:
- log.Debug(config.ToRawYaml(d))
+ // log.Debug(config.ToRawYaml(d)) // Remove or sanitize sensitive data
235-235: Verify reference to Default.Search.Queries[0].Timeout.
Accessing Default.Search.Queries[0] may cause an out-of-range panic if any code removes or empties this slice. Confirm that at least one query always exists in Default.Search.Queries or add appropriate safety checks.
tests/v2/e2e/config/config.go
Outdated
PodPort uint16 `yaml:"pod_port"` // The port number on the pod.
LocalPort uint16 `yaml:"pod_port"` // The local port number; if not set, it defaults to PodPort.
Namespace string `yaml:"namespace"` // The Kubernetes namespace of the pod.
}
🛠️ Refactor suggestion
Doc mismatch for local port default behavior.
The doc mentions "if not set, it defaults to PodPort," but the code in lines 340–342 sets it to localPort (value 8081). Either update the comment to reflect the actual logic or change the assignment to match the comment to avoid confusion.
- // The local port number; if not set, it defaults to PodPort.
+ // The local port number; if not set, it defaults to 8081.
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| PodPort uint16 `yaml:"pod_port"` // The port number on the pod. | |
| LocalPort uint16 `yaml:"pod_port"` // The local port number; if not set, it defaults to PodPort. | |
| Namespace string `yaml:"namespace"` // The Kubernetes namespace of the pod. | |
| } | |
| PodPort uint16 `yaml:"pod_port"` // The port number on the pod. | |
| LocalPort uint16 `yaml:"pod_port"` // The local port number; if not set, it defaults to 8081. | |
| Namespace string `yaml:"namespace"` // The Kubernetes namespace of the pod. | |
| } |
Force-pushed from eb34989 to 48fea73 (Compare)
Actionable comments posted: 3
🧹 Nitpick comments (4)
tests/v2/e2e/crud/unary_test.go (2)
51-83: Consider capturing the error from eg.Wait()
Throughout the test, the error returned by eg.Wait() is discarded. This can hide failures in concurrent operations, as any goroutine error would go unnoticed. To ensure that test failures are reported correctly, capture the returned error and fail the test accordingly.
Example fix:
-eg.Wait()
+if err := eg.Wait(); err != nil {
+	t.Errorf("errgroup encountered an error: %v", err)
+}
Also applies to: 89-123, 125-159, 161-195, 197-232, 233-279, 280-313, 314-342, 343-376
91-122: Reduce repetitive search logic
Multiple sections of the code perform nearly identical search logic with slight differences (e.g., searching by vector vs. ID). Consider extracting these repeated blocks into a reusable helper function or method to adhere to DRY (Don’t Repeat Yourself) principles and improve maintainability.
Also applies to: 127-158, 163-194, 199-230
tests/v2/e2e/crud/multi_test.go (2)
567-567: Correct the typo in log statement. “upsertd” should be “upserted” for clarity.
- t.Logf("vectors %s upsertd %s", req.String(), res.String())
+ t.Logf("vectors %s upserted %s", req.String(), res.String())
36-622: Consider refactoring repeated concurrency patterns into reusable helper methods.
The test repeatedly chunks requests, invokes MultiInsert, MultiSearch, MultiUpdate, etc. with the same concurrency pattern. Factoring this into a helper (see the sketch below) could reduce duplication, simplify maintenance, and make the tests more readable.
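A minimal sketch of such a helper, assuming Go generics plus the errgroup and safety packages these tests already use (runChunked is a hypothetical name):

// runChunked splits reqs into fixed-size chunks and feeds each chunk to call
// through an errgroup bounded by the given concurrency, mirroring the pattern
// repeated across the Multi* loops.
func runChunked[R any](t *testing.T, ctx context.Context, reqs []R, chunk int, concurrency uint, call func(ctx context.Context, part []R) error) {
	t.Helper()
	eg, ctx := errgroup.New(ctx)
	eg.SetLimit(int(concurrency))
	for start := 0; start < len(reqs); start += chunk {
		end := start + chunk
		if end > len(reqs) {
			end = len(reqs)
		}
		part := reqs[start:end]
		eg.Go(safety.RecoverFunc(func() error {
			return call(ctx, part)
		}))
	}
	if err := eg.Wait(); err != nil {
		t.Errorf("chunked run failed: %v", err)
	}
}

Each Multi* stage then only supplies its request slice, chunk size, and the gRPC call to issue.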
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: runner / go build
- GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
tests/v2/e2e/config/config.go (3)
111-112: Fix YAML tag mismatch for LocalPort
LocalPort is incorrectly using the same YAML tag (pod_port) as PodPort. This can lead to confusion or misconfiguration.
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
110-113: Doc mismatch for local port default behavior
The comment states “if not set, it defaults to PodPort,” but the code actually defaults to a constant value (8081). Update the doc or logic for consistency.
- // The local port number; if not set, it defaults to PodPort.
+ // The local port number; if not set, it defaults to 8081.
200-200: Potential PII exposure in debug log
Logging the entire configuration (including metadata) using config.ToRawYaml(d) can inadvertently leak sensitive or private information. Consider removing or sanitizing this debug output.
- log.Debug(config.ToRawYaml(d))
+ // log.Debug(config.ToRawYaml(d)) // Remove or sanitize to prevent sensitive data leakage
tests/v2/e2e/crud/crud_test.go (1)
150-164: Guard against potential division by zero in the recall() function.
If neighbors is empty, float64(len(neighbors)) equals zero, leading to a division by zero. This concern matches a past review comment.
Below is an example safeguard:
func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
	if len(neighbors) == 0 {
		t.Errorf("neighbors slice is empty, recall calculation is undefined")
		return 0
	}
	return recall / float64(len(neighbors))
}
ts = timestamp
}
idx = 0
datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
Likely off-by-one error in remove slice range
Here, you're using ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num] instead of referencing cfg.Remove for both the offset and length. This likely removes a different slice than intended.
Suggested fix:
- datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+ datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
eg.Go(safety.RecoverFunc(func() error {
	obj, err := client.GetObject(ctx, &payload.Object_VectorRequest{
		Id: &payload.Object_ID{Id: id},
	})
	if err != nil {
		t.Errorf("failed to get object: %v", err)
	}
	t.Logf("id %s got object: %v", id, obj.String())
Avoid potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil.
Here's an example fix:
- if err != nil {
- t.Errorf("failed to get object: %v", err)
- }
- t.Logf("id %s got object: %v", id, obj.String())
+ if err != nil {
+ t.Errorf("failed to get object: %v", err)
+ return nil
+ }
+ if obj != nil {
+ t.Logf("id %s got object: %v", id, obj.String())
+ } else {
+ t.Errorf("failed to get object: object is nil")
+ return nil
+ }
Force-pushed from 48fea73 to 2b0c505 (Compare)
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/v2/e2e/crud/crud_test.go (1)
127-144: Check adherence of newClient function to project coding standards.
The pipeline indicates that newClient was reformatted. Confirm gofmt/goimports alignment and any project-specific style rules (e.g., indentation, naming, doc comments).
- // If there's a specific style you need to enforce, apply it here.
+ // Example: run "goimports" or any internal linter your project provides.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/v2/e2e/crud/stream_test.go
🧰 Additional context used
🪛 GitHub Actions: Run formatter
tests/v2/e2e/crud/crud_test.go
[warning] 27-27: Unused import 'google.golang.org/grpc/metadata'.
[warning] 38-38: Function 'newClient' has been reformatted. Ensure it adheres to the project's coding standards.
tests/v2/e2e/crud/multi_test.go
[warning] 25-25: Unused import 'google.golang.org/protobuf/types/known/wrapperspb'.
tests/v2/e2e/crud/unary_test.go
[warning] 24-24: Unused import 'google.golang.org/protobuf/types/known/wrapperspb'.
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: runner / go build
- GitHub Check: Cloudflare Pages
🔇 Additional comments (8)
tests/v2/e2e/config/config.go (2)
111-111: Fix YAML tag mismatch for LocalPort.
Both PodPort and LocalPort share the same YAML tag (pod_port), causing potential confusion and misconfiguration. This issue was previously flagged and remains unresolved.
Apply this diff to fix the YAML tag:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
465-466: Potential PII exposure in debug log.
Logging the entire configuration may expose sensitive or private information, especially since Metadata can contain secrets or internal data. This issue was previously reported and remains a concern.
Consider removing or sanitizing the debug log, for example:
- log.Debug(config.ToRawYaml(cfg))
+ // log.Debug(config.ToRawYaml(cfg)) // Remove or sanitize to prevent sensitive data leakage
tests/v2/e2e/crud/multi_test.go (3)
78-90: Guard against potential nil pointer dereferences in logging.
In these code blocks, after detecting err != nil, you still log res.String() without first verifying that res != nil. If res is nil, calling .String() will panic. This issue closely mirrors past feedback about checking nil after errors.
A minimal fix is to skip logging if res is nil, for example:
 if err != nil {
 	t.Errorf("failed to insert vector: %v", err)
+	return nil
 }
+if res == nil {
+	t.Errorf("failed to insert vector: response was nil")
+	return nil
+}
 t.Logf("vectors %s inserted %s", req.String(), res.String())
Also applies to: 93-104, 144-162, 167-185, 221-239, 244-262, 298-316, 321-339, 375-393, 398-416, 527-539, 542-554, 582-594, 597-609
424-431: Avoid potential nil pointer dereference for GetObject.
If err is non-nil, obj may be nil, and calling obj.String() will panic. This issue was raised previously and still exists here.
Proposed fix (illustrative):
 if err != nil {
 	t.Errorf("failed to get object: %v", err)
+	return nil
 }
+if obj == nil {
+	t.Errorf("object is nil; cannot log details")
+	return nil
+}
 t.Logf("id %s got object: %v", id, obj.String())
23-29: Re-verify the usage of wrapperspb import.
The pipeline reports an unused import of "google.golang.org/protobuf/types/known/wrapperspb", but this appears to be used in multiple places (e.g., lines 121, 198, 275, etc.). Please verify whether the pipeline warning is a false positive or if there's any conditional code path that removes its usage.
✅ Verification successful
🏁 Script executed:
#!/bin/bash
# Description: Search for usage of wrapperspb in multi_test.go
rg 'wrapperspb' -A 5

Length of output: 12068
Wrapperspb usage in tests/v2/e2e/crud/multi_test.go is confirmed to be valid.
The search results clearly show that within the file (and related test files) the wrapperspb package is utilized (for example, via calls like wrapperspb.Float(query.Ratio)), so the pipeline warning appears to be a false positive.
- The import of "google.golang.org/protobuf/types/known/wrapperspb" is indeed referenced multiple times (e.g., when initializing var ratio *wrapperspb.FloatValue).
- Similar usages are found in other test files such as unary_test.go and stream_test.go.
🧰 Tools
🪛 GitHub Actions: Run formatter
[warning] 25-25: Unused import 'google.golang.org/protobuf/types/known/wrapperspb'.
tests/v2/e2e/crud/crud_test.go (2)
185-195: Prevent nil pointer dereferences in indexStatus.
Similar to previous feedback, you log res.String() even if err is non-nil. If res is nil, calling .String() will panic.
An example fix:
 res, err := client.IndexInfo(ctx, &payload.Empty{})
 if err != nil {
 	...
+	return
 }
+if res == nil {
+	t.Errorf("received nil response from IndexInfo")
+	return
+}
 t.Logf("IndexInfo: %v", res.String())
Also applies to: 197-208, 210-220, 222-233
30-30: Confirm usage of metadata import.
The pipeline flags this import as unused, but the code below (e.g., line 141) actually uses metadata.NewOutgoingContext. Verify that the pipeline warning is up-to-date or whether some conditional compilation might make this import optional.
✅ Verification successful
🏁 Script executed:
#!/bin/bash
# Description: Search for usage of "google.golang.org/grpc/metadata" in crud_test.go
rg 'metadata.' -A 5

Length of output: 69942
Confirm metadata usage in tests/v2/e2e/crud/crud_test.go
I verified that the file indeed calls metadata.NewOutgoingContext (around line 141), so the import from "google.golang.org/grpc/metadata" is required. The pipeline warning appears to be a false positive, perhaps due to conditional compilation or build context.
- The call to metadata.NewOutgoingContext confirms the import is in active use.
27-27: False positive: wrapperspb import is used.
The pipeline warning about unused import is a false positive. The wrapperspb package is used multiple times in the code for creating FloatValue wrappers (e.g., lines 104, 147, 190, 233).
| func TestE2EUnarySkipStrictExistsCheckCRUD(t *testing.T) { | ||
| timestamp := time.Now().UnixNano() | ||
|
|
||
| { | ||
| res, err := client.IndexProperty(ctx, &payload.Empty{}) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to get IndexProperty %v status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to get IndexProperty %v", err) | ||
| } | ||
| } | ||
| t.Logf("IndexProperty: %v", res.String()) | ||
| } | ||
|
|
||
| t.Log("starting test #1 run Update with SkipStrictExistCheck=true and check that it fails.") | ||
| eg, _ := errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| _, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: true, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if !ok || st == nil || st.Code() != codes.NotFound { | ||
| t.Errorf("update vector response is not NotFound: %v with SkipStrictExistCheck=true", err) | ||
| } | ||
| } | ||
| t.Logf("received a NotFound error on #1: %s", err.Error()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| t.Log("starting test #2 run Update with SkipStrictExistCheck=false, and check that the internal Remove Operation returns a NotFound error.") | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| _, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: false, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if !ok || st == nil || st.Code() != codes.NotFound { | ||
| t.Errorf("update vector response is not NotFound: %v with SkipStrictExistCheck=false", err) | ||
| } | ||
| } | ||
| t.Logf("received a NotFound error on #2: %s", err.Error()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| t.Log("starting test #3 run Insert with SkipStrictExistCheck=false and confirmed that it succeeded") | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Insert.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Insert.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Insert(ctx, &payload.Insert_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Insert_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: false, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to insert vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to insert vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s inserted on #3 to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| sleep(t, cfg.Index.WaitAfterInsert) | ||
|
|
||
| indexStatus(t, ctx) | ||
|
|
||
| t.Log("starting test #4 run Update with SkipStrictExistCheck=false & a different vector, and check that it succeeds") | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: false, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to update vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to update vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s updated on #4 to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| sleep(t, cfg.Index.WaitAfterInsert) | ||
|
|
||
| t.Log("starting test #5 run Update with SkipStrictExistCheck=false, and check that the internal Remove Operation returns a NotFound error.") | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| _, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: false, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if !ok || st == nil || st.Code() != codes.AlreadyExists { | ||
| t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err) | ||
| } | ||
| } | ||
| t.Logf("received a NotFound error on #5: %s", err.Error()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| t.Log("starting test #6 run Update with SkipStrictExistCheck=true & 4 and check that it succeess") | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: true, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to update vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to update vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s updated on #6 to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Remove.Concurrency)) | ||
| for i := range ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Remove.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Remove(ctx, &payload.Remove_Request{ | ||
| Id: &payload.Object_ID{Id: id}, | ||
| Config: &payload.Remove_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Remove.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to remove vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to remove vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("id %s'd vector removed to %s", id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Upsert.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Upsert.Offset : cfg.Upsert.Offset+cfg.Upsert.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Upsert.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Upsert(ctx, &payload.Upsert_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Upsert_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Upsert.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to upsert vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to upsert vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s upserted to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| { | ||
| res, err := client.Flush(ctx, &payload.Flush_Request{}) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to flush %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to flush %v", err) | ||
| } | ||
| } | ||
| t.Logf("flushed %s", res.String()) | ||
| } | ||
|
|
||
| indexStatus(t, ctx) | ||
| } |
🛠️ Refactor suggestion
Refactor test cases using subtests.
The test cases should be organized using t.Run() for better clarity and independent execution. This will also allow running specific test cases using the -run flag.
func TestE2EUnarySkipStrictExistsCheckCRUD(t *testing.T) {
- t.Log("starting test #1 run Update with SkipStrictExistCheck=true and check that it fails.")
+ t.Run("UpdateWithSkipStrictExistCheckTrue", func(t *testing.T) {
+ // Test case #1 implementation
+ })
Fix typo in log message.
The log message on line 634 incorrectly states "NotFound" when the error is actually "AlreadyExists".
- t.Logf("received a NotFound error on #5: %s", err.Error())
+ t.Logf("received an AlreadyExists error on #5: %s", err.Error())Add test case documentation.
Each test case should have proper documentation explaining the test scenario, expected behavior, and any preconditions.
+// TestE2EUnarySkipStrictExistsCheckCRUD tests the behavior of SkipStrictExistCheck flag
+// in various scenarios:
+// 1. Update with SkipStrictExistCheck=true on non-existent vector
+// 2. Update with SkipStrictExistCheck=false on non-existent vector
+// 3. Insert with SkipStrictExistCheck=false
+// 4. Update with SkipStrictExistCheck=false on existing vector
+// 5. Update with SkipStrictExistCheck=false with same vector
+// 6. Update with SkipStrictExistCheck=true on existing vector
func TestE2EUnarySkipStrictExistsCheckCRUD(t *testing.T) {
Committable suggestion skipped: line range outside the PR's diff.
| func TestE2EUnaryCRUD(t *testing.T) { | ||
| timestamp := time.Now().UnixNano() | ||
|
|
||
| { | ||
| res, err := client.IndexProperty(ctx, &payload.Empty{}) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to get IndexProperty %v status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to get IndexProperty %v", err) | ||
| } | ||
| } | ||
| t.Logf("IndexProperty: %v", res.String()) | ||
| } | ||
|
|
||
| var eg errgroup.Group | ||
| if cfg.Insert.Num != 0 { | ||
| eg, _ := errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Insert.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Insert.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Insert(ctx, &payload.Insert_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Insert_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to insert vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to insert vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s inserted to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| sleep(t, cfg.Index.WaitAfterInsert) | ||
|
|
||
| } | ||
|
|
||
| indexStatus(t, ctx) | ||
|
|
||
| if cfg.Search.Num != 0 { | ||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Search.Concurrency)) | ||
| for i, vec := range ds.Test[cfg.Search.Offset : cfg.Search.Offset+cfg.Search.Num] { | ||
| for _, query := range cfg.Search.Queries { | ||
| id := strconv.Itoa(i) | ||
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| var ratio *wrapperspb.FloatValue | ||
| if query.Ratio != 0 { | ||
| ratio = wrapperspb.Float(query.Ratio) | ||
| } else { | ||
| ratio = nil | ||
| } | ||
| res, err := client.Search(ctx, &payload.Search_Request{ | ||
| Vector: vec, | ||
| Config: &payload.Search_Config{ | ||
| RequestId: rid, | ||
| Num: query.K, | ||
| Radius: query.Radius, | ||
| Epsilon: query.Epsilon, | ||
| Timeout: query.Timeout.Nanoseconds(), | ||
| AggregationAlgorithm: query.Algorithm, | ||
| MinNum: query.MinNum, | ||
| Ratio: ratio, | ||
| Nprobe: query.Nprobe, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to search vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| } | ||
| eg.Wait() | ||
| } | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.SearchByID.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.SearchByID.Offset : cfg.SearchByID.Offset+cfg.SearchByID.Num] { | ||
| for _, query := range cfg.SearchByID.Queries { | ||
| id := strconv.Itoa(i) | ||
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| var ratio *wrapperspb.FloatValue | ||
| if query.Ratio != 0 { | ||
| ratio = wrapperspb.Float(query.Ratio) | ||
| } else { | ||
| ratio = nil | ||
| } | ||
|
|
||
| res, err := client.SearchByID(ctx, &payload.Search_IDRequest{ | ||
| Id: id, | ||
| Config: &payload.Search_Config{ | ||
| RequestId: rid, | ||
| Num: query.K, | ||
| Radius: query.Radius, | ||
| Epsilon: query.Epsilon, | ||
| Timeout: query.Timeout.Nanoseconds(), | ||
| AggregationAlgorithm: query.Algorithm, | ||
| MinNum: query.MinNum, | ||
| Ratio: ratio, | ||
| Nprobe: query.Nprobe, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to search vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.LinearSearch.Concurrency)) | ||
| for i, vec := range ds.Test[cfg.LinearSearch.Offset : cfg.LinearSearch.Offset+cfg.LinearSearch.Num] { | ||
| for _, query := range cfg.LinearSearch.Queries { | ||
| id := strconv.Itoa(i) | ||
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| var ratio *wrapperspb.FloatValue | ||
| if query.Ratio != 0 { | ||
| ratio = wrapperspb.Float(query.Ratio) | ||
| } else { | ||
| ratio = nil | ||
| } | ||
|
|
||
| res, err := client.LinearSearch(ctx, &payload.Search_Request{ | ||
| Vector: vec, | ||
| Config: &payload.Search_Config{ | ||
| RequestId: rid, | ||
| Num: query.K, | ||
| Radius: query.Radius, | ||
| Epsilon: query.Epsilon, | ||
| Timeout: query.Timeout.Nanoseconds(), | ||
| AggregationAlgorithm: query.Algorithm, | ||
| MinNum: query.MinNum, | ||
| Ratio: ratio, | ||
| Nprobe: query.Nprobe, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to search vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.LinearSearchByID.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.LinearSearchByID.Offset : cfg.LinearSearchByID.Offset+cfg.LinearSearchByID.Num] { | ||
| for _, query := range cfg.LinearSearchByID.Queries { | ||
| id := strconv.Itoa(i) | ||
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| var ratio *wrapperspb.FloatValue | ||
| if query.Ratio != 0 { | ||
| ratio = wrapperspb.Float(query.Ratio) | ||
| } else { | ||
| ratio = nil | ||
| } | ||
|
|
||
| res, err := client.LinearSearchByID(ctx, &payload.Search_IDRequest{ | ||
| Id: id, | ||
| Config: &payload.Search_Config{ | ||
| RequestId: rid, | ||
| Num: query.K, | ||
| Radius: query.Radius, | ||
| Epsilon: query.Epsilon, | ||
| Timeout: query.Timeout.Nanoseconds(), | ||
| AggregationAlgorithm: query.Algorithm, | ||
| MinNum: query.MinNum, | ||
| Ratio: ratio, | ||
| Nprobe: query.Nprobe, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to search vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Object.Concurrency)) | ||
| for i := range ds.Train[cfg.Object.Offset : cfg.Object.Offset+cfg.Object.Num] { | ||
| id := strconv.Itoa(i) | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| obj, err := client.GetObject(ctx, &payload.Object_VectorRequest{ | ||
| Id: &payload.Object_ID{Id: id}, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to get object: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to get object: %v", err) | ||
| } | ||
| } | ||
| t.Logf("id %s got object: %v", id, obj.String()) | ||
|
|
||
| exists, err := client.Exists(ctx, &payload.Object_ID{Id: id}) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to check object exists: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to check object exitst: %v", err) | ||
| } | ||
| } | ||
| t.Logf("id %s exists: %v", id, exists.String()) | ||
|
|
||
| res, err := client.GetTimestamp(ctx, &payload.Object_TimestampRequest{ | ||
| Id: &payload.Object_ID{Id: id}, | ||
| }) | ||
| if err != nil { | ||
| t.Errorf("failed to get timestamp: %v", err) | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to get object timestamp: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to get object timestamp: %v", err) | ||
| } | ||
| } | ||
| t.Logf("id %s got timestamp: %v", id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Update.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Update.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Update(ctx, &payload.Update_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Update_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Update.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to update vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to update vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s updated to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Remove.Concurrency)) | ||
| for i := range ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Remove.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Remove(ctx, &payload.Remove_Request{ | ||
| Id: &payload.Object_ID{Id: id}, | ||
| Config: &payload.Remove_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Remove.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to remove vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to remove vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("id %s'd vector removed to %s", id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| eg, _ = errgroup.New(ctx) | ||
| eg.SetLimit(int(cfg.Upsert.Concurrency)) | ||
| for i, vec := range ds.Train[cfg.Upsert.Offset : cfg.Upsert.Offset+cfg.Upsert.Num] { | ||
| id := strconv.Itoa(i) | ||
| ts := cfg.Upsert.Timestamp | ||
| if ts == 0 { | ||
| ts = timestamp | ||
| } | ||
| eg.Go(safety.RecoverFunc(func() error { | ||
| res, err := client.Upsert(ctx, &payload.Upsert_Request{ | ||
| Vector: &payload.Object_Vector{ | ||
| Id: id, | ||
| Vector: vec, | ||
| Timestamp: ts, | ||
| }, | ||
| Config: &payload.Upsert_Config{ | ||
| Timestamp: ts, | ||
| SkipStrictExistCheck: cfg.Upsert.SkipStrictExistCheck, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to upsert vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to upsert vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("vector %v id %s upserted to %s", vec, id, res.String()) | ||
| return nil | ||
| })) | ||
| } | ||
| eg.Wait() | ||
|
|
||
| { | ||
| rts := time.Now().Add(-time.Hour).UnixNano() | ||
| res, err := client.RemoveByTimestamp(ctx, &payload.Remove_TimestampRequest{ | ||
| Timestamps: []*payload.Remove_Timestamp{ | ||
| { | ||
| Timestamp: rts, | ||
| Operator: payload.Remove_Timestamp_Le, | ||
| }, | ||
| }, | ||
| }) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to remove by timestamp vector: %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to remove by timestamp vector: %v", err) | ||
| } | ||
| } | ||
| t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String()) | ||
| } | ||
|
|
||
| { | ||
| res, err := client.Flush(ctx, &payload.Flush_Request{}) | ||
| if err != nil { | ||
| st, ok := status.FromError(err) | ||
| if ok && st != nil { | ||
| t.Errorf("failed to flush %v, status: %s", err, st.String()) | ||
| } else { | ||
| t.Errorf("failed to flush %v", err) | ||
| } | ||
| } | ||
| t.Logf("flushed %s", res.String()) | ||
| } | ||
|
|
||
| indexStatus(t, ctx) | ||
| } |
🛠️ Refactor suggestion
Refactor repeated error handling pattern.
The error handling pattern is duplicated across multiple operations. Consider extracting it into a helper function to improve maintainability and reduce code duplication.
+func handleError(t *testing.T, err error, operation string) {
+ if err != nil {
+ st, ok := status.FromError(err)
+ if ok && st != nil {
+ t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
+ } else {
+ t.Errorf("failed to %s: %v", operation, err)
+ }
+ }
+}
Add proper test cleanup.
The test should ensure proper cleanup even if intermediate operations fail. Consider using t.Cleanup() to register cleanup functions.
+func TestE2EUnaryCRUD(t *testing.T) {
+ t.Cleanup(func() {
+ _, err := client.Flush(ctx, &payload.Flush_Request{})
+ if err != nil {
+ t.Logf("cleanup failed: %v", err)
+ }
+ })
Extract timestamp handling logic.
The timestamp handling logic is duplicated across multiple operations. Consider extracting it into a helper function.
+func getTimestamp(cfg *Config, defaultTimestamp int64) int64 {
+ if cfg.Timestamp != 0 {
+ return cfg.Timestamp
+ }
+ return defaultTimestamp
+}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| func TestE2EUnaryCRUD(t *testing.T) { | |
| timestamp := time.Now().UnixNano() | |
| { | |
| res, err := client.IndexProperty(ctx, &payload.Empty{}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get IndexProperty %v status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get IndexProperty %v", err) | |
| } | |
| } | |
| t.Logf("IndexProperty: %v", res.String()) | |
| } | |
| var eg errgroup.Group | |
| if cfg.Insert.Num != 0 { | |
| eg, _ := errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Insert.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Insert.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Insert(ctx, &payload.Insert_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Insert_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to insert vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to insert vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s inserted to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| sleep(t, cfg.Index.WaitAfterInsert) | |
| } | |
| indexStatus(t, ctx) | |
| if cfg.Search.Num != 0 { | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Search.Concurrency)) | |
| for i, vec := range ds.Test[cfg.Search.Offset : cfg.Search.Offset+cfg.Search.Num] { | |
| for _, query := range cfg.Search.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.Search(ctx, &payload.Search_Request{ | |
| Vector: vec, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| } | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.SearchByID.Concurrency)) | |
| for i, vec := range ds.Train[cfg.SearchByID.Offset : cfg.SearchByID.Offset+cfg.SearchByID.Num] { | |
| for _, query := range cfg.SearchByID.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.SearchByID(ctx, &payload.Search_IDRequest{ | |
| Id: id, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.LinearSearch.Concurrency)) | |
| for i, vec := range ds.Test[cfg.LinearSearch.Offset : cfg.LinearSearch.Offset+cfg.LinearSearch.Num] { | |
| for _, query := range cfg.LinearSearch.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.LinearSearch(ctx, &payload.Search_Request{ | |
| Vector: vec, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.LinearSearchByID.Concurrency)) | |
| for i, vec := range ds.Train[cfg.LinearSearchByID.Offset : cfg.LinearSearchByID.Offset+cfg.LinearSearchByID.Num] { | |
| for _, query := range cfg.LinearSearchByID.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.LinearSearchByID(ctx, &payload.Search_IDRequest{ | |
| Id: id, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Object.Concurrency)) | |
| for i := range ds.Train[cfg.Object.Offset : cfg.Object.Offset+cfg.Object.Num] { | |
| id := strconv.Itoa(i) | |
| eg.Go(safety.RecoverFunc(func() error { | |
| obj, err := client.GetObject(ctx, &payload.Object_VectorRequest{ | |
| Id: &payload.Object_ID{Id: id}, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get object: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get object: %v", err) | |
| } | |
| } | |
| t.Logf("id %s got object: %v", id, obj.String()) | |
| exists, err := client.Exists(ctx, &payload.Object_ID{Id: id}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to check object exists: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to check object exitst: %v", err) | |
| } | |
| } | |
| t.Logf("id %s exists: %v", id, exists.String()) | |
| res, err := client.GetTimestamp(ctx, &payload.Object_TimestampRequest{ | |
| Id: &payload.Object_ID{Id: id}, | |
| }) | |
| if err != nil { | |
| t.Errorf("failed to get timestamp: %v", err) | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get object timestamp: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get object timestamp: %v", err) | |
| } | |
| } | |
| t.Logf("id %s got timestamp: %v", id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Update.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Update.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Update(ctx, &payload.Update_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Update_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Update.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to update vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to update vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s updated to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Remove.Concurrency)) | |
| for i := range ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Remove.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Remove(ctx, &payload.Remove_Request{ | |
| Id: &payload.Object_ID{Id: id}, | |
| Config: &payload.Remove_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Remove.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to remove vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to remove vector: %v", err) | |
| } | |
| } | |
| t.Logf("id %s'd vector removed to %s", id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Upsert.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Upsert.Offset : cfg.Upsert.Offset+cfg.Upsert.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Upsert.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Upsert(ctx, &payload.Upsert_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Upsert_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Upsert.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to upsert vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to upsert vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s upserted to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| { | |
| rts := time.Now().Add(-time.Hour).UnixNano() | |
| res, err := client.RemoveByTimestamp(ctx, &payload.Remove_TimestampRequest{ | |
| Timestamps: []*payload.Remove_Timestamp{ | |
| { | |
| Timestamp: rts, | |
| Operator: payload.Remove_Timestamp_Le, | |
| }, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to remove by timestamp vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to remove by timestamp vector: %v", err) | |
| } | |
| } | |
| t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String()) | |
| } | |
| { | |
| res, err := client.Flush(ctx, &payload.Flush_Request{}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to flush %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to flush %v", err) | |
| } | |
| } | |
| t.Logf("flushed %s", res.String()) | |
| } | |
| indexStatus(t, ctx) | |
| } | |
| package crud_test | |
| // (Other imports remain unchanged) | |
| import ( | |
| "strconv" | |
| "testing" | |
| "time" | |
| "golang.org/x/sync/errgroup" | |
| "google.golang.org/grpc/status" | |
| "google.golang.org/protobuf/types/known/wrapperspb" | |
| // ... other necessary imports | |
| ) | |
| // New helper function to reduce duplicate error handling. | |
| func handleError(t *testing.T, err error, operation string) { | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to %s: %v, status: %s", operation, err, st.String()) | |
| } else { | |
| t.Errorf("failed to %s: %v", operation, err) | |
| } | |
| } | |
| } | |
| // New helper function to extract timestamp handling logic. | |
| // Note: Adjust the type or parameter as needed to match your config structures. | |
| func getTimestamp(cfg *Config, defaultTimestamp int64) int64 { | |
| if cfg.Timestamp != 0 { | |
| return cfg.Timestamp | |
| } | |
| return defaultTimestamp | |
| } | |
| func TestE2EUnaryCRUD(t *testing.T) { | |
| // Ensure cleanup is performed regardless of test outcome. | |
| t.Cleanup(func() { | |
| _, err := client.Flush(ctx, &payload.Flush_Request{}) | |
| if err != nil { | |
| t.Logf("cleanup failed: %v", err) | |
| } | |
| }) | |
| timestamp := time.Now().UnixNano() | |
| { | |
| res, err := client.IndexProperty(ctx, &payload.Empty{}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get IndexProperty %v status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get IndexProperty %v", err) | |
| } | |
| } | |
| t.Logf("IndexProperty: %v", res.String()) | |
| } | |
| var eg errgroup.Group | |
| if cfg.Insert.Num != 0 { | |
| eg, _ := errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Insert.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Insert.Offset:cfg.Insert.Offset+cfg.Insert.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Insert.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Insert(ctx, &payload.Insert_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Insert_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to insert vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to insert vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s inserted to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| sleep(t, cfg.Index.WaitAfterInsert) | |
| } | |
| indexStatus(t, ctx) | |
| if cfg.Search.Num != 0 { | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Search.Concurrency)) | |
| for i, vec := range ds.Test[cfg.Search.Offset:cfg.Search.Offset+cfg.Search.Num] { | |
| for _, query := range cfg.Search.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.Search(ctx, &payload.Search_Request{ | |
| Vector: vec, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| } | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.SearchByID.Concurrency)) | |
| for i, vec := range ds.Train[cfg.SearchByID.Offset:cfg.SearchByID.Offset+cfg.SearchByID.Num] { | |
| for _, query := range cfg.SearchByID.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.SearchByID(ctx, &payload.Search_IDRequest{ | |
| Id: id, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.LinearSearch.Concurrency)) | |
| for i, vec := range ds.Test[cfg.LinearSearch.Offset:cfg.LinearSearch.Offset+cfg.LinearSearch.Num] { | |
| for _, query := range cfg.LinearSearch.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.LinearSearch(ctx, &payload.Search_Request{ | |
| Vector: vec, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.LinearSearchByID.Concurrency)) | |
| for i, vec := range ds.Train[cfg.LinearSearchByID.Offset:cfg.LinearSearchByID.Offset+cfg.LinearSearchByID.Num] { | |
| for _, query := range cfg.LinearSearchByID.Queries { | |
| id := strconv.Itoa(i) | |
| rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)] | |
| eg.Go(safety.RecoverFunc(func() error { | |
| var ratio *wrapperspb.FloatValue | |
| if query.Ratio != 0 { | |
| ratio = wrapperspb.Float(query.Ratio) | |
| } else { | |
| ratio = nil | |
| } | |
| res, err := client.LinearSearchByID(ctx, &payload.Search_IDRequest{ | |
| Id: id, | |
| Config: &payload.Search_Config{ | |
| RequestId: rid, | |
| Num: query.K, | |
| Radius: query.Radius, | |
| Epsilon: query.Epsilon, | |
| Timeout: query.Timeout.Nanoseconds(), | |
| AggregationAlgorithm: query.Algorithm, | |
| MinNum: query.MinNum, | |
| Ratio: ratio, | |
| Nprobe: query.Nprobe, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to search vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to search vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s searched recall: %f, payload %s", vec, rid, calculateRecall(t, res, i), res.String()) | |
| return nil | |
| })) | |
| } | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Object.Concurrency)) | |
| for i := range ds.Train[cfg.Object.Offset:cfg.Object.Offset+cfg.Object.Num] { | |
| id := strconv.Itoa(i) | |
| eg.Go(safety.RecoverFunc(func() error { | |
| obj, err := client.GetObject(ctx, &payload.Object_VectorRequest{ | |
| Id: &payload.Object_ID{Id: id}, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get object: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get object: %v", err) | |
| } | |
| } | |
| t.Logf("id %s got object: %v", id, obj.String()) | |
| exists, err := client.Exists(ctx, &payload.Object_ID{Id: id}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to check object exists: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to check object exitst: %v", err) | |
| } | |
| } | |
| t.Logf("id %s exists: %v", id, exists.String()) | |
| res, err := client.GetTimestamp(ctx, &payload.Object_TimestampRequest{ | |
| Id: &payload.Object_ID{Id: id}, | |
| }) | |
| if err != nil { | |
| t.Errorf("failed to get timestamp: %v", err) | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to get object timestamp: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to get object timestamp: %v", err) | |
| } | |
| } | |
| t.Logf("id %s got timestamp: %v", id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Update.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Update.Offset:cfg.Update.Offset+cfg.Update.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Update.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Update(ctx, &payload.Update_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Update_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Update.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to update vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to update vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s updated to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Remove.Concurrency)) | |
| for i := range ds.Train[cfg.Remove.Offset:cfg.Remove.Offset+cfg.Remove.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Remove.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Remove(ctx, &payload.Remove_Request{ | |
| Id: &payload.Object_ID{Id: id}, | |
| Config: &payload.Remove_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Remove.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to remove vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to remove vector: %v", err) | |
| } | |
| } | |
| t.Logf("id %s'd vector removed to %s", id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| eg, _ = errgroup.New(ctx) | |
| eg.SetLimit(int(cfg.Upsert.Concurrency)) | |
| for i, vec := range ds.Train[cfg.Upsert.Offset:cfg.Upsert.Offset+cfg.Upsert.Num] { | |
| id := strconv.Itoa(i) | |
| ts := cfg.Upsert.Timestamp | |
| if ts == 0 { | |
| ts = timestamp | |
| } | |
| eg.Go(safety.RecoverFunc(func() error { | |
| res, err := client.Upsert(ctx, &payload.Upsert_Request{ | |
| Vector: &payload.Object_Vector{ | |
| Id: id, | |
| Vector: vec, | |
| Timestamp: ts, | |
| }, | |
| Config: &payload.Upsert_Config{ | |
| Timestamp: ts, | |
| SkipStrictExistCheck: cfg.Upsert.SkipStrictExistCheck, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to upsert vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to upsert vector: %v", err) | |
| } | |
| } | |
| t.Logf("vector %v id %s upserted to %s", vec, id, res.String()) | |
| return nil | |
| })) | |
| } | |
| eg.Wait() | |
| { | |
| rts := time.Now().Add(-time.Hour).UnixNano() | |
| res, err := client.RemoveByTimestamp(ctx, &payload.Remove_TimestampRequest{ | |
| Timestamps: []*payload.Remove_Timestamp{ | |
| { | |
| Timestamp: rts, | |
| Operator: payload.Remove_Timestamp_Le, | |
| }, | |
| }, | |
| }) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to remove by timestamp vector: %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to remove by timestamp vector: %v", err) | |
| } | |
| } | |
| t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String()) | |
| } | |
| { | |
| res, err := client.Flush(ctx, &payload.Flush_Request{}) | |
| if err != nil { | |
| st, ok := status.FromError(err) | |
| if ok && st != nil { | |
| t.Errorf("failed to flush %v, status: %s", err, st.String()) | |
| } else { | |
| t.Errorf("failed to flush %v", err) | |
| } | |
| } | |
| t.Logf("flushed %s", res.String()) | |
| } | |
| indexStatus(t, ctx) | |
| } |
Actionable comments posted: 2
🧹 Nitpick comments (5)
tests/v2/e2e/kubernetes/rollout.go (3)
39-48: Refactor to allow configurable resource name and namespace.
Right now, the values "something" and "default" are hardcoded, which limits the reusability of this function. Consider making these parameters configurable, so this method can be used for different resources in various namespaces.
Here's an example demonstrating how to pass the resource name and namespace dynamically:
-func (c *client) RolloutRestartDeployment(wait bool) (err error) {
-	res := Deployment{
-		Name:      "something",
-		Namespace: "default",
-	}
+func (c *client) RolloutRestartDeployment(name, namespace string, wait bool) error {
+	res := Deployment{
+		Name:      name,
+		Namespace: namespace,
+	}
 	err = RolloutRestart(context.TODO(), c.clientset, res)
 	...
 }
78-104: Consider adding a timeout to avoid indefinite waiting.
Currently, the function will wait indefinitely for the rollout to complete, which can block the test suite if the resource never becomes ready. Incorporating a timeout (or leveraging ctx with a predefined deadline) would help avoid indefinitely waiting for a rollout that might not succeed.
Below is a sample illustration of using a context with timeout:
-func WaitForRestart[R metav1.Object, T ResourceType[R]](
-	ctx context.Context,
-	clientset *kubernetes.Clientset,
-	rt T,
-) error {
+func WaitForRestart[R metav1.Object, T ResourceType[R]](
+	ctx context.Context,
+	clientset *kubernetes.Clientset,
+	rt T,
+) error {
+	// Example: enforce a 5-minute overall timeout for the rollout wait
+	ctxWithTimeout, cancel := context.WithTimeout(ctx, 5*time.Minute)
+	defer cancel()
 	ticker := time.NewTicker(5 * time.Second)
 	defer ticker.Stop()
 	for {
 		select {
-		case <-ctx.Done():
+		case <-ctxWithTimeout.Done():
+			return ctxWithTimeout.Err()
 		case <-ticker.C:
 		...
 		}
 	}
 }
106-139: Extend checkRolloutStatus functionality or add handling for additional resource types.
This function supports only Deployment, StatefulSet, and DaemonSet. If you need to manage or verify other Kubernetes resources (e.g., ReplicaSet, Job, CronJob), consider extending this function or creating a generic mechanism to check rollouts.
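For illustration, one way the dispatch could grow — a sketch assuming checkRolloutStatus switches on the typed object; the rolloutComplete name and the ReplicaSet branch are assumptions, not the PR's code:
import (
	"errors"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rolloutComplete reports whether a supported resource has finished
// rolling out; unsupported types fail loudly instead of blocking forever.
func rolloutComplete(obj metav1.Object) (bool, error) {
	switch o := obj.(type) {
	case *appsv1.Deployment:
		if o.Spec.Replicas == nil {
			return false, errors.New("deployment has no desired replica count")
		}
		return o.Status.UpdatedReplicas == *o.Spec.Replicas &&
			o.Status.AvailableReplicas == *o.Spec.Replicas, nil
	case *appsv1.ReplicaSet:
		if o.Spec.Replicas == nil {
			return false, errors.New("replicaset has no desired replica count")
		}
		return o.Status.ReadyReplicas == *o.Spec.Replicas, nil
	default:
		return false, fmt.Errorf("unsupported resource type %T", obj)
	}
}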
tests/v2/e2e/kubernetes/client.go (2)
145-175: Consider handling "not found" during readiness checks.
If the Pod does not exist immediately, the function returns an error rather than waiting for it to appear. A loop retrying on IsNotFound could give the Pod time to be created.
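A minimal sketch of that retry loop, assuming a client-go typed clientset is in scope; waitForPod and the 2-second interval are illustrative:
import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPod tolerates transient NotFound errors until the Pod appears,
// the context is cancelled, or an unexpected error occurs.
func waitForPod(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		_, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			return nil // the Pod exists; readiness checks can proceed
		}
		if !apierrors.IsNotFound(err) {
			return err // anything other than NotFound fails fast
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}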
190-203: Verify job name collisions before creating a new job.
To avoid conflicts, you might check for an existing job of the same name or generate unique names for each test.
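One way to do that, sketched with an illustrative ensureJobName helper (not part of the PR):
import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureJobName returns base when no Job of that name exists, and
// otherwise derives a unique name with a timestamp suffix.
func ensureJobName(ctx context.Context, cs kubernetes.Interface, namespace, base string) (string, error) {
	_, err := cs.BatchV1().Jobs(namespace).Get(ctx, base, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return base, nil // the name is free
	}
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s-%d", base, time.Now().UnixNano()), nil
}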
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout_types.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
tests/v2/e2e/kubernetes/client.go (1)
38-71: Interface structure looks good.
The exposed methods cover essential Kubernetes operations for E2E tests.
tests/v2/e2e/kubernetes/rollout_types.go (1)
38-44: Interface conformance check is well-structured.
These assignments cleanly verify that each struct satisfies the ResourceType interface.
tests/v2/e2e/kubernetes/client.go
Outdated
| func (cli *client) DeletePod(ctx context.Context, namespace, name string) error { | ||
| cli.clientset.CoreV1().Pods( | ||
| namespace, | ||
| ).Delete(ctx, name, metav1.DeleteOptions{}) | ||
|
|
||
| return nil | ||
| } |
Handle the error from the DeletePod operation.
Currently, the request to delete a Pod does not capture any potential error, and the function always returns nil. Users may miss critical feedback if the deletion fails.
func (cli *client) DeletePod(ctx context.Context, namespace, name string) error {
- cli.clientset.CoreV1().Pods(
+ err := cli.clientset.CoreV1().Pods(
namespace,
- ).Delete(ctx, name, metav1.DeleteOptions{})
- return nil
+ ).Delete(ctx, name, metav1.DeleteOptions{})
+ return err
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| func (cli *client) DeletePod(ctx context.Context, namespace, name string) error { | |
| cli.clientset.CoreV1().Pods( | |
| namespace, | |
| ).Delete(ctx, name, metav1.DeleteOptions{}) | |
| return nil | |
| } | |
| func (cli *client) DeletePod(ctx context.Context, namespace, name string) error { | |
| err := cli.clientset.CoreV1().Pods( | |
| namespace, | |
| ).Delete(ctx, name, metav1.DeleteOptions{}) | |
| return err | |
| } |
| type DaemonSet struct { | ||
| Name string | ||
| NameSpace string | ||
| } |
Standardize field naming for DaemonSet.
The field NameSpace is inconsistent with Namespace used elsewhere. This can cause confusion and potential bugs.
type DaemonSet struct {
Name string
- NameSpace string
+ Namespace string
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| type DaemonSet struct { | |
| Name string | |
| NameSpace string | |
| } | |
| type DaemonSet struct { | |
| Name string | |
| Namespace string | |
| } |
Actionable comments posted: 0
🧹 Nitpick comments (7)
tests/v2/e2e/kubernetes/rollout.go (3)
39-43: Parameterize the resource name instead of hardcoding "something".
Currently, the resource name is set to "something," which can reduce reuse and make testing tied to a single deployment name. Consider passing the resource name as a parameter for greater flexibility.
Here is a sample diff to illustrate:
-func (c *client) RolloutRestartDeployment(ctx context.Context, wait bool) (err error) {
-	res := Deployment{
-		Name:      "something",
-		Namespace: "default",
-	}
+func (c *client) RolloutRestartDeployment(ctx context.Context, name string, wait bool) (err error) {
+	res := Deployment{
+		Name:      name,
+		Namespace: "default",
+	}
 	err = RolloutRestart(ctx, c.clientset, res)
 	...
 }
98-99: Consider making the polling interval configurable.
The WaitForRestart function uses a hardcoded 5-second ticker interval. Allowing the interval to be passed as a parameter or set from configuration would provide more control and flexibility in different testing environments.
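One possible shape, sketched as a generic pollUntil helper rather than the PR's WaitForRestart signature:
import (
	"context"
	"time"
)

// pollUntil runs check every interval until it reports done, the check
// errors, or the context is cancelled, so callers pick their own cadence.
func pollUntil(ctx context.Context, interval time.Duration, check func(context.Context) (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			done, err := check(ctx)
			if err != nil {
				return err
			}
			if done {
				return nil
			}
		}
	}
}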
113-113: Use logging rather than printing to stdout.
fmt.Println(info) is helpful during development, but in a production or CI environment a structured logger or testing-framework logger (e.g., t.Log()) is usually more appropriate.

tests/v2/e2e/kubernetes/resources.go (2)
60-66: Document or improve the generic list conversion function.
listConvert converts a slice of items into a slice of pointers, which can be efficient but might be confusing to maintain. Adding comments or exploring direct iteration over the original slice may enhance readability.
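For reference, a documented sketch of such a generic conversion, assuming the current function takes []T and returns []*T:

// listConvert converts []T into []*T. Each pointer refers to an element of the
// input slice, so mutations made through the returned pointers are visible to
// the caller; no items are copied.
func listConvert[T any](items []T) []*T {
	out := make([]*T, 0, len(items))
	for i := range items {
		out = append(out, &items[i])
	}
	return out
}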
101-106: Reduce repetitive watch logic across resource structs.
Each resource type has nearly identical Watch methods. Consider extracting shared logic into a helper function that accepts the namespace, label selector, field selector, and resource-specific Watch call to avoid repetition.
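A sketch of the kind of shared helper this could become; the typed Watch call stays per-resource while the option plumbing is centralized (the helper name is an assumption):

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
)

// watchWith builds the ListOptions once and delegates to the resource-specific call.
func watchWith(
	ctx context.Context,
	labelSelector, fieldSelector string,
	watchFn func(context.Context, metav1.ListOptions) (watch.Interface, error),
) (watch.Interface, error) {
	return watchFn(ctx, metav1.ListOptions{
		LabelSelector: labelSelector,
		FieldSelector: fieldSelector,
	})
}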
tests/v2/e2e/kubernetes/client.go (2)

55-79: Fall back to in-cluster config if no kubeconfig is found.
If no kubeconfig file is present and the code is running inside a Kubernetes cluster, consider calling rest.InClusterConfig() for a more robust approach.
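A minimal sketch of the fallback (the function name and kubeConfigPath parameter are assumptions):

import (
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func loadRestConfig(kubeConfigPath string) (*rest.Config, error) {
	if kubeConfigPath != "" {
		if _, err := os.Stat(kubeConfigPath); err == nil {
			return clientcmd.BuildConfigFromFlags("", kubeConfigPath)
		}
	}
	// No kubeconfig on disk: assume the test runs inside the cluster.
	return rest.InClusterConfig()
}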
87-117: Improve visibility when a pod fails to become ready.
The current logic returns an error if the pod is not ready within the timeout, but it does not provide detailed failure reasons (e.g., CrashLoopBackOff). Logging or returning additional information about unready conditions can help diagnose issues quickly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Cloudflare Pages
Actionable comments posted: 0
🔭 Outside diff range comments (1)
internal/net/grpc/errdetails/errdetails.go (1)
306-324: 🛠️ Refactor suggestion
Revisit JSON structure and base64 usage in Serialize. (lines 316, 318, 320, 323)
- Returning a single object if there's only one detail vs. a Details container when multiple exist can result in inconsistent JSON shapes. Consider consistently returning an array or wrapping in a top-level container for uniformity.
- Using conv.Btoa(b) yields a base64-encoded result. If the goal is to produce JSON, this may increase complexity when parsing. Verify whether returning raw JSON (e.g., string(b)) would be simpler for consumers.
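As an illustration, one way to keep the shape stable is to always marshal through the container, whatever the detail count. A sketch only, reusing the Detail/Details types discussed below; the function name is hypothetical:

import (
	"encoding/json"
	"fmt"
)

// serializeDetails always wraps in the container so callers see one JSON shape,
// and returns raw JSON rather than a base64-encoded string.
func serializeDetails(details []Detail) string {
	b, err := json.Marshal(Details{Details: details})
	if err != nil {
		return fmt.Sprint(details) // fallback mirroring the current fmt.Sprint path
	}
	return string(b)
}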
🧹 Nitpick comments (2)
internal/net/grpc/errdetails/errdetails.go (2)
81-83: Consider renaming fields to reduce confusion.
The struct Details has a field also called Details. This might be a bit confusing for maintainers. Renaming the field (e.g., Items) or the struct (e.g., DetailList) could improve clarity.
90-174: Assess potential for large or sensitive data in logs. (lines 94, 99, 104, 109, 114, 119, 124, 129, 134, 139, 144, 149, 155, 160, 165, 167, 168, 169, 170, 171, 173)
- Robust fallback with protojson.Marshal(d.Message) ensures unknown types are still handled.
- Logging the entire object can risk large logs or potential sensitive-data leaks. Consider sanitizing or limiting log output if d.Message might contain confidential fields.
- The approach is comprehensive, covering all known types. Good implementation detail.
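A small sketch of one mitigation: cap what gets logged so oversized or sensitive payloads cannot dominate the logs. The helper name and the 1 KiB limit are arbitrary assumptions:

// logDetail emits at most maxLogBytes of the marshaled message.
func logDetail(d *Detail) {
	body, err := protojson.Marshal(d.Message)
	if err != nil {
		return
	}
	const maxLogBytes = 1024 // arbitrary cap
	if len(body) > maxLogBytes {
		body = append(body[:maxLogBytes], "...(truncated)"...)
	}
	log.Debug(string(body))
}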
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/net/grpc/errdetails/errdetails.go (5 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/v2/e2e/kubernetes/client.go
🔇 Additional comments (2)
internal/net/grpc/errdetails/errdetails.go (2)
26-26: No major concerns with the new imports.
These additions for conv and protojson look straightforward and don't introduce issues. Also applies to: 36-36
326-428: Check logging level for unknown types in AnyToErrorDetail. (lines 331, 417-422, 425)
- Attempting a.UnmarshalNew() for unknown types is an extensible approach.
- Logging unknown types at WARN level might be too aggressive if encountering unknown types is normal; consider reducing the severity to INFO or DEBUG.
- Confirm the safety of logging the entire object in case of sensitive data.
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2836      +/-   ##
==========================================
- Coverage   16.93%   16.91%   -0.02%
==========================================
  Files         136      136
  Lines       15057    15057
==========================================
- Hits         2550     2547       -3
- Misses      12211    12213       +2
- Partials      296      297       +1

☔ View full report in Codecov by Sentry.
Actionable comments posted: 7
♻️ Duplicate comments (5)
tests/v2/e2e/crud/stream_test.go (1)
432-432: ⚠️ Potential issue
Resolve suspicious slice range when removing vectors.
The slice range is using the wrong offset and number values, referencing cfg.Update instead of cfg.Remove.

- datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+ datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]

tests/v2/e2e/crud/unary_test.go (3)
449-753: 🛠️ Refactor suggestion
Refactor test cases using subtests.
The test cases should be organized using t.Run() for better clarity and independent execution.

 func TestE2EUnarySkipStrictExistsCheckCRUD(t *testing.T) {
 	timestamp := time.Now().UnixNano()
 	{
 		// IndexProperty call
 	}
-	t.Log("starting test #1 run Update with SkipStrictExistCheck=true and check that it fails.")
-	eg, _ := errgroup.New(ctx)
-	// Test case implementation
+	t.Run("UpdateWithSkipStrictExistCheckTrue", func(t *testing.T) {
+		t.Log("Test #1: Run Update with SkipStrictExistCheck=true and check that it fails.")
+		eg, _ := errgroup.New(ctx)
+		// Test case implementation
+	})

-	t.Log("starting test #2 run Update with SkipStrictExistCheck=false, and check that the internal Remove Operation returns a NotFound error.")
+	t.Run("UpdateWithSkipStrictExistCheckFalse", func(t *testing.T) {
+		t.Log("Test #2: Run Update with SkipStrictExistCheck=false, and check that the internal Remove Operation returns a NotFound error.")
+		// Test case implementation
+	})

 	// Continue with other test cases
634-634: 🛠️ Refactor suggestion
Fix misleading log message about error type.
The log message incorrectly states "NotFound" when the actual error code being tested is "AlreadyExists".

- t.Logf("received a NotFound error on #5: %s", err.Error())
+ t.Logf("received an AlreadyExists error on #5: %s", err.Error())
49-49: ⚠️ Potential issue
Check for nil response before calling String().
If IndexProperty returns an error, res might be nil, which would cause a nil pointer dereference when calling String().

 if err != nil {
 	st, ok := status.FromError(err)
 	if ok && st != nil {
 		t.Errorf("failed to get IndexProperty %v status: %s", err, st.String())
 	} else {
 		t.Errorf("failed to get IndexProperty %v", err)
 	}
+	return
 }
-t.Logf("IndexProperty: %v", res.String())
+if res != nil {
+	t.Logf("IndexProperty: %v", res.String())
+}

tests/v2/e2e/crud/crud_test.go (1)
68-71: ⚠️ Potential issue
Validate dataset size before slicing neighbor data.
When calling ds.Neighbors[idx][:len(topKIDs)], there's a potential out-of-bounds slice if ds.Neighbors[idx] is shorter than len(topKIDs). Consider adding a length check to avoid a runtime panic.

if len(ds.Neighbors[idx]) < len(topKIDs) {
	t.Errorf("neighbors slice (%d) is shorter than resultIDs slice (%d)", len(ds.Neighbors[idx]), len(topKIDs))
	return 0
}
rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
🧹 Nitpick comments (17)
tests/v2/e2e/crud/stream_test.go (7)
66-68: Fix condition for vector data range check.
The condition if len(datas) < idx { checks whether the index is out of range, but it should be if idx >= len(datas) { for accurate bounds checking.

- if len(datas) < idx {
+ if idx >= len(datas) {
  	return nil
  }
112-116: Fix nested loop boundary check logic.
The conditions for the range checks should be reversed. Currently, the code checks len(datas) < idx and then len(cfg.Search.Queries) < qidx, but it should check whether the index equals or exceeds the length.

- if len(datas) < idx {
+ if idx >= len(datas) {
  	return nil
  }
- if len(cfg.Search.Queries) < qidx {
+ if qidx >= len(cfg.Search.Queries) {
  	qidx = 0
  	idx++
  }
162-163: Fix error message for StreamSearch operation.
The error message incorrectly references "insert stream" when it should refer to "search stream".

- t.Errorf("failed to complete insert stream %v", err)
+ t.Errorf("failed to complete search stream %v", err)

223-224: Fix error message for StreamSearchByID operation.
The error message incorrectly references "insert stream" when it should refer to "search by ID stream".

- t.Errorf("failed to complete insert stream %v", err)
+ t.Errorf("failed to complete search by ID stream %v", err)

284-285: Fix error message for StreamLinearSearch operation.
The error message incorrectly references "insert stream" when it should refer to "linear search stream".

- t.Errorf("failed to complete insert stream %v", err)
+ t.Errorf("failed to complete linear search stream %v", err)

345-346: Fix error message for StreamLinearSearchByID operation.
The error message incorrectly references "insert stream" when it should refer to "linear search by ID stream".

- t.Errorf("failed to complete insert stream %v", err)
+ t.Errorf("failed to complete linear search by ID stream %v", err)
35-541: Add clean-up mechanism to ensure resources are properly released.
The test doesn't have a cleanup mechanism to ensure resources are released if the test fails in the middle. Consider using t.Cleanup() to register cleanup handlers.

func TestE2EStreamCRUD(t *testing.T) {
	t.Cleanup(func() {
		_, err := client.Flush(ctx, &payload.Flush_Request{})
		if err != nil {
			t.Logf("cleanup failed: %v", err)
		}
	})
	// Rest of the test...
}

internal/params/params.go (3)
45-59: Add documentation for new fields in the parser struct.
The newly added fields overrideDefault, f, and defaults lack proper documentation explaining their purpose and usage. Add documentation for the new fields:

 type parser struct {
+	// overrideDefault determines whether to override the default command line flags
 	overrideDefault bool
+	// f is the flag set used for parsing command line arguments
 	f *flag.FlagSet
+	// defaults stores the original command line flags for restoration
 	defaults *flag.FlagSet
 	filePath struct {
 		keys        []string
 		defaultPath string
 		description string
 	}
 	// ...
112-116: Add documentation for the Restore method.
The Restore method lacks documentation explaining its purpose and when it should be used.

+// Restore resets the command line flags to the original defaults.
 func (p *parser) Restore() {
 	if p.defaults != nil {
 		flag.CommandLine = p.defaults
 	}
 }
118-122: Add documentation for the Override method.
The Override method lacks documentation explaining its purpose and when it should be used.

+// Override sets the command line flags to the new flag set f.
 func (p *parser) Override() {
 	if p.f != nil {
 		flag.CommandLine = p.f
 	}
 }

tests/v2/e2e/kubernetes/client.go (2)
114-118: Add error handling to the Portforward method.
The Portforward method doesn't handle any errors that might occur during the creation of the port-forwarding connection.

 func (cli *client) Portforward(
 	namespace, podName string, localPort, podPort int,
-) *portforward.Portforward {
-	return portforward.New(cli.rest, namespace, podName, localPort, podPort)
+) (*portforward.Portforward, error) {
+	pf := portforward.New(cli.rest, namespace, podName, localPort, podPort)
+	if pf == nil {
+		return nil, errors.New("failed to create port forward")
+	}
+	return pf, nil
 }

This would require updating the interface as well:

 type Client interface {
 	Portforward(
 		namespace, podName string, localPort, podPort int,
-	) *portforward.Portforward
+	) (*portforward.Portforward, error)
 	// ...
 }
39-49: Add comprehensive documentation to the Client interface.
The Client interface lacks detailed documentation explaining its purpose and the functionality it provides.

+// Client defines the interface for Kubernetes client operations used in E2E testing.
+// It provides methods for port forwarding and waiting for pod readiness.
 type Client interface {
+	// Portforward creates a port forward connection to a pod.
+	// It takes the namespace, pod name, local port, and pod port as parameters
+	// and returns a Portforward instance.
 	Portforward(
 		namespace, podName string, localPort, podPort int,
 	) *portforward.Portforward
+	// WaitForPodReady waits for a pod to be ready within the specified timeout.
+	// It returns true if the pod is ready, false otherwise, and any error encountered.
 	WaitForPodReady(
 		ctx context.Context,
 		namespace, name string,
 		timeout time.Duration,
 	) (ok bool, err error)
 }

tests/v2/e2e/crud/unary_test.go (2)
36-447: Add proper test cleanup mechanism.
The test doesn't ensure proper cleanup if it fails in the middle. Add a cleanup function using t.Cleanup() to ensure resources are released.

 func TestE2EUnaryCRUD(t *testing.T) {
+	t.Cleanup(func() {
+		_, err := client.Flush(ctx, &payload.Flush_Request{})
+		if err != nil {
+			t.Logf("cleanup failed: %v", err)
+		}
+	})
 	// Rest of the test...
 }
36-753: Extract common timestamp handling logic.
The timestamp handling pattern is repeated throughout the code. Consider extracting it into a helper function.

func getEffectiveTimestamp(configTimestamp, defaultTimestamp int64) int64 {
	if configTimestamp != 0 {
		return configTimestamp
	}
	return defaultTimestamp
}

Then use it throughout the code:

-ts := cfg.Insert.Timestamp
-if ts == 0 {
-	ts = timestamp
-}
+ts := getEffectiveTimestamp(cfg.Insert.Timestamp, timestamp)

tests/v2/e2e/crud/crud_test.go (1)
124-125: Propagate test exit code in TestMain.
m.Run() returns an exit code, but it's not being passed to os.Exit(). To ensure the correct exit code is returned when tests fail, capture and pass the code to os.Exit().

-m.Run()
+exitCode := m.Run()
+os.Exit(exitCode)

tests/v2/e2e/kubernetes/rollout.go (1)
113-134: Implement a timeout or limit for WaitForRestart.
Currently, WaitForRestart uses a ticker and returns only if the context is canceled or the resource is ready. Adding a timeout within the context or a maximum retry limit helps prevent indefinite hangs if the resource never becomes ready.
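For instance, bounding the wait with a derived context would surface a clear error instead of hanging. A sketch only; the timeout value and the exact call signature are assumptions:

wctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
defer cancel()
if err := WaitForRestart(wctx, clientset, res); err != nil {
	return err // includes context.DeadlineExceeded once the budget is spent
}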
tests/v2/e2e/kubernetes/resources.go (1)

188-195: Consolidate repetitive validity checks for b or b.Client.
Each CRUD operation checks if b == nil || b.Client == nil { ... }. Extracting this pattern into a small helper method could reduce duplication and improve readability. Also applies to: 197-204, 206-213, 215-222, 224-231, 233-240, 242-249, 251-258, 260-267, 269-276
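A small sketch of the shared guard; the receiver type name is hypothetical, and each CRUD method would call it first:

func (b *Base) check() error {
	if b == nil || b.Client == nil {
		return errors.New("kubernetes client is not initialized")
	}
	return nil
}

Each operation then starts with if err := b.check(); err != nil { return nil, err } instead of repeating the nil checks inline.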
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (35)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (5 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (3 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (23)
- .github/workflows/reviewdog.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/update-deps.yaml
- internal/params/option.go
- .github/workflows/test-hack.yaml
- .github/workflows/helm.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/unit-test.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/chatops.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/fossa.yaml
- .github/workflows/e2e.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/coverage.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/e2e-chaos.yaml
- .github/workflows/format.yaml
⏰ Context from checks skipped due to timeout of 90000ms (173)
- GitHub Check: build / build
🔇 Additional comments (13)
internal/errors/errors.go (1)
86-88: Good addition of a standard error handler for unimplemented features.
The new ErrUnimplemented function follows the established pattern in this file: it is consistent with the other error generator functions and properly formatted. This will be useful for providing clear error messages when features are not yet implemented.

internal/net/grpc/errdetails/errdetails.go (4)
81-83: Good addition of the Details type.
This new type provides a proper container for multiple Detail objects, which improves the structured representation of error details for serialization.
94-172: Improved handling of unknown message types in MarshalJSON.
The enhancement to use protojson.Marshal for types that don't match predefined message names improves flexibility in JSON serialization.
316-323: Validate error handling result in Serialize.
Currently, if marshaling Details fails, the function falls back to fmt.Sprint(objs...), which may mask the underlying error cause. Consider logging or returning the error more explicitly if possible.
417-422: Good addition of default handling for unknown types.
The new default case that attempts to unmarshal any unknown types improves the robustness of the AnyToErrorDetail function when handling various message types.

internal/net/http/client/client.go (2)
36-38: Good refactoring of New function.
The function now delegates to NewWithTransport with the default transport, which improves code reuse and maintains backward compatibility.
40-46: Well-designed new function for custom transport initialization.
The NewWithTransport function enhances flexibility by allowing clients to provide custom transport configurations. The implementation correctly clones the transport to avoid modifying the original.

tests/v2/e2e/crud/multi_test.go (2)
36-646: Refactor test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases.
Here are some suggested improvements:
- Extract helper functions for common operations:
func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}
- Split the test into smaller test cases:
func TestE2EMultiInsert(t *testing.T)       { /* ... */ }
func TestE2EMultiSearch(t *testing.T)       { /* ... */ }
func TestE2EMultiSearchByID(t *testing.T)   { /* ... */ }
func TestE2EMultiLinearSearch(t *testing.T) { /* ... */ }
func TestE2EMultiUpdate(t *testing.T)       { /* ... */ }
func TestE2EMultiRemove(t *testing.T)       { /* ... */ }
func TestE2EMultiUpsert(t *testing.T)       { /* ... */ }
423-431: Avoid potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil. Here's an example fix:

-	if err != nil {
-		t.Errorf("failed to get object: %v", err)
-	}
-	t.Logf("id %s got object: %v", id, obj.String())
+	if err != nil {
+		t.Errorf("failed to get object: %v", err)
+		return nil
+	}
+	if obj != nil {
+		t.Logf("id %s got object: %v", id, obj.String())
+	} else {
+		t.Errorf("failed to get object: object is nil")
+		return nil
+	}

internal/params/params.go (1)
69-71: Consider adding a check to verify if p.f is initialized before calling Override.
The Override method is called immediately after initializing p.f, but there's no check to ensure p.f is not nil when Override is called.

 p.f = flag.NewFlagSet(filepath.Base(os.Args[0]), flag.ContinueOnError)
-if p.overrideDefault {
+if p.overrideDefault && p.f != nil {
 	p.Override()
 }

tests/v2/e2e/config/config.go (3)
1-21: Overall structure and licensing look good.
The initial section, including the license and package declaration, appears consistent and adheres to standard practices.
110-113: Fix YAML tag and doc mismatch for LocalPort.
You are reusing yaml:"pod_port" for both PodPort and LocalPort, which can cause confusion or incorrect parsing since they share the same key in YAML. Moreover, the doc states that LocalPort defaults to PodPort if unset, but your code sets it to a constant value instead. Apply the following diffs:

-	PodPort   uint16 `yaml:"pod_port"` // The port number on the pod.
-	LocalPort uint16 `yaml:"pod_port"` // The local port number; if not set, it defaults to PodPort.
+	PodPort   uint16 `yaml:"pod_port"`   // The port number on the pod.
+	LocalPort uint16 `yaml:"local_port"` // The local port number; if not set, it defaults to PodPort.

-	if pf.LocalPort == 0 {
-		pf.LocalPort = localPort
-	}
+	if pf.LocalPort == 0 {
+		pf.LocalPort = pf.PodPort
+	}
466-466: Avoid logging entire configuration at debug level.
This was previously flagged to prevent potential leakage of sensitive data. Consider removing or sanitizing the raw YAML dump.

- log.Debug(config.ToRawYaml(cfg))
+ // log.Debugf("Configuration loaded successfully") // or remove entirely
func TestE2EStreamCRUD(t *testing.T) {
	timestamp := time.Now().UnixNano()

	{
		res, err := client.IndexProperty(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexProperty %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexProperty %v", err)
			}
		}
		t.Logf("IndexProperty: %v", res.String())
	}

	var (
		stream grpc.ClientStream
		err    error
	)
	stream, err = client.StreamInsert(ctx)
	if err != nil {
		t.Error(err)
	}
	var idx int
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	datas := ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Insert_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		vec := datas[idx]
		idx++
		return &payload.Insert_Request{
			Vector: &payload.Object_Vector{
				Id:        id,
				Vector:    vec,
				Timestamp: ts,
			},
			Config: &payload.Insert_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
			},
		}
	}, func(res *payload.Object_Location, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to insert vector: %v", err)
			}
		}
		t.Logf("vector id %s inserted to %s", res.GetUuid(), res.String())
		return true
	})
	if err != nil {
		t.Errorf("failed to complete insert stream %v", err)
	}

	sleep(t, cfg.Index.WaitAfterInsert)

	indexStatus(t, ctx)

	stream, err = client.StreamSearch(ctx)
	if err != nil {
		t.Error(err)
	}
	qidx := 0
	idx = 0
	datas = ds.Test[cfg.Search.Offset : cfg.Search.Offset+cfg.Search.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Search_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		if len(cfg.Search.Queries) < qidx {
			qidx = 0
			idx++
		}
		query := cfg.Search.Queries[qidx]
		rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)]
		vec := datas[idx]
		qidx++
		var ratio *wrapperspb.FloatValue
		if query.Ratio != 0 {
			ratio = wrapperspb.Float(query.Ratio)
		} else {
			ratio = nil
		}
		return &payload.Search_Request{
			Vector: vec,
			Config: &payload.Search_Config{
				RequestId:            rid,
				Num:                  query.K,
				Radius:               query.Radius,
				Epsilon:              query.Epsilon,
				Timeout:              query.Timeout.Nanoseconds(),
				AggregationAlgorithm: query.Algorithm,
				MinNum:               query.MinNum,
				Ratio:                ratio,
				Nprobe:               query.Nprobe,
			},
		}
	}, func(res *payload.Search_Response, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to search vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to search vector: %v", err)
			}
		}
		id, _, _ := strings.Cut(res.GetRequestId(), "-")
		idx, _ := strconv.Atoi(id)
		t.Logf("request id %s searched recall: %f, payload %s", res.GetRequestId(), calculateRecall(t, &payload.Search_Response{
			RequestId: res.GetRequestId(),
			Results:   res.GetResults(),
		}, idx), res.String())

		return true
	})
	if err != nil {
		t.Errorf("failed to complete insert stream %v", err)
	}

	stream, err = client.StreamSearchByID(ctx)
	if err != nil {
		t.Error(err)
	}
	qidx = 0
	idx = 0
	datas = ds.Train[cfg.SearchByID.Offset : cfg.SearchByID.Offset+cfg.SearchByID.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Search_IDRequest {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		if len(cfg.Search.Queries) < qidx {
			qidx = 0
			idx++
		}
		query := cfg.Search.Queries[qidx]
		rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)]
		qidx++
		var ratio *wrapperspb.FloatValue
		if query.Ratio != 0 {
			ratio = wrapperspb.Float(query.Ratio)
		} else {
			ratio = nil
		}
		return &payload.Search_IDRequest{
			Id: id,
			Config: &payload.Search_Config{
				RequestId:            rid,
				Num:                  query.K,
				Radius:               query.Radius,
				Epsilon:              query.Epsilon,
				Timeout:              query.Timeout.Nanoseconds(),
				AggregationAlgorithm: query.Algorithm,
				MinNum:               query.MinNum,
				Ratio:                ratio,
				Nprobe:               query.Nprobe,
			},
		}
	}, func(res *payload.Search_Response, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to search vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to search vector: %v", err)
			}
		}
		id, _, _ := strings.Cut(res.GetRequestId(), "-")
		idx, _ := strconv.Atoi(id)
		t.Logf("request id %s searched recall: %f, payload %s", res.GetRequestId(), calculateRecall(t, &payload.Search_Response{
			RequestId: res.GetRequestId(),
			Results:   res.GetResults(),
		}, idx), res.String())

		return true
	})
	if err != nil {
		t.Errorf("failed to complete insert stream %v", err)
	}
	stream, err = client.StreamLinearSearch(ctx)
	if err != nil {
		t.Error(err)
	}
	qidx = 0
	idx = 0
	datas = ds.Test[cfg.LinearSearch.Offset : cfg.LinearSearch.Offset+cfg.LinearSearch.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Search_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		if len(cfg.LinearSearch.Queries) < qidx {
			qidx = 0
			idx++
		}
		query := cfg.LinearSearch.Queries[qidx]
		rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)]
		vec := datas[idx]
		qidx++
		var ratio *wrapperspb.FloatValue
		if query.Ratio != 0 {
			ratio = wrapperspb.Float(query.Ratio)
		} else {
			ratio = nil
		}
		return &payload.Search_Request{
			Vector: vec,
			Config: &payload.Search_Config{
				RequestId:            rid,
				Num:                  query.K,
				Radius:               query.Radius,
				Epsilon:              query.Epsilon,
				Timeout:              query.Timeout.Nanoseconds(),
				AggregationAlgorithm: query.Algorithm,
				MinNum:               query.MinNum,
				Ratio:                ratio,
				Nprobe:               query.Nprobe,
			},
		}
	}, func(res *payload.Search_Response, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to search vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to search vector: %v", err)
			}
		}
		id, _, _ := strings.Cut(res.GetRequestId(), "-")
		idx, _ := strconv.Atoi(id)
		t.Logf("request id %s searched recall: %f, payload %s", res.GetRequestId(), calculateRecall(t, &payload.Search_Response{
			RequestId: res.GetRequestId(),
			Results:   res.GetResults(),
		}, idx), res.String())

		return true
	})
	if err != nil {
		t.Errorf("failed to complete insert stream %v", err)
	}

	stream, err = client.StreamLinearSearchByID(ctx)
	if err != nil {
		t.Error(err)
	}
	qidx = 0
	idx = 0
	datas = ds.Train[cfg.LinearSearchByID.Offset : cfg.LinearSearchByID.Offset+cfg.LinearSearchByID.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Search_IDRequest {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		if len(cfg.LinearSearch.Queries) < qidx {
			qidx = 0
			idx++
		}
		query := cfg.LinearSearch.Queries[qidx]
		rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)]
		qidx++
		var ratio *wrapperspb.FloatValue
		if query.Ratio != 0 {
			ratio = wrapperspb.Float(query.Ratio)
		} else {
			ratio = nil
		}
		return &payload.Search_IDRequest{
			Id: id,
			Config: &payload.Search_Config{
				RequestId:            rid,
				Num:                  query.K,
				Radius:               query.Radius,
				Epsilon:              query.Epsilon,
				Timeout:              query.Timeout.Nanoseconds(),
				AggregationAlgorithm: query.Algorithm,
				MinNum:               query.MinNum,
				Ratio:                ratio,
				Nprobe:               query.Nprobe,
			},
		}
	}, func(res *payload.Search_Response, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to search vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to search vector: %v", err)
			}
		}
		id, _, _ := strings.Cut(res.GetRequestId(), "-")
		idx, _ := strconv.Atoi(id)
		t.Logf("request id %s searched recall: %f, payload %s", res.GetRequestId(), calculateRecall(t, &payload.Search_Response{
			RequestId: res.GetRequestId(),
			Results:   res.GetResults(),
		}, idx), res.String())

		return true
	})
	if err != nil {
		t.Errorf("failed to complete insert stream %v", err)
	}

	stream, err = client.StreamGetObject(ctx)
	if err != nil {
		t.Error(err)
	}
	idx = 0
	datas = ds.Train[cfg.Object.Offset : cfg.Object.Offset+cfg.Object.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Object_VectorRequest {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		idx++
		return &payload.Object_VectorRequest{
			Id: &payload.Object_ID{Id: id},
		}
	}, func(res *payload.Object_Vector, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to get vector: %v", err)
			}
		}
		t.Logf("vector id %s loaded %s", res.GetId(), res.String())
		return true
	})
	if err != nil {
		t.Errorf("failed to complete get object stream %v", err)
	}

	stream, err = client.StreamUpdate(ctx)
	if err != nil {
		t.Error(err)
	}
	ts = cfg.Update.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	idx = 0
	datas = ds.Train[cfg.Update.Offset : cfg.Update.Offset+cfg.Update.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Update_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		vec := datas[idx]
		idx++
		return &payload.Update_Request{
			Vector: &payload.Object_Vector{
				Id:        id,
				Vector:    vec,
				Timestamp: ts,
			},
			Config: &payload.Update_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Update.SkipStrictExistCheck,
			},
		}
	}, func(res *payload.Object_Location, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to update vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to update vector: %v", err)
			}
		}
		t.Logf("vector id %s updated to %s", res.GetUuid(), res.String())
		return true
	})
	if err != nil {
		t.Errorf("failed to complete update stream %v", err)
	}

	stream, err = client.StreamRemove(ctx)
	if err != nil {
		t.Error(err)
	}
	ts = cfg.Remove.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	idx = 0
	datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Remove_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		idx++
		return &payload.Remove_Request{
			Id: &payload.Object_ID{Id: id},
			Config: &payload.Remove_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Remove.SkipStrictExistCheck,
			},
		}
	}, func(res *payload.Object_Location, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to remove vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to remove vector: %v", err)
			}
		}
		t.Logf("vector id %s removed to %s", res.GetUuid(), res.String())
		return true
	})
	if err != nil {
		t.Errorf("failed to complete remove stream %v", err)
	}

	stream, err = client.StreamUpsert(ctx)
	if err != nil {
		t.Error(err)
	}
	ts = cfg.Upsert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	idx = 0
	datas = ds.Train[cfg.Upsert.Offset : cfg.Upsert.Offset+cfg.Upsert.Num]
	err = grpc.BidirectionalStreamClient(stream, func() *payload.Upsert_Request {
		id := strconv.Itoa(idx)
		if len(datas) < idx {
			return nil
		}
		vec := datas[idx]
		idx++
		return &payload.Upsert_Request{
			Vector: &payload.Object_Vector{
				Id:        id,
				Vector:    vec,
				Timestamp: ts,
			},
			Config: &payload.Upsert_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Upsert.SkipStrictExistCheck,
			},
		}
	}, func(res *payload.Object_Location, err error) bool {
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to upsert vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to upsert vector: %v", err)
			}
		}
		t.Logf("vector id %s upserted to %s", res.GetUuid(), res.String())
		return true
	})
	if err != nil {
		t.Errorf("failed to complete upsert stream %v", err)
	}

	{
		rts := time.Now().Add(-time.Hour).UnixNano()
		res, err := client.RemoveByTimestamp(ctx, &payload.Remove_TimestampRequest{
			Timestamps: []*payload.Remove_Timestamp{
				{
					Timestamp: rts,
					Operator:  payload.Remove_Timestamp_Le,
				},
			},
		})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to remove by timestamp vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to remove by timestamp vector: %v", err)
			}
		}
		t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String())
	}

	{
		res, err := client.Flush(ctx, &payload.Flush_Request{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to flush %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to flush %v", err)
			}
		}
		t.Logf("flushed %s", res.String())
	}

	indexStatus(t, ctx)
}
🛠️ Refactor suggestion
Extract common error handling into a utility function.
The error handling pattern is repeated throughout the test code. Consider extracting it into a helper function to reduce duplication.
func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}

Then use it throughout the code:
 if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
 }
tests/v2/e2e/kubernetes/client.go (Outdated)
pod := Pod{
	Name:      name,
	Namespace: namespace,
}
💡 Verification agent
🧩 Analysis chain
Missing declaration of the Pod type.
The code uses a Pod type at line 129, but this type is not defined in the file. It's likely defined in another file, but this dependency should be made clear.
Run the following script to locate the Pod type definition:
🏁 Script executed:
#!/bin/bash
# Find the definition of the Pod type used in the kubernetes client
rg -A 2 -B 2 "type Pod struct" --type go

Length of output: 907
Clarify and explicitly import the correct Pod dependency
The code in tests/v2/e2e/kubernetes/client.go instantiates a Pod using Name and Namespace fields. Our repository search confirmed that while there are several Pod definitions, the one in internal/k8s/metrics/pod/pod.go provides the expected fields. To avoid ambiguity and ensure maintainability, please update the file to explicitly import the package that defines this Pod—for example, by importing "internal/k8s/metrics/pod". This makes the external dependency clear and avoids potential confusion with other Pod definitions in the codebase.
func newClient(cfg *rest.Config) (Client, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
		c, ierr := inClusterConfigClient()
		if ierr == nil {
			return nil, errors.Join(err, ierr)
		}
		return c, nil
	}
🛠️ Refactor suggestion
Clarify error handling logic in newClient function.
The current error handling logic is confusing. If creating the clientset fails, it attempts to get an in-cluster client but returns errors inconsistently. It either returns a nil client and both errors or returns a non-nil client and nil error.
 func newClient(cfg *rest.Config) (Client, error) {
 	clientset, err := kubernetes.NewForConfig(cfg)
 	if err != nil {
 		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
 		c, ierr := inClusterConfigClient()
-		if ierr == nil {
-			return nil, errors.Join(err, ierr)
+		if ierr != nil {
+			return nil, errors.Join(err, ierr)
 		}
 		return c, nil
 	}
 	return &client{
 		rest:      cfg,
 		clientset: clientset,
 	}, nil
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

func newClient(cfg *rest.Config) (Client, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
		c, ierr := inClusterConfigClient()
		if ierr == nil {
			return nil, errors.Join(err, ierr)
		}
		return c, nil
	}

After:

func newClient(cfg *rest.Config) (Client, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
		c, ierr := inClusterConfigClient()
		if ierr != nil {
			return nil, errors.Join(err, ierr)
		}
		return c, nil
	}
	return &client{
		rest:      cfg,
		clientset: clientset,
	}, nil
}
eg, _ := errgroup.New(ctx)
eg.SetLimit(int(cfg.Insert.Concurrency))
for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
	id := strconv.Itoa(i)
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	eg.Go(safety.RecoverFunc(func() error {
		res, err := client.Insert(ctx, &payload.Insert_Request{
			Vector: &payload.Object_Vector{
				Id:        id,
				Vector:    vec,
				Timestamp: ts,
			},
			Config: &payload.Insert_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
			},
		})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to insert vector: %v", err)
			}
		}
		t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
		return nil
	}))
}
🛠️ Refactor suggestion
Fix closure capture in goroutines.
The goroutines capture loop variables directly, which can lead to unexpected behavior as the variables change in the loop. Copy the variables before using them in the closure.
 for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
 	id := strconv.Itoa(i)
 	ts := cfg.Insert.Timestamp
 	if ts == 0 {
 		ts = timestamp
 	}
+	idCopy, vecCopy, tsCopy := id, vec, ts
 	eg.Go(safety.RecoverFunc(func() error {
 		res, err := client.Insert(ctx, &payload.Insert_Request{
 			Vector: &payload.Object_Vector{
-				Id:        id,
-				Vector:    vec,
-				Timestamp: ts,
+				Id:        idCopy,
+				Vector:    vecCopy,
+				Timestamp: tsCopy,
 			},
 			Config: &payload.Insert_Config{
-				Timestamp:            ts,
+				Timestamp:            tsCopy,
 				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
 			},
 		})
 		// ...
-		t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
+		t.Logf("vector %v id %s inserted to %s", vecCopy, idCopy, res.String())
 		return nil
 	}))
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

eg, _ := errgroup.New(ctx)
eg.SetLimit(int(cfg.Insert.Concurrency))
for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
	id := strconv.Itoa(i)
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	eg.Go(safety.RecoverFunc(func() error {
		res, err := client.Insert(ctx, &payload.Insert_Request{
			Vector: &payload.Object_Vector{
				Id:        id,
				Vector:    vec,
				Timestamp: ts,
			},
			Config: &payload.Insert_Config{
				Timestamp:            ts,
				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
			},
		})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to insert vector: %v", err)
			}
		}
		t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
		return nil
	}))
}

After:

eg, _ := errgroup.New(ctx)
eg.SetLimit(int(cfg.Insert.Concurrency))
for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
	id := strconv.Itoa(i)
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	idCopy, vecCopy, tsCopy := id, vec, ts
	eg.Go(safety.RecoverFunc(func() error {
		res, err := client.Insert(ctx, &payload.Insert_Request{
			Vector: &payload.Object_Vector{
				Id:        idCopy,
				Vector:    vecCopy,
				Timestamp: tsCopy,
			},
			Config: &payload.Insert_Config{
				Timestamp:            tsCopy,
				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
			},
		})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
			} else {
				t.Errorf("failed to insert vector: %v", err)
			}
		}
		t.Logf("vector %v id %s inserted to %s", vecCopy, idCopy, res.String())
		return nil
	}))
}
func indexStatus(t *testing.T, ctx context.Context) {
	t.Helper()
	{
		res, err := client.IndexInfo(ctx, &payload.Empty{})
		if err != nil {
			st, ok := status.FromError(err)
			if ok && st != nil {
				t.Errorf("failed to get IndexInfo %v status: %s", err, st.String())
			} else {
				t.Errorf("failed to get IndexInfo %v", err)
			}
		}
		t.Logf("IndexInfo: %v", res.String())
🛠️ Refactor suggestion
Avoid nil-pointer dereferences when handling errors.
In each block, when an error occurs, the code still calls res.String() even if res might be nil. Consider exiting early or conditionally logging the response to prevent nil-pointer panics.
 res, err := client.IndexInfo(ctx, &payload.Empty{})
 if err != nil {
 	st, ok := status.FromError(err)
 	...
+	return
 }
 if res != nil {
 	t.Logf("IndexInfo: %v", res.String())
 }

Also applies to: 197-207, 209-219, 221-231
tests/v2/e2e/kubernetes/rollout.go (Outdated)
if deploy.Spec.Replicas != nil {
	desired = *deploy.Spec.Replica
Fix incorrect reference to Deployment replicas field.
The field should be deploy.Spec.Replicas rather than deploy.Spec.Replica, or this code will not compile.
 if deploy.Spec.Replicas != nil {
-	desired = *deploy.Spec.Replica
+	desired = *deploy.Spec.Replicas
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

if deploy.Spec.Replicas != nil {
	desired = *deploy.Spec.Replica

After:

if deploy.Spec.Replicas != nil {
	desired = *deploy.Spec.Replicas
}
sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
Prevent potential index out-of-range panic.
Referencing Default.Search.Queries[0] can panic if Default.Search.Queries is empty. Provide a safe fallback:
- sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ if len(Default.Search.Queries) > 0 {
+ 	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ } else {
+ 	sq.Timeout = time.Second * 3 // fallback if Queries is empty
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)

After:

if len(Default.Search.Queries) > 0 {
	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
} else {
	sq.Timeout = time.Second * 3 // fallback if Queries is empty
}
efff98c to b6a36ef (Compare)
Actionable comments posted: 6
♻️ Duplicate comments (5)
tests/v2/e2e/crud/multi_test.go (1)
423-431: ⚠️ Potential issue
Avoid potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic. Apply this fix:

 if err != nil {
 	t.Errorf("failed to get object: %v", err)
+	return nil
 }
-t.Logf("id %s got object: %v", id, obj.String())
+if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+} else {
+	t.Errorf("failed to get object: object is nil")
+	return nil
+}

tests/v2/e2e/crud/stream_test.go (1)
432-432: ⚠️ Potential issue
Fix incorrect slice range in remove operation.
The code is using cfg.Update fields instead of cfg.Remove fields for the slice range, leading to incorrect behavior.

-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]

tests/v2/e2e/crud/unary_test.go (2)
616-637:⚠️ Potential issueFix mismatch in logged error vs. actual condition.
At lines 630-634, the code checks for
codes.AlreadyExistsbut logs a "NotFound" message. Update the log message or the error condition to stay consistent. This discrepancy can confuse debugging.-if !ok || st == nil || st.Code() != codes.AlreadyExists { - t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err) -} -t.Logf("received a NotFound error on #5: %s", err.Error()) +if !ok || st == nil || st.Code() != codes.AlreadyExists { + t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err) +} +t.Logf("received an AlreadyExists error on #5: %s", err.Error())
56-85: 🛠️ Refactor suggestionAvoid capturing loop variables in concurrent goroutines.
The loop directly references
iandvecinsideeg.Go(...). If subsequent iterations overwrite their values before the goroutine runs, it can lead to unexpected results. Copy the loop variables before usage:for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] { - id := strconv.Itoa(i) - ... + iCopy, vecCopy := i, vec + id := strconv.Itoa(iCopy) eg.Go(safety.RecoverFunc(func() error { - res, err := client.Insert(ctx, ...) + res, err := client.Insert(ctx, &payload.Insert_Request{ + Vector: &payload.Object_Vector{ + Id: id, + Vector: vecCopy, + ... + }, + }) ... })) }tests/v2/e2e/crud/crud_test.go (1)
152-166:⚠️ Potential issuePrevent division by zero in
recall.If
neighborsis empty,float64(len(neighbors))is 0. Check for a non-empty slice before dividing or log a warning if it’s empty.+if len(neighbors) == 0 { + t.Errorf("neighbors is empty; cannot compute recall") + return 0 +} return recall / float64(len(neighbors))
🧹 Nitpick comments (12)
tests/v2/e2e/crud/stream_test.go (4)
160-163: Fix incorrect error message.The error message refers to "insert stream" but this is in the search stream section.
-t.Errorf("failed to complete insert stream %v", err) +t.Errorf("failed to complete search stream %v", err)
222-224: Fix incorrect error message.The error message refers to "insert stream" but this is in the search by ID stream section.
-t.Errorf("failed to complete insert stream %v", err) +t.Errorf("failed to complete search by ID stream %v", err)
283-285: Fix incorrect error message.The error message refers to "insert stream" but this is in the linear search stream section.
-t.Errorf("failed to complete insert stream %v", err) +t.Errorf("failed to complete linear search stream %v", err)
344-346: Fix incorrect error message.The error message refers to "insert stream" but this is in the linear search by ID stream section.
-t.Errorf("failed to complete insert stream %v", err) +t.Errorf("failed to complete linear search by ID stream %v", err)internal/params/params.go (5)
46-48: Add concurrency caveat to the new fields.Storing independent
FlagSetinstances and conditionally overriding the globalflag.CommandLineis powerful but can risk concurrency issues if multiple parsers exist in the same process. Document this behavior or ensure callers do not parse flags concurrently.
67-71: Consider deferring a restore for the global flag set.When
overrideDefaultis true,p.New()switchesflag.CommandLinetop.fwithout restoring it after usage. If the rest of the application also relies onflag.CommandLine, it could cause side effects. Provide a usage pattern (e.g.,defer p.Restore()) to avoid unintentional global overrides.
94-101: Handle special-case errors differently.The code checks
if !errors.Is(err, flag.ErrHelp), returningerrors.ErrArgumentParseFailed(err)for other errors. Consider distinguishing between parse errors that require usage instructions and hard failures that should abort the process, giving more context to users on what went wrong.
112-116: Restore may need concurrency guard.
Restore()revertsflag.CommandLineto the original. If multiple goroutines parse or override flags concurrently, you might inadvertently restore while another parser is in use. Consider documenting that this function is not concurrency-safe.
118-122: Ensure consistent usage withOverride().Overriding global flags is a non-trivial behavior that can cause confusion if other packages rely on
flag.CommandLine. Where possible, keep the usage local to tests or well-defined lifecycle phases, or add logging to show these transitions.tests/v2/e2e/crud/unary_test.go (1)
94-136: Provide sub-tests for searching logic.Replacing repeated search logic with sub-tests (
t.Run("search case description", func(t *testing.T) {...})) improves clarity and allows running each search scenario independently. This also makes test output more organized.tests/v2/e2e/kubernetes/resources.go (1)
278-300: Check ephemeral container support
Ephemeral container updates may not be supported on older Kubernetes clusters or require special feature flags. Ensure compatibility or handle errors gracefully if ephemeral containers are not available.tests/v2/e2e/kubernetes/status.go (1)
83-117: Consider adding a max timeout or failure threshold
TheWaitForStatusfunction can block indefinitely if the resource never transitions to the desired status. Consider adding a maximum duration or retry limit to reduce the risk of hanging tests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (36)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (5 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (3 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (26)
- .github/workflows/helm-lint.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/coverage.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/unit-test.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/helm.yaml
- .github/workflows/fossa.yaml
- internal/errors/errors.go
- .github/workflows/reviewdog.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/chatops.yaml
- internal/params/option.go
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/e2e.yaml
- .github/workflows/e2e-chaos.yaml
- .github/workflows/format.yaml
- .github/workflows/build-protobuf.yaml
- tests/v2/e2e/kubernetes/client.go
- internal/net/grpc/errdetails/errdetails.go
⏰ Context from checks skipped due to timeout of 90000ms (117)
🔇 Additional comments (14)
internal/net/http/client/client.go (2)
36-38: Good refactoring by delegating to the new function.
The modification to the New function maintains backward compatibility while delegating to the more flexible NewWithTransport function.
40-45: Well-implemented constructor for custom transports.
The new NewWithTransport function improves code reusability by allowing custom transports. The type assertion and cloning is properly handled to prevent modifying the original transport.
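The clone-before-modify point is worth illustrating, since mutating the shared http.DefaultTransport is a classic source of cross-test interference. A standard-library-only sketch of the same idea follows; NewWithTransport's own internals and options are not reproduced here:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Clone the default transport instead of mutating the shared instance,
	// mirroring the clone-before-modify behavior described above.
	base, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		panic("unexpected default transport type")
	}
	t := base.Clone()
	t.MaxIdleConnsPerHost = 32 // only the clone is modified

	c := &http.Client{Transport: t}
	fmt.Printf("client transport: %T\n", c.Transport)
}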
tests/v2/e2e/kubernetes/rollout.go (2)
30-32: Well-defined constant for rollout annotation.
Using a constant for the Kubernetes rollout annotation key ensures consistency throughout the codebase.
54-76: Clean generic implementation for Kubernetes resource rollouts.
The RolloutRestart function:
- Leverages Go generics for reusability across different Kubernetes resource types
- Uses the retry mechanism correctly to handle potential conflicts
- Properly creates annotations map if none exist
- Follows Kubernetes best practices for triggering rollouts via annotations
The implementation is concise and handles error conditions appropriately.
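For readers unfamiliar with the technique, here is a minimal standalone sketch of the same annotation-bump-plus-retry pattern for a plain Deployment, written directly against client-go. The namespace and name are placeholders, and this is a sketch of the general technique, not the PR's generic implementation:
package rollout

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// restartDeployment bumps a pod-template annotation so the Deployment
// controller rolls all pods, retrying the update on write conflicts.
func restartDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		deploy, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if deploy.Spec.Template.Annotations == nil {
			deploy.Spec.Template.Annotations = map[string]string{}
		}
		deploy.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, deploy, metav1.UpdateOptions{})
		return err
	})
}
Changing the pod-template annotation alters the template hash, which is what makes the controller perform a rolling restart; the conflict retry absorbs concurrent writers.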
tests/v2/e2e/crud/multi_test.go (1)
36-646: Refactor test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations
- Splitting the test into smaller, focused test cases
This would improve readability, maintainability, and reduce the chance of errors.
tests/v2/e2e/crud/stream_test.go (1)
35-541: Extract common error handling into a utility function.
The error handling pattern is repeated throughout the test code. Consider extracting it into a helper function to reduce duplication.
func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}
internal/params/params.go (1)
102-110: Add guard to confirm config file existence before usage.
The current check short-circuits if d.configFilePath is empty or nonexistent, printing usage. Confirm whether a better user message or fallback path is needed, especially if a default file path might exist.
tests/v2/e2e/crud/unary_test.go (2)
412-431: Validate that removal by timestamp works as expected for edge cases.
This block removes vectors older than an hour. Ensure that test data actually has such vectors, or the test might not catch real issues. If none qualify, the test won't truly verify the timestamp-based removal logic.
449-496: Consistent concurrency fix for loop variables.
The concurrency variable capture hazard appears again. Copy i and vec inside each goroutine to ensure each goroutine references the correct values. This was noted earlier but also applies to all loops here in the skip-strict-exist-check tests.
tests/v2/e2e/crud/crud_test.go (1)
179-179: Safeguard collection slicing in calculateRecall.
ds.Neighbors[idx][:len(topKIDs)] can panic if ds.Neighbors[idx] is shorter than len(topKIDs). Ensure len(ds.Neighbors[idx]) >= len(topKIDs) before slicing.
tests/v2/e2e/config/config.go (3)
110-111: Fix YAML tag & doc mismatch for LocalPort
The LocalPort field reuses the same YAML tag as PodPort, which can cause confusion or misconfiguration. Additionally, the doc states "it defaults to PodPort" but your code defaults to 8081.
- PodPort   uint16 `yaml:"pod_port"`  // The port number on the pod.
- LocalPort uint16 `yaml:"pod_port"`  // The local port number; if not set, it defaults to PodPort.
+ PodPort   uint16 `yaml:"pod_port"`   // The port number on the pod.
+ LocalPort uint16 `yaml:"local_port"` // The local port number; if not set, it defaults to PodPort.
233-234: Check for potential index out-of-range panic
Referencing Default.Search.Queries[0] can panic if Default.Search.Queries is empty. Provide a safe fallback.
- sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ var defaultTimeout time.Duration
+ if len(Default.Search.Queries) > 0 {
+ 	defaultTimeout = Default.Search.Queries[0].Timeout
+ } else {
+ 	defaultTimeout = time.Second * 3 // fallback if Queries is empty
+ }
+ sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, defaultTimeout)
466-466: Avoid potential sensitive data exposure in logs
Logging the entire configuration as raw YAML may leak sensitive or private information (PII). Consider sanitizing or removing this debug log.
- log.Debug(config.ToRawYaml(cfg))
+ // log.Debug(config.ToRawYaml(cfg)) // Remove or sanitize to prevent sensitive data leakage
tests/v2/e2e/kubernetes/resources.go (1)
188-232: Revisit read locks for write operations
Each write operation (Create, Update, Delete, etc.) acquires only a read lock. While this may be intended to guard against concurrent modification of the underlying client pointer, it might also allow concurrency in stateful calls that alter resources in the cluster. Consider using an exclusive lock on cluster write operations or verify that the client pointer will not be reassigned during these calls.
if len(datas) < idx {
	return nil
}
Fix array bounds check in search by ID stream.
Same issue appears throughout the file in similar pattern functions.
-if len(datas) < idx {
+if idx >= len(datas) {
return nil
}
if len(datas) < idx {
	return nil
Fix array bounds check in search stream.
The condition should be idx >= len(datas) to correctly detect when we've processed all elements.
-if len(datas) < idx {
+if idx >= len(datas) {
return nil
}
	return nil
}
Fix array bounds check.
The condition should be idx >= len(datas) to correctly detect when we've processed all elements.
-if len(datas) < idx {
+if idx >= len(datas) {
return nil
}
		t.Errorf("failed to insert vector: %v", err)
	}
}
t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
Check for nil response before calling res.String().
When an error occurs, the returned res might be nil. Calling res.String() can panic. Condition the logging on res being non-nil.
-if err != nil {
- ...
-}
-t.Logf("vector %v inserted to %s", vec, res.String())
+if err != nil {
+ ...
+} else if res != nil {
+ t.Logf("vector %v inserted to %s", vec, res.String())
+}
Also applies to: 130-130, 174-174, 218-218, 260-260, 294-294, 341-341, 372-372, 406-406, 430-430, 443-443
res, err := client.IndexInfo(ctx, &payload.Empty{})
if err != nil {
	st, ok := status.FromError(err)
	if ok && st != nil {
		t.Errorf("failed to get IndexInfo %v status: %s", err, st.String())
	} else {
		t.Errorf("failed to get IndexInfo %v", err)
	}
}
t.Logf("IndexInfo: %v", res.String())
Exit early on error to avoid potential res.String() panics.
When client.IndexInfo (or other index calls) fails, res might be nil. Always return or skip logging if err != nil.
Also applies to: 198-207, 210-219, 222-231
Force-pushed b6a36ef to 451eab3
Actionable comments posted: 4
♻️ Duplicate comments (7)
tests/v2/e2e/crud/multi_test.go (1)
423-431: ⚠️ Potential issue: Avoid potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil. Here's an example fix:
 if err != nil {
 	t.Errorf("failed to get object: %v", err)
+	return nil
+}
+if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+} else {
+	t.Errorf("failed to get object: object is nil")
+	return nil
 }
-t.Logf("id %s got object: %v", id, obj.String())
tests/v2/e2e/crud/stream_test.go (1)
432-432: ⚠️ Potential issue: Fix incorrect slice range when removing vectors.
The current implementation uses cfg.Update.Offset and cfg.Update.Num instead of the corresponding Remove configuration values, which will likely select the wrong vectors for removal.
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/crud/unary_test.go (4)
36-50: ⚠️ Potential issue: Check for potential nil res in IndexProperty logging.
If an error occurs at lines 41-47, res might be nil, but line 49 unconditionally calls res.String(). Return early or conditionally log to avoid nil pointer panics.
 if err != nil {
 	...
+	return
 }
-t.Logf("IndexProperty: %v", res.String())
+if res != nil {
+	t.Logf("IndexProperty: %v", res.String())
+}
412-431: 🛠️ Refactor suggestion: Skip logging res.String() if res is nil.
If an error occurs when calling RemoveByTimestamp, res might be nil. Accessing res.String() at line 430 could panic.
 if err != nil {
 	...
+	return
 }
-t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String())
+if res != nil {
+	t.Logf("removed by timestamp %s to %s", time.Unix(0, rts).String(), res.String())
+}
267-310: ⚠️ Potential issue: Avoid calling obj.String() if obj might be nil.
Line 283 logs obj.String() even if an error occurred at lines 275-281. If obj is nil, this will cause a panic.
 if err != nil {
 	...
+	return
 }
-t.Logf("id %s got object: %v", id, obj.String())
+if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+}
52-90: 🛠️ Refactor suggestion: Use goroutine variable capture to avoid data race.
In lines 56-62, the loop variables i and vec are captured by the goroutine. To ensure correctness, copy them into local variables before spawning the goroutine.
 for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
+	iCopy, vecCopy := i, vec
 	eg.Go(safety.RecoverFunc(func() error {
-		res, err := client.Insert(ctx, &payload.Insert_Request{ /* uses i, vec */ })
+		res, err := client.Insert(ctx, &payload.Insert_Request{ /* uses iCopy, vecCopy */ })
 		...
 	}))
 }
tests/v2/e2e/crud/crud_test.go (1)
186-196: 🛠️ Refactor suggestion: Short-circuit on error to prevent dereferencing res.
When client.IndexInfo fails, res might be nil. Line 195 calls res.String(), risking a panic.
 if err != nil {
 	...
+	return
 }
-t.Logf("IndexInfo: %v", res.String())
+if res != nil {
+	t.Logf("IndexInfo: %v", res.String())
+}
🧹 Nitpick comments (6)
tests/v2/e2e/crud/multi_test.go (2)
76-106: Extract error handling pattern into a reusable function.
This error handling pattern is repeated throughout the test code. Consider creating a helper function to reduce duplication and improve maintainability.
func handleError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}
Then use it throughout the code:
handleError(t, err, "insert vector")
57-104: Consider using a helper function for creating insert requests.
The request creation logic is duplicated across multiple operations (insert, update, upsert). Extract common patterns to improve maintainability.
func createMultiRequest(t *testing.T, vectors [][]float32, idOffset int, ts int64, bulkSize int, createReqFunc func(id string, vec []float32, timestamp int64) interface{}) []interface{} {
	reqs := make([]interface{}, 0, bulkSize)
	for i, vec := range vectors {
		id := strconv.Itoa(i + idOffset)
		reqs = append(reqs, createReqFunc(id, vec, ts))
		if len(reqs) >= bulkSize {
			break
		}
	}
	return reqs
}
tests/v2/e2e/kubernetes/portforward.go (2)
38-50: Consider more descriptive naming or doc comment expansion.
The Portforward struct is straightforward but consider adding more details in the doc comment about the typical usage pattern or rename it to PortForwarder for clarity. This helps future maintainers quickly understand the type's role.
120-197: Provide an isolated example or rename sample for clarity.
The sample function is a helpful usage guide but is mixed with production code. Consider moving it to an examples directory or giving it a name that clarifies its purpose (e.g., ExamplePortforwardUsage).
tests/v2/e2e/crud/unary_test.go (1)
640-674: Fix the inconsistent log message about an error code.
At line 634, the text references a "NotFound" error, but the check above expects codes.AlreadyExists. Align the log message with the actual error code.
- t.Logf("received a NotFound error on #5: %s", err.Error())
+ t.Logf("received an AlreadyExists error on #5: %s", err.Error())
tests/v2/e2e/crud/crud_test.go (1)
127-144: Reduce duplication in newClient.
Multiple fields in this constructor are repeated across e2e tests. Consider consolidating the client creation logic in a shared utility function or package.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (37)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (5 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (3 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (27)
- .github/workflows/e2e-profiling.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/helm.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/format.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/e2e-chaos.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/coverage.yaml
- internal/net/http/client/client.go
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/_docker-image.yaml
- .github/workflows/reviewdog.yaml
- internal/params/option.go
- .github/workflows/build-binaries.yaml
- tests/v2/e2e/kubernetes/client.go
- .github/workflows/fossa.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/e2e.yaml
- .github/workflows/chatops.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- internal/errors/errors.go
⏰ Context from checks skipped due to timeout of 90000ms (123)
🔇 Additional comments (21)
internal/net/grpc/errdetails/errdetails.go (6)
81-83: Good addition of the Details struct for better organization.
This struct provides a more structured representation for error details collection, improving JSON serialization.
94-95: Improved type handling with prefix trimming.
The refactored code now correctly handles both standard and v1-prefixed type URLs more cleanly.
166-168: Great addition of default case for handling unknown message types.
Adding support for unmarshaling unknown types using protojson.Marshal makes the code more robust when encountering unexpected message types.
169-172: Enhanced error logging with better context.
The improved error logging now includes the type name and object details, making debugging easier.
316-319: Validate error handling result in Serialize.
Currently, if marshaling Details fails, the function falls back to fmt.Sprint(objs...), which may mask the underlying error cause. Consider logging or returning the error more explicitly if possible.
417-422: Good enhancement with default case for handling unknown types in AnyToErrorDetail.
The new default case allows the function to attempt unmarshaling any unknown types, improving robustness.
tests/v2/e2e/crud/multi_test.go (1)
36-646: Refactor test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases, as in the sub-test sketch below.
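A minimal shape for such a split, using table-driven sub-tests; the scenario names and fields are hypothetical, not taken from the PR:
package crud_test

import "testing"

func TestE2ESearchScenarios(t *testing.T) {
	// Each scenario becomes an independently reportable, filterable sub-test.
	for _, tc := range []struct {
		name string
		k    uint32
	}{
		{name: "top-10", k: 10},
		{name: "top-100", k: 100},
	} {
		t.Run(tc.name, func(t *testing.T) {
			// issue the search request with tc.k and assert on the response here
			_ = tc.k
		})
	}
}
Sub-tests can then be run selectively, e.g. go test -run 'TestE2ESearchScenarios/top-10'.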
tests/v2/e2e/crud/stream_test.go (1)
35-541: Extract common error handling into a utility function.
The error handling pattern is repeated throughout the stream test code. Consider creating a helper function to reduce duplication and improve maintainability.
internal/params/params.go (4)
45-59: Good enhancement of parser struct with flag management capabilities.
The added fields allow better control over flag overriding, which is particularly useful in testing scenarios.
67-71: Well-designed flag management initialization.
The approach of saving defaults and conditionally overriding them provides flexibility while maintaining the ability to restore original settings.
112-122: Good addition of Restore and Override methods.
These methods provide clean mechanisms for managing command line flags, which is especially useful for tests that need to manipulate flags temporarily.
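For illustration, a usage bracket along these lines keeps the override scoped; the Parse return values here are an assumption, not confirmed from the package:
package params_test

import (
	"testing"

	"github.com/vdaas/vald/internal/params"
)

// TestOverrideRestore sketches the Override/Restore bracket; the exact
// Parse signature is an assumption for illustration.
func TestOverrideRestore(t *testing.T) {
	p := params.New()
	p.Override()      // point flag.CommandLine at the parser's FlagSet
	defer p.Restore() // put the original global flag set back

	if _, _, err := p.Parse(); err != nil {
		t.Logf("parse returned: %v", err)
	}
}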
96-97: Improved error handling using errors.Is.
The change to use errors.Is(err, flag.ErrHelp) is more idiomatic and follows best practices for error comparison.
tests/v2/e2e/kubernetes/portforward.go (2)
1-18: Good initial structure and licensing.
The file header and licensing information are well-organized and follow best practices.
52-62: Validate parameters in NewPF.
The constructor doesn't validate critical fields such as localPort and podPort. Consider adding explicit checks to avoid runtime failures.
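For instance, a guard of roughly this shape could run at the top of the constructor; validatePorts and the zero-value rules are illustrative assumptions, not the PR's code:
package kubernetes

import "errors"

// validatePorts rejects unusable port pairs before any forwarding starts;
// it is an illustrative guard, not the PR's actual code.
func validatePorts(localPort, podPort uint16) error {
	if podPort == 0 {
		return errors.New("pod port must be non-zero")
	}
	if localPort == 0 {
		return errors.New("local port must be non-zero (or default it to podPort)")
	}
	return nil
}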
tests/v2/e2e/crud/crud_test.go (2)
210-220: Ensure res is non-nil before logging in IndexStatistics block.
The same nil-pointer concern applies here if the gRPC request fails.
168-181: Consider bounds-checking for ds.Neighbors[idx] before slicing.
Line 179 slices ds.Neighbors[idx] up to len(topKIDs). If neighbors is shorter, this causes an out-of-bounds panic. Add a length guard to ensure safe slicing.
tests/v2/e2e/kubernetes/resources.go (1)
1-372: All-around solid implementation of Kubernetes resource abstractions.
The use of sync.RWMutex for concurrency is well-handled, ensuring thread safety across create/read/update/delete methods. The code cleanly separates interfaces for different resource categories and enforces necessary validations. Overall, this file looks good to merge.
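As background for the read-vs-write locking discussion elsewhere in this review, a minimal sketch of the split that sync.RWMutex provides; the guarded type is illustrative, not the PR's code:
package main

import (
	"fmt"
	"sync"
)

// guarded shows the usual split: RLock for reads of the shared value,
// Lock for reassigning it.
type guarded struct {
	mu  sync.RWMutex
	val string
}

func (g *guarded) get() string {
	g.mu.RLock()
	defer g.mu.RUnlock()
	return g.val
}

func (g *guarded) set(v string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.val = v
}

func main() {
	g := &guarded{val: "clientset-a"}
	g.set("clientset-b")
	fmt.Println(g.get())
}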
tests/v2/e2e/config/config.go (3)
110-111: Doc mismatch for local port default behavior.
233-234: Prevent potential index out-of-range panic.
465-466: Potential PII exposure in debug log.tests/v2/e2e/kubernetes/status.go (1)
1-511: Robust status handling.
This file provides comprehensive checks for various Kubernetes resources, covering many edge cases. The ticker-based polling approach is reasonable. The code is cohesive and well structured, and each evaluation function is clearly scoped to a specific resource type.
if len(cfg.Search.Queries) < qidx {
	qidx = 0
Fix query array bounds check in stream search functions.
The condition should be qidx >= len(cfg.Search.Queries) to correctly detect when all queries have been processed.
-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
qidx = 0
idx++
}
Also applies to: 177-178, 237-238, 299-300
func (p *Portforward) Start() error {
	stream := genericclioptions.IOStreams{
		In:     os.Stdin,
		Out:    os.Stdout,
		ErrOut: os.Stderr,
	}

	path := fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/portforward",
		p.namespace, p.podName)

	hostIP := strings.TrimPrefix(strings.TrimPrefix(p.restConfig.Host, "http://"), "https://")

	transport, upgrader, err := spdy.RoundTripperFor(p.restConfig)
	if err != nil {
		return err
	}

	ech := make(chan error, 1)
	go func() {
		fw, err := portforward.New(
			spdy.NewDialer(
				upgrader,
				&http.Client{Transport: transport},
				http.MethodPost,
				&url.URL{Scheme: "https", Path: path, Host: hostIP},
			),
			[]string{fmt.Sprintf("%d:%d", p.localPort, p.podPort)},
			p.stopCh,
			p.readyCh,
			stream.Out,
			stream.ErrOut,
		)
		if err != nil {
			ech <- err
		}

		err = fw.ForwardPorts()
		if err != nil {
			ech <- err
		}
	}()

	select {
	case <-p.readyCh:
		return nil
	case err = <-ech:
		return err
	}
}
Potential missing imports for strings and net/url.
Line 74 uses strings.TrimPrefix, and line 88 references url.URL, but there are no imports for them. This likely results in a compilation error.
Apply this diff to fix the issue:
+import (
+ "net/url"
+ "strings"
+ ...
+)
Force-pushed 451eab3 to 0aa2a87
Actionable comments posted: 2
♻️ Duplicate comments (11)
tests/v2/e2e/crud/multi_test.go (1)
423-431: ⚠️ Potential issue: Avoid potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic.
- if err != nil {
- 	t.Errorf("failed to get object: %v", err)
- }
- t.Logf("id %s got object: %v", id, obj.String())
+ if err != nil {
+ 	t.Errorf("failed to get object: %v", err)
+ 	return nil
+ }
+ if obj != nil {
+ 	t.Logf("id %s got object: %v", id, obj.String())
+ } else {
+ 	t.Errorf("failed to get object: object is nil")
+ 	return nil
+ }
tests/v2/e2e/crud/stream_test.go (4)
67-69: ⚠️ Potential issue: Fix array bounds check logic across all stream handlers.
The condition should be idx >= len(datas) to correctly detect when the index is out of bounds.
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
This pattern appears multiple times throughout the file in similar functions.
Also applies to: 112-113, 174-176, 234-236, 296-298, 356-358, 391-393, 435-437, 474-476
115-116: ⚠️ Potential issue: Fix query array bounds check in stream search functions.
The condition should be qidx >= len(cfg.Search.Queries) to correctly detect when all queries have been processed.
-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
 	qidx = 0
 	idx++
 }
Also applies to: 177-178, 237-238, 299-300
161-162: ⚠️ Potential issue: Fix error messages for different stream operations.
The error messages for non-insert operations incorrectly mention "insert" in the completion error messages.
-t.Errorf("failed to complete insert stream %v", err)
+t.Errorf("failed to complete %s stream %v", "search", err)
Update each error message to reference the correct operation (search, linearSearch, get, update, remove, upsert).
Also applies to: 223-224, 284-285, 345-346, 376-377, 420-421, 458-460, 502-504
432-432: ⚠️ Potential issue: Resolve suspicious slice range when removing vectors.
You're using ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num] instead of referencing cfg.Remove for both the offset and length.
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/kubernetes/client.go (1)
69-79: ⚠️ Potential issue: Clarify error handling logic in newClient function.
The current error handling logic is confusing. If creating the clientset fails, it attempts to get an in-cluster client but returns errors inconsistently.
 func newClient(cfg *rest.Config) (Client, error) {
 	clientset, err := kubernetes.NewForConfig(cfg)
 	if err != nil {
 		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
 		c, ierr := inClusterConfigClient()
-		if ierr == nil {
-			return nil, errors.Join(err, ierr)
+		if ierr != nil {
+			return nil, errors.Join(err, ierr)
 		}
 		return c, nil
 	}
 	return &client{
 		rest:      cfg,
 		clientset: clientset,
 	}, nil
 }
tests/v2/e2e/crud/unary_test.go (2)
74-83: ⚠️ Potential issue: Check for nil response object before calling res.String().
If err != nil, the server might not return a valid res. Logging res.String() without verifying res is non-nil can cause a nil-pointer panic. The logging should be changed to:
 if err != nil {
 	...
 } else if res != nil {
 	t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
 }
52-85: 🛠️ Refactor suggestion: Avoid variable capture pitfalls in goroutines.
The loop variable vec is captured by the closure without creating a local copy, which can lead to unexpected behavior if the loop index or slice value is modified by the time the goroutine executes. To reliably capture the current iteration values, introduce local copies inside the loop.
tests/v2/e2e/config/config.go (3)
110-111: ⚠️ Potential issue: Fix YAML tag mismatch for LocalPort.
Currently, LocalPort reuses the same YAML tag as PodPort, which can lead to misconfiguration or confusion. Apply this diff to correct the YAML tag:
- LocalPort uint16 `yaml:"pod_port"`
+ LocalPort uint16 `yaml:"local_port"`
233-234: ⚠️ Potential issue: Prevent potential index out-of-range panic.
Referencing Default.Search.Queries[0] can panic if Default.Search.Queries is empty. Provide a safe fallback to avoid runtime errors.
- sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ if Default.Search != nil && Default.Search.Queries != nil && len(Default.Search.Queries) > 0 {
+ 	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ } else {
+ 	sq.Timeout = time.Second * 3 // fallback if Queries is empty
+ }
465-466: ⚠️ Potential issue: Potential PII exposure in debug logs.
Logging the entire configuration may inadvertently expose sensitive or private data. Recommend removing or sanitizing this debug log to prevent accidental data leakage.
- log.Debug(config.ToRawYaml(cfg))
+ // log.Debug(config.ToRawYaml(cfg)) // Remove or sanitize to prevent sensitive data leakage
🧹 Nitpick comments (7)
tests/v2/e2e/kubernetes/portforward.go (2)
67-67: Fix typo in variable name.
The variable portPiars should be portPairs for better readability and correct terminology.
-portPiars := make([]string, 0, len(ports))
+portPairs := make([]string, 0, len(ports))
Also update the variable throughout the function.
86-94: Add context handling in goroutine.
The goroutine should handle context cancellation to ensure proper cleanup.
 errgroup.Go(safety.RecoverFunc(func() (err error) {
 	defer cancel()
 	defer close(ech)
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	default:
 		if err = pf.ForwardPorts(); err != nil {
 			ech <- err
 		}
+	}
 	return nil
 }))
tests/v2/e2e/crud/unary_test.go (1)
372-372: Refine contrasting grammar in log messages.
The log message 'id %s'd vector removed to %s' can seem awkward and confusing to readers. Consider simplifying or clarifying the grammar. Example fix:
- t.Logf("id %s'd vector removed to %s", id, res.String())
+ t.Logf("id %s vector removed, response: %s", id, res.String())
Also applies to: 699-699
tests/v2/e2e/kubernetes/resources.go (3)
180-185: Review necessity of b == nil checks.
These methods check b == nil before reference; however, if b is truly nil, invoking the method on a nil receiver often leads to a panic in Go. Check if this is truly needed or if it can be removed to reduce complexity.
277-287: Consolidate error handling in extended pod operations.
When UpdateEphemeralContainers is not implemented, you return errors.ErrUnimplemented. Ensure that this is the intended behavior, or provide a fully implemented method, since calling code might inadvertently fail at runtime.
301-335: Handle unimplemented Scale operations gracefully.
If the ScaleInterface is not supported by b.Interface, you return errors.ErrUnimplemented(...). Verify that the calling code or documentation warns of these partial implementations, so users do not rely on an incomplete contract.
74-117: Consider adding a context deadline or timeout.The
WaitForStatusfunction loops indefinitely unless the context is canceled. If the user forgets to set a deadline or cancel the context, the loop might never exit. Consider adding a default timeout or reinforcing this requirement in documentation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (37)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (5 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (3 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (28)
- .github/workflows/format.yaml
- .github/workflows/fossa.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/e2e-code-bench-agent.yaml
- internal/net/http/client/client.go
- .github/workflows/reviewdog.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/helm.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/coverage.yaml
- internal/net/grpc/errdetails/errdetails.go
- .github/workflows/_docker-image.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/e2e.yaml
- .github/workflows/reviewdog-k8s.yaml
- internal/errors/errors.go
- .github/workflows/unit-test-rust.yaml
- internal/params/params.go
- .github/workflows/e2e-chaos.yaml
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/chatops.yaml
- internal/params/option.go
⏰ Context from checks skipped due to timeout of 90000ms (138)
🔇 Additional comments (7)
tests/v2/e2e/crud/multi_test.go (1)
36-646: Refactor test for better maintainability.The test function is very large (600+ lines) with significant code duplication in error handling and request creation patterns. This makes maintenance difficult and increases the risk of bugs.
Consider:
- Extracting helper functions for common operations
- Splitting the test into smaller, focused test cases
Example helper functions:
func handleError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}

func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

Smaller test functions:

func TestE2EMultiInsert(t *testing.T) { /* ... */ }
func TestE2EMultiSearch(t *testing.T) { /* ... */ }
tests/v2/e2e/crud/stream_test.go (1)
35-541: Extract common error handling into a utility function. The error handling pattern is repeated throughout the test code, creating unnecessary duplication.

func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}

Then use it throughout the code:

if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
}
tests/v2/e2e/kubernetes/portforward.go (1)
22-36: ⚠️ Potential issue
✅ Verification successful

Missing imports for strings and net/url. Line 74 uses strings.TrimPrefix, and line 88 references url.URL, but there are no imports for these packages.

🏁 Script executed:

#!/bin/bash
# Check if strings.TrimPrefix or url.URL is actually used in this file
rg -q "strings\.TrimPrefix|url\.URL" tests/v2/e2e/kubernetes/portforward.go
echo "Found references to strings.TrimPrefix or url.URL: $?"

Length of output: 192

Verification Complete: No Missing Imports Detected

After running the check, neither strings.TrimPrefix nor url.URL is found in the file tests/v2/e2e/kubernetes/portforward.go. It appears that the previously mentioned code lines (74 and 88) referencing these packages have been removed or updated. No additional import statements are necessary at this time.
161-174: Validate ds.Neighbors[idx] length before slicing for recall calculation. When computing recall, the code slices ds.Neighbors[idx][:len(topKIDs)]. If len(ds.Neighbors[idx]) is less than len(topKIDs), this may cause an out-of-bounds error.

Would you like to add a length check before slicing? For instance:

if len(ds.Neighbors[idx]) < len(topKIDs) {
	t.Errorf("neighbors array length (%d) is smaller than topKIDs length (%d)", len(ds.Neighbors[idx]), len(topKIDs))
	return 0
}
rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
630-634: Fix mismatch between error code and log message. This block checks for codes.AlreadyExists but logs a "NotFound" error message, which is inconsistent. Ensure the log message reflects the actual gRPC status code to avoid confusion.

Example fix:

if !ok || st == nil || st.Code() != codes.AlreadyExists {
	t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err)
}
-t.Logf("received a NotFound error on #5: %s", err.Error())
+t.Logf("received an AlreadyExists error on #5: %s", err.Error())
158-164: Prevent potential out-of-bounds slicing. When slicing ds.Neighbors[idx] to match the length of topKIDs, ensure that ds.Neighbors[idx] has at least as many elements as len(topKIDs). Otherwise, this will cause a runtime panic.
343-347: Avoid potential race conditions when swapping interfaces. SetInterface acquires a write lock, but consider whether additional synchronization is required to ensure the safe usage of the updated interface in concurrently running goroutines. If the usage pattern is always single-threaded or well-sequenced, this is fine.
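For illustration, a minimal sketch of the guarded-swap pattern this comment alludes to; the generic holder type and the Snapshot method are hypothetical stand-ins, not the actual resources.go code:

package k8s

import "sync"

// holder guards an interface value so it can be swapped while readers use it.
type holder[T any] struct {
	mu sync.RWMutex
	v  T
}

// SetInterface swaps the stored value under the write lock.
func (h *holder[T]) SetInterface(v T) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.v = v
}

// Snapshot returns the current value under the read lock; callers then work
// on their local copy, so a concurrent SetInterface cannot race with them.
func (h *holder[T]) Snapshot() T {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return h.v
}

As long as every reader goes through Snapshot and operates on the returned copy, the swap itself needs no extra synchronization.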
eg.Wait()

eg, _ = errgroup.New(ctx)
eg.SetLimit(int(cfg.LinearSearch.Concurrency))
for i, vec := range ds.Test[cfg.LinearSearch.Offset : cfg.LinearSearch.Offset+cfg.LinearSearch.Num] {
	for _, query := range cfg.LinearSearch.Queries {
		id := strconv.Itoa(i)
		rid := id + "-" + payload.Search_AggregationAlgorithm_name[int32(query.Algorithm)]
		eg.Go(safety.RecoverFunc(func() error {
			var ratio *wrapperspb.FloatValue
			if query.Ratio != 0 {
				ratio = wrapperspb.Float(query.Ratio)
			} else {
				ratio = nil
			}

			res, err := client.LinearSearch(ctx, &payload.Search_Request{
				Vector: vec,
				Config: &payload.Search_Config{
					RequestId:            rid,
					Num:                  query.K,
					Radius:               query.Radius,
					Epsilon:              query.Epsilon,
					Timeout:              query.Timeout.Nanoseconds(),
					AggregationAlgorithm: query.Algorithm,
					MinNum:               query.MinNum,
					Ratio:                ratio,
					Nprobe:               query.Nprobe,
				},
			})
			if err != nil {
				st, ok := status.FromError(err)
				if ok && st != nil {
					t.Errorf("failed to search vector: %v, status: %s", err, st.String())
				} else {
Check for nil res in indexStatus before logging.
Several blocks in indexStatus call res.String() even after an error is returned. If the server didn’t return a valid res, logging will panic.
Example fix:
if err != nil {
...
-}
-t.Logf("IndexInfo: %v", res.String())
+ return
+} else if res != nil {
+ t.Logf("IndexInfo: %v", res.String())
}
	res, err := client.IndexInfo(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexInfo %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexInfo %v", err)
		}
	}
	t.Logf("IndexInfo: %v", res.String())
}
{
	res, err := client.IndexDetail(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexDetail %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexDetail %v", err)
		}
	}
	t.Logf("IndexDetail: %v", res.String())
}
{
	res, err := client.IndexStatistics(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexStatistics %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexStatistics %v", err)
		}
	}
	t.Logf("IndexStatistics: %v", res.String())
}
{
	res, err := client.IndexStatisticsDetail(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexStatisticsDetail %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexStatisticsDetail %v", err)
		}
	}
	t.Logf("IndexStatisticsDetail: %v", res.String())
}
Guard against nil pointers in indexStatus.
Similar to unary_test.go, these blocks log res.String() even if there's an error. If res is nil, you risk a panic. Return early or check res != nil before logging.
0aa2a87 to
cdb35f5
Compare
Actionable comments posted: 1
♻️ Duplicate comments (4)
tests/v2/e2e/crud/multi_test.go (1)
422-430: ⚠️ Potential issue
Fix potential nil pointer dereference. If err is non-nil, obj may be nil, causing obj.String() to panic. You need to check if obj is nil before calling String() or return early.

Apply this fix:

- if err != nil {
-	t.Errorf("failed to get object: %v", err)
- }
- t.Logf("id %s got object: %v", id, obj.String())
+ if err != nil {
+	t.Errorf("failed to get object: %v", err)
+	return nil
+ }
+ if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+ } else {
+	t.Errorf("failed to get object: object is nil")
+	return nil
+ }
tests/v2/e2e/kubernetes/client.go (1)
69-79: 🛠️ Refactor suggestion
Clarify error handling logic in the newClient function. The current error handling logic is confusing. If creating the clientset fails, it attempts to get an in-cluster client but returns errors inconsistently. It either returns a nil client and both errors or returns a non-nil client and nil error.

func newClient(cfg *rest.Config) (Client, error) {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Debugf("failed to create clientset from config %s,\terror: %v", cfg.String(), err)
		c, ierr := inClusterConfigClient()
-		if ierr == nil {
-			return nil, errors.Join(err, ierr)
+		if ierr != nil {
+			return nil, errors.Join(err, ierr)
		}
		return c, nil
	}
	return &client{
		rest:      cfg,
		clientset: clientset,
	}, nil
}
tests/v2/e2e/crud/unary_test.go (2)
55-84: ⚠️ Potential issue
Fix closure capture in goroutines. The goroutines capture loop variables directly, which can lead to unexpected behavior as the variables change in the loop. Copy the variables before using them in the closure.

for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
	id := strconv.Itoa(i)
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
+	idCopy, vecCopy, tsCopy := id, vec, ts
	eg.Go(safety.RecoverFunc(func() error {
		res, err := client.Insert(ctx, &payload.Insert_Request{
			Vector: &payload.Object_Vector{
-				Id:        id,
-				Vector:    vec,
-				Timestamp: ts,
+				Id:        idCopy,
+				Vector:    vecCopy,
+				Timestamp: tsCopy,
			},
			Config: &payload.Insert_Config{
-				Timestamp:            ts,
+				Timestamp:            tsCopy,
				SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
			},
		})
		// ...
-		t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
+		t.Logf("vector %v id %s inserted to %s", vecCopy, idCopy, res.String())
		return nil
	}))
}

Apply this pattern to all goroutines created in loops throughout the file.
39-48: ⚠️ Potential issue
Check for nil before calling String() methods. Throughout the code, you call methods like res.String() without checking if the object is nil after an error. This can cause panic. Apply this pattern at all similar locations in the code.

if err != nil {
	st, ok := status.FromError(err)
	if ok && st != nil {
		t.Errorf("failed to get IndexProperty %v status: %s", err, st.String())
	} else {
		t.Errorf("failed to get IndexProperty %v", err)
	}
+	return
+}
+if res != nil {
+	t.Logf("IndexProperty: %v", res.String())
+} else {
+	t.Log("IndexProperty response is nil")
}
-t.Logf("IndexProperty: %v", res.String())

This pattern should be applied to similar instances across the file where String() methods are called after error handling.
🧹 Nitpick comments (6)
tests/v2/e2e/crud/unary_test.go (2)
633-634: Correct the log message to match the actual error type. The log message incorrectly says "NotFound" when the error code check at line 629 is actually for "AlreadyExists".

- t.Logf("received a NotFound error on #5: %s", err.Error())
+ t.Logf("received an AlreadyExists error on #5: %s", err.Error())
298-305: Remove redundant error logging. There's double error logging here: first a simple error log at line 299, then the standard error handling pattern. Remove the first error log to avoid duplication.

if err != nil {
-	t.Errorf("failed to get timestamp: %v", err)
	st, ok := status.FromError(err)
	if ok && st != nil {
		t.Errorf("failed to get object timestamp: %v, status: %s", err, st.String())
	} else {
		t.Errorf("failed to get object timestamp: %v", err)
	}
}
tests/v2/e2e/config/config.go (2)
349-421: Consider reducing duplicated default setting logic. Multiple sections (e.g., Insert, Update, Upsert, Remove, etc.) repeat similar assignments for Num, Offset, and Concurrency. Extracting common defaults into a helper function would adhere to DRY principles and simplify maintenance.
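If the duplication is worth removing, one possible shape is a small helper applied to each section; the field names follow this review, while the concrete types and the helper name are assumptions:

// applyDefaults fills zero-valued common fields from the package defaults.
// A hedged sketch: the real config structs may use different field types.
func applyDefaults(num, offset *uint64, concurrency *uint32, defNum, defOffset uint64, defConcurrency uint32) {
	if *num == 0 {
		*num = defNum
	}
	if *offset == 0 {
		*offset = defOffset
	}
	if *concurrency == 0 {
		*concurrency = defConcurrency
	}
}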
124-189: Enhance metadata parsing logic. Parsing metadata into d.Metadata relies on environment expansions and manual splits. Extracting this into a small, dedicated parser function could improve clarity, reduce errors, and simplify unit testing.
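A dedicated parser might look roughly like the following; the comma-separated key=value format and the parseMetadata name are assumptions based on the "environment expansions and manual splits" description:

package config

import (
	"fmt"
	"os"
	"strings"
)

// parseMetadata expands environment variables, then splits comma-separated
// "key=value" pairs into a map, failing fast on malformed entries.
func parseMetadata(raw string) (map[string]string, error) {
	m := make(map[string]string)
	for _, kv := range strings.Split(os.ExpandEnv(raw), ",") {
		kv = strings.TrimSpace(kv)
		if kv == "" {
			continue
		}
		k, v, ok := strings.Cut(kv, "=")
		if !ok {
			return nil, fmt.Errorf("invalid metadata entry: %q", kv)
		}
		m[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return m, nil
}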
tests/v2/e2e/kubernetes/status.go (2)
92-93: Consider parameterizing the ticker interval. A fixed 5-second polling interval may not be optimal for all environments or resource scales. Exposing it as a configuration parameter could provide more flexibility (e.g., faster feedback loops in small clusters, reduced load in large ones).
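A parameterized polling loop is easy to sketch; WaitForStatus is the function named in this review, but the signature and helper name below are assumptions:

// waitWithInterval polls check at the supplied interval until it reports
// done, returns an error, or the context is cancelled.
func waitWithInterval(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			done, err := check()
			if err != nil {
				return err
			}
			if done {
				return nil
			}
		}
	}
}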
153-174: Replace the big type switch with a more modular strategy. A single switch handling multiple resource types can grow unwieldy. Consider extracting resource-specific checks into a registerable strategy or employing generics more extensively, reducing boilerplate in CheckResourceState.
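One registerable-strategy shape, sketched with a map keyed by the concrete resource type; every name here is hypothetical, and the real CheckResourceState would consult the registry instead of switching:

package k8s

import (
	"fmt"
	"reflect"
)

// Status stands in for the package's resource status type.
type Status string

// checkers maps a concrete resource type to its state-check function.
var checkers = map[reflect.Type]func(obj any) (Status, error){}

// RegisterChecker installs a typed checker, erasing the type at the map edge.
func RegisterChecker[T any](fn func(T) (Status, error)) {
	var zero T
	checkers[reflect.TypeOf(zero)] = func(obj any) (Status, error) {
		return fn(obj.(T))
	}
}

// CheckResourceState dispatches to the registered checker, if any.
func CheckResourceState(obj any) (Status, error) {
	if fn, ok := checkers[reflect.TypeOf(obj)]; ok {
		return fn(obj)
	}
	return "", fmt.Errorf("no checker registered for %T", obj)
}

Each resource file could then register its own checker in an init function, keeping the dispatch table out of the status-monitoring code.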
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (3 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (29)
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/test-hack.yaml
- internal/errors/errors.go
- .github/workflows/e2e-profiling.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/helm.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/chatops.yaml
- .github/workflows/format.yaml
- .github/workflows/reviewdog.yaml
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/e2e-chaos.yaml
- internal/params/option.go
- .github/workflows/e2e.yaml
- .github/workflows/fossa.yaml
- internal/net/http/client/client.go
- tests/v2/e2e/crud/stream_test.go
- internal/net/grpc/errdetails/errdetails.go
- .github/workflows/codeql-analysis.yml
- internal/params/params.go
- .github/workflows/helm-lint.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/coverage.yaml
- .github/workflows/build-protobuf.yaml
⏰ Context from checks skipped due to timeout of 90000ms (148)
- GitHub Check: build / build
- GitHub Check: build / dump-contexts-to-log
🔇 Additional comments (10)
tests/v2/e2e/crud/multi_test.go (1)
35-645: Refactor test for better maintainability. The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases.
Here are some suggested improvements:
- Extract helper functions for common operations:
func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}
- Split the test into smaller test cases:
func TestE2EMultiInsert(t *testing.T) { /* ... */ }
func TestE2EMultiSearch(t *testing.T) { /* ... */ }
func TestE2EMultiSearchByID(t *testing.T) { /* ... */ }
func TestE2EMultiLinearSearch(t *testing.T) { /* ... */ }
func TestE2EMultiUpdate(t *testing.T) { /* ... */ }
func TestE2EMultiRemove(t *testing.T) { /* ... */ }
func TestE2EMultiUpsert(t *testing.T) { /* ... */ }
tests/v2/e2e/kubernetes/portforward.go (1)
52-63: Details
✅ Verification successful

Verify dependencies for Pod-related functionality. This code uses several undeclared functions and constants:
- Pod(c, namespace) at line 52
- CheckResourceState(pod) at line 56
- StatusAvailable at line 61
Ensure these are properly imported from other packages in the codebase or defined in this package in other files.

🏁 Script executed:

#!/bin/bash
# Check where Pod, CheckResourceState, and StatusAvailable are defined
echo "Searching for Pod function..."
rg -A 2 "func Pod" --type go
echo "Searching for CheckResourceState function..."
rg -A 2 "func CheckResourceState" --type go
echo "Searching for StatusAvailable constant..."
rg "StatusAvailable" --type go

Length of output: 2827

Review Update: Dependency Verification Confirmed

The external references in this code snippet are correctly defined elsewhere in the repository. Specifically:
- The Pod(c, namespace) function is implemented in tests/v2/e2e/kubernetes/resources.go.
- The CheckResourceState(pod) function and the StatusAvailable constant are defined in tests/v2/e2e/kubernetes/status.go.

Please ensure that tests/v2/e2e/kubernetes/portforward.go imports the correct packages so that these definitions are properly referenced. Also, verify whether any previous concerns regarding missing imports (i.e., for strings and net/url) are still applicable; they do not seem necessary based on the current code snippet.
internal/errors/k8s.go (1)
20-38: All error declarations look good. These error variables are well-structured and descriptive. No immediate issues with correctness or consistency.
tests/v2/e2e/crud/crud_test.go (2)
181-231: Guard against nil pointer dereferences. If err != nil, there's a possibility that res is nil, which would cause a panic at res.String(). This has been flagged before in older review comments.
177-177: Details
❓ Verification inconclusive

Verify slice bounds when slicing ds.Neighbors[idx]. If ds.Neighbors[idx] has fewer elements than len(topKIDs), slicing with [:len(topKIDs)] can cause an out-of-range error.

🏁 Script executed:

#!/bin/bash
# Searching for all references to ds.Neighbors in the codebase to confirm length checks.
# Explanation: We want to ensure ds.Neighbors[idx] is always long enough before slicing.
rg "ds\.Neighbors\[.*?\]\[" -A 2

Length of output: 641

Ensure Safe Slicing of ds.Neighbors

The current slicing of ds.Neighbors[idx] using [:len(topKIDs)] (e.g. in tests/v2/e2e/crud/crud_test.go and similarly in tests/e2e/operation/stream.go) assumes that ds.Neighbors[idx] always contains at least len(topKIDs) elements. If that invariant is not guaranteed in all cases, this could result in an out-of-range error. Please verify that this assumption holds; if not, consider adding an explicit length check or another safeguard to prevent potential runtime errors.

- Location: tests/v2/e2e/crud/crud_test.go (line 177)
- Related Occurrence: Similar slicing in tests/e2e/operation/stream.go
tests/v2/e2e/kubernetes/resources.go (1)
1-408: No major issues observed. The concurrency approach with read-write locks is consistent, and error handling is clear. Keep an eye on unimplemented method calls to ensure they're handled gracefully.
tests/v2/e2e/config/config.go (3)
231-235: Potential index out-of-range when referencing Default.Search.Queries[0]. If Default.Search.Queries is ever empty, this code risks an index out-of-range panic. Consider adding a length check or providing a fallback when len(Default.Search.Queries) == 0.

338-340: Doc mismatch for LocalPort default behavior. The doc comment states: "if not set, it defaults to TargetPort," but the code assigns pf.LocalPort = localPort (8081) instead of using pf.TargetPort. Please reconcile the comment and logic to avoid confusion.

465-466: Risk of exposing sensitive configuration in logs. log.Debug(config.ToRawYaml(cfg)) can log potentially sensitive information. Consider removing or sanitizing this debug output to prevent unintentional leakage.
tests/v2/e2e/kubernetes/status.go (1)

484-495: Verify edge cases for multi-ingress assignments. While marking the service as available if any ingress is assigned, consider verifying multi-ingress and ephemeral hostnames. Some cloud providers delay or alter hostnames, potentially causing race conditions or incomplete provisioning states.
func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
	t.Helper()
	ns := map[string]struct{}{}
	for _, n := range neighbors {
		ns[strconv.Itoa(n)] = struct{}{}
	}

	for _, r := range resultIDs {
		if _, ok := ns[r]; ok {
			recall++
		}
	}

	return recall / float64(len(neighbors))
}
Handle potential division by zero in recall function
If neighbors is empty, the division by float64(len(neighbors)) at line 163 will panic. Consider adding a length check for neighbors before performing this calculation.
func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
t.Helper()
ns := map[string]struct{}{}
for _, n := range neighbors {
ns[strconv.Itoa(n)] = struct{}{}
}
for _, r := range resultIDs {
if _, ok := ns[r]; ok {
recall++
}
}
- return recall / float64(len(neighbors))
+ if len(neighbors) == 0 {
+ t.Errorf("neighbors is empty, avoiding division by zero")
+ return 0
+ }
+ return recall / float64(len(neighbors))
}
cdb35f5 to
b23885d
Compare
Actionable comments posted: 1
♻️ Duplicate comments (7)
tests/v2/e2e/crud/multi_test.go (2)
35-645: 🛠️ Refactor suggestion
Refactor test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases.
Here are some suggested improvements:
- Extract helper functions for common operations:
func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}
- Split the test into smaller test cases:
func TestE2EMultiInsert(t *testing.T) { /* ... */ }
func TestE2EMultiSearch(t *testing.T) { /* ... */ }
func TestE2EMultiSearchByID(t *testing.T) { /* ... */ }
func TestE2EMultiLinearSearch(t *testing.T) { /* ... */ }
func TestE2EMultiUpdate(t *testing.T) { /* ... */ }
func TestE2EMultiRemove(t *testing.T) { /* ... */ }
func TestE2EMultiUpsert(t *testing.T) { /* ... */ }
422-430: ⚠️ Potential issue
Avoid potential nil pointer dereference when GetObject fails. If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil.

Here's an example fix:

- if err != nil {
-	t.Errorf("failed to get object: %v", err)
- }
- t.Logf("id %s got object: %v", id, obj.String())
+ if err != nil {
+	t.Errorf("failed to get object: %v", err)
+	return nil
+ }
+ if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+ } else {
+	t.Errorf("failed to get object: object is nil")
+	return nil
+ }
tests/v2/e2e/crud/stream_test.go (5)
66-68: ⚠️ Potential issue
Fix array bounds check logic across all stream handlers. The condition should be idx >= len(datas) to correctly detect when the index is out of bounds.

-if len(datas) < idx {
+if idx >= len(datas) {
	return nil
}

Also applies to: 112-113, 174-176, 234-236, 296-298, 356-358, 391-393, 435-437, 474-476
114-115: ⚠️ Potential issue
Fix query array bounds check in stream search functions. The condition should be qidx >= len(cfg.Search.Queries) to correctly detect when all queries have been processed.

-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
	qidx = 0
	idx++
}

Also applies to: 177-178, 237-238, 299-300
160-161: 🛠️ Refactor suggestion
Fix error messages for different stream operations.
The error messages for non-insert operations incorrectly mention "insert" in the completion error messages.
-t.Errorf("failed to complete insert stream %v", err) +t.Errorf("failed to complete %s stream %v", "search", err)Update each error message to reference the correct operation (search, linearSearch, get, update, remove, upsert).
Also applies to: 223-224, 284-285, 345-346, 376-377, 420-421, 458-460, 502-504
431-432: ⚠️ Potential issue
Resolve suspicious slice range when removing vectors. Here, you use:

datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]

instead of

datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]

This likely removes a different slice than intended. Consider correcting the slice boundaries to avoid accidentally referencing the Update config.

- datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+ datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
34-540: 🛠️ Refactor suggestion
Extract common error handling into a utility function.
The error handling pattern is repeated throughout the test code. Consider extracting it into a helper function to reduce duplication.
func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}

Then use it throughout the code:

if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
}
🧹 Nitpick comments (13)
internal/net/grpc/errdetails/errdetails.go (1)
424-425: Consider enhancing error logging in AnyToErrorDetail. The error logging has been improved but could benefit from more specific context about the expected type and the actual data.

- log.Warnf("failed to Unmarshal type: %s, object %#v to JSON error: %v", typeName, a, err)
+ log.Warnf("failed to Unmarshal type: %s, typeURL: %s, object %#v to JSON error: %v",
+	typeName, a.GetTypeUrl(), a, err)
internal/params/option.go (1)
96-109: Clarify the filter logic in the function comment. The current comment for WithArgumentFilters is a bit ambiguous about how the filters are applied. It states that arguments not matching all filters will be added, but doesn't clearly explain the overall filtering process.

-// WithArgumentFilters returns Option that sets filters.
-// filters is a slice of functions that takes a string and returns a bool.
-// If the string not matched all filters (means filter returns false), it will be added to the arguments.
+// WithArgumentFilters returns Option that sets argument filters.
+// Each filter is a function that takes a string argument and returns a bool.
+// For an argument to be included in the final set, it must return false for ALL filters.
+// If ANY filter returns true for an argument, that argument will be excluded.
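To make the revised comment concrete, here is a hypothetical usage sketch; WithArgumentFilters and WithName come from this PR, while params.New and the import path are assumptions for illustration:

package crud_test

import (
	"strings"

	"github.com/vdaas/vald/internal/params"
)

// argParser shows the intended filter semantics: "-test.*" flags injected by
// `go test` return true from the filter and are excluded, while every other
// argument returns false from all filters and is kept.
var argParser = params.New(
	params.WithName("e2e"),
	params.WithArgumentFilters(func(arg string) bool {
		return strings.HasPrefix(arg, "-test.")
	}),
)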
20-20: Consider adding a package-level comment block or example usage.This file is well-structured, but adding a brief overview or usage example at the package level can help collaborators quickly understand how to use this client in E2E tests.
69-77: Unify or clarify error handling fallback logic.When
kubernetes.NewForConfigfails, the code triesinClusterConfigClient. This fallback mechanism appears in bothnewClientandNewClient. Consider refactoring to a single location, or explicitly logging the error chain to provide greater clarity.
35-36: Check possible typo in method name.
GetRESRConfig()might be intended asGetRESTConfig(). Renaming it would improve clarity and consistency with therest.Confignaming.- GetRESRConfig() *rest.Config + GetRESTConfig() *rest.Configinternal/params/params.go (3)
56-61: Consider grouping interdependent fields together or adding doc comments. New fields overrideDefault, name, filters, f, defaults, and ErrorHandler can be grouped logically or documented to clarify their roles. This helps maintain readability and avoids confusion about each field's purpose.

Also applies to: 71-71
130-130: Usage message triggers an immediate help flow. Invoking p.f.Usage() here printing usage is fine, but consider returning a more user-friendly error that references the needed flags or the default usage flow.
137-141: Restore method does not log or confirm the reset. Consider logging or returning an error if the default flag.CommandLine is nil, or if restoration is unexpected. This could help debug scenarios where the flags were not properly restored.
tests/v2/e2e/crud/unary_test.go (4)
629-634: Log message and error code mismatch.The test checks for
codes.AlreadyExistsbut logs “NotFound error.” This can confuse maintainers and misrepresent the test outcome:- t.Logf("received a NotFound error on #5: %s", err.Error()) + t.Logf("received an AlreadyExists error on #5: %s", err.Error())
639-639: Typo in log message.The word "succeess" is misspelled; use "success" for clearer communication.
- t.Log("starting test #6 run Update with SkipStrictExistCheck=true & 4 and check that it succeess") + t.Log("starting test #6 run Update with SkipStrictExistCheck=true & 4 and check that it succeeds")
51-85: Check errors returned by errgroup.Wait(). The calls to eg.Wait() are not capturing or checking the returned error. Consider verifying it to detect any failed goroutines.

- eg.Wait()
+ if err := eg.Wait(); err != nil {
+	t.Errorf("insert operation encountered errors: %v", err)
+ }
615-634: Improve clarity around expected and actual error codes. After the second update attempt, you expect a specific error code, but the log displays a generic message. Align error codes (e.g., codes.AlreadyExists or codes.NotFound) with consistent logging so test failures are less ambiguous.
tests/v2/e2e/kubernetes/resources.go (1)
300-326: Consider adding additional testing for ephemeral container methods. The methods UpdateEphemeralContainers and UpdateResize are delegated to PodExtendInterface implementations, but they are unimplemented in the base interface itself. It would be beneficial to add unit or integration tests ensuring these methods gracefully handle errors or are properly overridden.

Would you like me to open a new issue to outline the required tests and potential usage scenarios for these methods?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (5 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (27)
- .github/workflows/test-hack.yaml
- .github/workflows/format.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/helm.yaml
- .github/workflows/fossa.yaml
- internal/errors/errors.go
- .github/workflows/coverage.yaml
- .github/workflows/chatops.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/e2e-chaos.yaml
- .github/workflows/e2e-max-dim.yaml
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/e2e.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/_docker-image.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/reviewdog.yaml
- tests/v2/e2e/kubernetes/portforward.go
- internal/net/http/client/client.go
- internal/errors/k8s.go
⏰ Context from checks skipped due to timeout of 90000ms (136)
- GitHub Check: build / build
- GitHub Check: build / dump-contexts-to-log
🔇 Additional comments (19)
internal/net/grpc/errdetails/errdetails.go (5)
80-82: Appropriate new type for organizing error details. The new Details struct provides a clean way to encapsulate multiple Detail objects, making error handling more structured and consistent.
165-171: Great enhancement with default case for unknown message types. Adding a default case that uses protojson.Marshal improves the robustness of the error handling system by supporting any message type.
168-171: Validate error handling result in MarshalJSON. Currently, if marshaling Details fails, the function logs the error and returns nil, which may mask the underlying error cause for the caller.

Consider enhancing the error message to include the resulting byte content:

- if err != nil || body == nil {
-	log.Warnf("failed to Marshal type: %s, object %#v to JSON body %v, error: %v", typeName, d, body, err)
-	return nil, err
- }
+ if err != nil || body == nil {
+	log.Warnf("failed to Marshal type: %s, object %#v to JSON body %q, error: %v", typeName, d, body, err)
+	return nil, fmt.Errorf("marshal error for type %s: %w", typeName, err)
+ }
317-318: Improved serialization architecture. The code now properly handles multiple messages by serializing them as a structured Details object rather than directly marshaling a slice, which improves consistency and error handling.
416-421: Good fallback handling for unknown message types. Adding a default case to attempt unmarshaling any unknown types improves the robustness of the AnyToErrorDetail function.
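For reference, resolving an unknown anypb.Any into JSON can be done with the real google.golang.org/protobuf APIs; the wrapper function name here is hypothetical:

package errdetails

import (
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/types/known/anypb"
)

// anyToJSON resolves the concrete message behind an Any via its type URL,
// then renders it as canonical JSON.
func anyToJSON(a *anypb.Any) ([]byte, error) {
	msg, err := a.UnmarshalNew()
	if err != nil {
		return nil, err
	}
	return protojson.Marshal(msg)
}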
internal/params/option.go (3)
27-36: Good default options configuration. Setting base defaults including the executable name and disabling override is a sensible approach that improves usability.
38-45: Well-structured function with appropriate validation. The WithName function includes proper validation to ensure empty names are not set, following good defensive programming practices.
89-94: Clean implementation of configuration option. The WithOverrideDefault function is simple and effective, providing a clear way to control parser behavior.
internal/params/params.go (4)
23-24: Validate necessity of new imports. The newly imported regexp and slices should be verified to ensure they're actually required. If regexp isn't used, consider removing it to keep dependencies minimal.
31-36: Exported error handling constants look good. Defining ErrorHandling and the constants ContinueOnError, PanicOnError, and ExitOnError matches the default flag structure. This is a clean approach for customizing error handling.
80-84: Ensure consistent usage of command-line defaults override. When p.overrideDefault is true, Override() replaces flag.CommandLine. Ensure that this override behavior doesn't cause conflicts if multiple parser instances co-exist.
107-119: Confirm filtering logic correctness. You rely on p.filters to remove unwanted arguments. Double-check that no essential flags are filtered out. A potentially safer approach might be to identify and remove only known invalid flags.
tests/v2/e2e/crud/crud_test.go (2)
160-174: Add safeguard to avoid division by zero in recall calculation. If neighbors is empty, float64(len(neighbors)) is 0, causing a potential panic. A quick check prior to the division would help avoid errors.
191-241: Check for nil responses before calling res.String() in indexStatus. If the server returns an error and res is nil, calling res.String() will panic. Return early or conditionally log the response only when non-nil.
tests/v2/e2e/kubernetes/resources.go (1)
180-185: Well-structured concurrency with generics. The usage of sync.RWMutex for concurrent access to the Interface is clear and consistent, minimizing data race risks. This pattern is straightforward and maintainable.
tests/v2/e2e/config/config.go (3)
233-234: Potential index out-of-range when referencing Default.Search.Queries[0]. Accessing Default.Search.Queries[0] can panic if Default.Search.Queries is nil or empty. Please guard against this scenario by checking the length of Default.Search.Queries before indexing.

- sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+ var defaultTimeout time.Duration
+ if len(Default.Search.Queries) > 0 {
+	defaultTimeout = Default.Search.Queries[0].Timeout
+ } else {
+	defaultTimeout = 3 * time.Second
+ }
+ sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, defaultTimeout)
329-341: Doc mismatch for LocalPort defaulting behavior. The inline comment says, "The local port number; if not set, it defaults to TargetPort." However, the code sets pf.LocalPort to the constant localPort (8081), not pf.TargetPort. Please update the comment to match the actual logic or consider defaulting LocalPort to pf.TargetPort to reflect the doc accurately.

- // The local port number; if not set, it defaults to TargetPort.
+ // The local port number; if not set, it defaults to 8081.

Or change the assignment in code:

- if pf.LocalPort == 0 {
-	pf.LocalPort = localPort
- }
+ if pf.LocalPort == 0 {
+	pf.LocalPort = pf.TargetPort
+ }
466-466: Potential PII exposure in debug log. Dumping the entire configuration as raw YAML can expose sensitive data. Consider obfuscating or removing sensitive fields before logging.

- log.Debug(config.ToRawYaml(cfg))
+ // log.Debug(config.ToRawYaml(cfg)) // Comment out or sanitize sensitive fields.
tests/v2/e2e/kubernetes/status.go (1)
83-114: Validate ticker intervals and potential extended wait times. The WaitForStatus function uses a fixed 5-second ticker. In large clusters or more involved resource update scenarios, consider adjusting or making this interval configurable to avoid unnecessary overhead or extended wait times if resources frequently change states.
func (p *parser) Override() {
	if p.f != nil {
		flag.CommandLine = p.f
	}
}
🛠️ Refactor suggestion
Override method usage caution.
Overriding the global flag.CommandLine can be risky if multiple parsers or packages rely on it. Clearly document side effects so that future maintainers are prepared for changes in global state.
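If the override has to stay, a guarded sketch keeps the damage local by restoring the previous global; the helper name is hypothetical, while flag.CommandLine is the real stdlib variable:

package params

import "flag"

// withGlobalFlagSet temporarily swaps flag.CommandLine for fs, restoring the
// previous value even if fn panics.
func withGlobalFlagSet(fs *flag.FlagSet, fn func()) {
	prev := flag.CommandLine
	flag.CommandLine = fs
	defer func() { flag.CommandLine = prev }()
	fn()
}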
b23885d to
5ba68e7
Compare
Actionable comments posted: 3
♻️ Duplicate comments (3)
tests/v2/e2e/crud/multi_test.go (1)
422-430: ⚠️ Potential issue
Avoid potential nil pointer dereference when GetObject fails. If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil.

Here's an example fix:

if err != nil {
	t.Errorf("failed to get object: %v", err)
+	return nil
}
+if obj != nil {
	t.Logf("id %s got object: %v", id, obj.String())
+} else {
+	t.Errorf("failed to get object: object is nil")
+	return nil
+}
tests/v2/e2e/config/config.go (2)
107-112: ⚠️ Potential issue
Doc mismatch for local port default behavior. The inline comment says "if not set, it defaults to TargetPort," yet the code unconditionally assigns a hard-coded constant (8081) to both TargetPort and LocalPort if either is zero. This likely bypasses any user-provided TargetPort if set but not for LocalPort.

// If LocalPort is not set, default it to the same value as TargetPort.
if pf.LocalPort == 0 {
-	pf.LocalPort = localPort
+	pf.LocalPort = pf.TargetPort
}

Also applies to: 333-340
466-466: ⚠️ Potential issue
Potential PII exposure in debug log. Logging the entire configuration as raw YAML could leak sensitive or private items (e.g., tokens, environment variables). Consider removing, sanitizing, or conditionally enabling this debug output.

-log.Debug(config.ToRawYaml(cfg))
+// log.Debug(config.ToRawYaml(cfg)) // Remove or sanitize to prevent sensitive data leakage
🧹 Nitpick comments (10)
tests/v2/e2e/kubernetes/client.go (1)
35-36: Rename method to avoid confusion. The method name GetRESRConfig() appears to be a minor typo or misspelling. Consider renaming it to GetRESTConfig() to enhance clarity and maintain consistency with other K8s-related naming conventions.

- GetRESRConfig() *rest.Config
+ GetRESTConfig() *rest.Config

-func (c *client) GetRESRConfig() *rest.Config {
+func (c *client) GetRESTConfig() *rest.Config {

Also applies to: 105-106
internal/params/params.go (1)
79-82: Document the global flag override side effects. Changing flag.CommandLine can introduce unexpected behavior if multiple packages or goroutines rely on the default flag set. It's safer to fully document this side effect or implement a more isolated approach (e.g., using separate flag sets), especially in multi-threaded environments.

Also applies to: 141-145
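The isolated alternative mentioned above can be sketched with the standard library alone: parse into a dedicated FlagSet and never touch the global one (the flag names below are invented for illustration):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// A private FlagSet: nothing below mutates flag.CommandLine.
	fs := flag.NewFlagSet("e2e", flag.ContinueOnError)
	addr := fs.String("addr", "localhost:8081", "target address")
	if err := fs.Parse(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("addr:", *addr)
}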
tests/v2/e2e/crud/unary_test.go (2)
629-634: Check for inconsistent or misleading log messages. The code checks for codes.AlreadyExists but logs "received a NotFound error". This is confusing for readers. Consider updating the log message to match the actual error code.

- t.Logf("received a NotFound error on #5: %s", err.Error())
+ t.Logf("received an AlreadyExists error on #5: %s", err.Error())
639-639: Fix minor grammatical error in log message. The phrase "check that it succeess" contains a typo. Replace it with "check that it succeeds" for clarity.
22-40: Consider adding usage documentation for these interfaces.
Although the code is self-explanatory to an extent, providing brief GoDoc comments or usage examples for the various resource interfaces (e.g., ObjectInterface, ScaleInterface, PodExtendInterface) can help future contributors quickly understand how to implement them.
80-89: Clarify the relationship between Client and ObjectInterface.
ClientControlInterface and ResourceClient unify the separate concepts of a generic GVK-specific interface and a cluster-wide Client. Some doc comments explaining how these two components interact (e.g., how they share a namespace, how the user obtains them, etc.) could improve clarity and maintainability.
242-251: Recommend clarifying the error semantics in the Get(...) method.
When the Get call fails, it's often valuable to distinguish between "not found" and other errors for better upstream handling. Consider wrapping or checking for k8s.io/apimachinery/pkg/api/errors.IsNotFound(err) to provide a more descriptive return.
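A minimal sketch of that distinction, using the standard apimachinery helper; the wrapper name and shape are ours, not part of the PR:

```go
import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// getOrNil treats NotFound as "absent, not a failure" and wraps every
// other error with context. Hypothetical helper for illustration only.
func getOrNil[T any](ctx context.Context, get func(context.Context, string, metav1.GetOptions) (T, error), name string) (T, error) {
	obj, err := get(ctx, name, metav1.GetOptions{})
	if err != nil {
		var zero T
		if apierrors.IsNotFound(err) {
			return zero, nil
		}
		return zero, fmt.Errorf("get %q: %w", name, err)
	}
	return obj, nil
}
```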
373-383: Optional: Provide an interface for concurrency customization.
While the built-in sync.RWMutex usage is suitable for straightforward concurrency, some advanced scenarios may benefit from alternative concurrency approaches (e.g., fine-grained locking or concurrency contexts). Consider whether you want a pluggable concurrency strategy in the future.
tests/v2/e2e/kubernetes/status.go (2)
418-451: Clarify or unify Pod “Running but not ready” vs. “Terminating.”
Right now, a Pod with an ongoing container termination might also drop readiness but not show a "Terminating" status if the DeletionTimestamp is not set. This can lead to confusion if the container is shutting down but readiness fails. Consider explicitly evaluating the containers' Terminated states or readiness conditions to avoid ambiguity.
505-517: Add more context to Ingress statuses.
Beyond “Pending vs. Available,” some Ingress controllers add conditions or logs that can help debug issues (e.g., misconfigured backends). Exposing these details can be critical in diagnosing e2e test failures.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (38)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (5 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (27)
- .github/workflows/unit-test-rust.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/reviewdog.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/coverage.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/chatops.yaml
- .github/workflows/format.yaml
- .github/workflows/e2e-profiling.yaml
- internal/errors/errors.go
- .github/workflows/e2e-chaos.yaml
- .github/workflows/helm.yaml
- internal/net/http/client/client.go
- .github/workflows/e2e.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/codeql-analysis.yml
- tests/v2/e2e/kubernetes/portforward.go
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/fossa.yaml
- tests/v2/e2e/kubernetes/rollout.go
- internal/errors/k8s.go
🔇 Additional comments (19)
internal/net/grpc/errdetails/errdetails.go (7)
80-82: Good addition of a container type for error details.
The new Details struct provides a clean way to encapsulate multiple error details, with appropriate JSON/YAML tags and omitempty for efficient serialization.
93-94: Improved type name extraction for message handling.
Using strings.TrimPrefix for both prefixes at once is more concise and efficient than sequential operations.
165-167: Good addition of a default case for unknown message types.
Adding support for unknown message types with protojson.Marshal improves robustness when handling various error details.
168-171: Enhanced error logging with detailed context.
The improved error message now includes the type name and object details, making debugging easier when marshaling fails.
315-318: Validate the error handling result in Serialize.
Currently, if marshaling Details fails, the function falls back to fmt.Sprint(objs...), which may mask the underlying error cause. Consider logging or returning the error more explicitly if possible.
416-421: Good addition of a fallback unmarshal for unknown types.
This default case attempts to unmarshal any unknown types, making the function more robust when handling diverse message types.
424-424: Enhanced error reporting in AnyToErrorDetail.
Adding the type name to the error message provides more context for debugging failures.
internal/params/option.go (3)
28-36: Good default options setup for the new functionality.
defaultOptions now includes the program name from the command-line arguments and a default value for the new override behavior, providing sensible defaults.
89-94: Appropriate option function for overriding defaults.
This function follows the established pattern and provides a clear way to control whether the default CommandLine should be overridden.
96-109: Well-implemented argument filtering capability.
The function includes proper nil checks, initialization, and clear documentation explaining the filtering behavior. This adds flexibility for handling command-line arguments in tests and other specialized scenarios.
tests/v2/e2e/crud/multi_test.go (1)
35-645: Refactor the test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases.
Here are some suggested improvements:
- Extract helper functions for common operations:
```go
// data holds one []float32 vector per ID, hence the [][]float32 type.
func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}
```
- Split the test into smaller test cases:
```go
func TestE2EMultiInsert(t *testing.T)       { /* ... */ }
func TestE2EMultiSearch(t *testing.T)       { /* ... */ }
func TestE2EMultiSearchByID(t *testing.T)   { /* ... */ }
func TestE2EMultiLinearSearch(t *testing.T) { /* ... */ }
func TestE2EMultiUpdate(t *testing.T)       { /* ... */ }
func TestE2EMultiRemove(t *testing.T)       { /* ... */ }
func TestE2EMultiUpsert(t *testing.T)       { /* ... */ }
```
tests/v2/e2e/crud/stream_test.go (5)
34-540: Extract common error handling into a utility function.
The error handling pattern is repeated throughout the test code. Consider extracting it into a helper function to reduce duplication:
```go
func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}
```
Then use it throughout the code:
 if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
 }
66-68: Fix the array bounds check logic across all stream handlers.
The condition should be idx >= len(datas) to correctly detect when the index is out of bounds:
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
Also applies to: 112-113, 174-176, 234-236, 296-298, 356-358, 391-393, 435-437, 474-476
114-115: Fix the query array bounds check in the stream search functions.
The condition should be qidx >= len(cfg.Search.Queries) to correctly detect when all queries have been processed:
-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
 	qidx = 0
 	idx++
 }
Also applies to: 177-178, 237-238, 299-300
160-161: Fix the error messages for the different stream operations.
The error messages for non-insert operations incorrectly mention "insert" in the completion error messages:
-t.Errorf("failed to complete insert stream %v", err)
+t.Errorf("failed to complete %s stream %v", "search", err)
Update each error message to reference the correct operation (search, linearSearch, get, update, remove, upsert).
Also applies to: 223-224, 284-285, 345-346, 376-377, 420-421, 458-460, 502-504
431-432: Resolve the suspicious slice range when removing vectors.
Here, you use:
datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
instead of:
datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
This likely removes a different slice than intended. Correct the slice boundaries to avoid accidentally referencing the Update config:
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/kubernetes/resources.go (1)
300-312: Ensure PodExtendInterface calls are supported at runtime.
The code gracefully returns an unimplemented error if the underlying interface doesn't support ephemeral containers or resizing. Make sure you handle these unimplemented methods in any higher-level logic or tests, so the system doesn't silently fail.
tests/v2/e2e/config/config.go (1)
233-233: Prevent potential index out-of-range panic.
Referencing Default.Search.Queries[0] can panic if Default.Search.Queries is empty (e.g., if a future config version sets an empty slice). Provide a safe fallback:
-sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+var defaultTimeout time.Duration
+if len(Default.Search.Queries) > 0 {
+	defaultTimeout = Default.Search.Queries[0].Timeout
+} else {
+	defaultTimeout = time.Second * 3
+}
+sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, defaultTimeout)
tests/v2/e2e/kubernetes/status.go (1)
86-90: Safety check for possible never-matched statuses.
You are returning errors.ErrStatusPatternNeverMatched if none of the resource's possible statuses match one of the requested ones. Confirm that you have accounted for all valid states, especially if new statuses get added in the future.
```go
eg.Go(safety.RecoverFunc(func() error {
	res, err := client.Insert(ctx, &payload.Insert_Request{
		Vector: &payload.Object_Vector{
			Id:        id,
			Vector:    vec,
			Timestamp: ts,
		},
		Config: &payload.Insert_Config{
			Timestamp:            ts,
			SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
		},
	})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
		} else {
			t.Errorf("failed to insert vector: %v", err)
		}
	}
	t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
	return nil
}))
```
Guard against potential nil-pointer dereferences.
Throughout the test (for example, lines 81, 129, 173, etc.), the code logs res.String() after encountering an error, without guaranteeing that res is non-nil. This can lead to a panic if the server returns an error without a valid response.
Below is a sample fix—apply it wherever you log res.String() immediately after an error:
- t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
+ if err == nil && res != nil {
+ t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
+ }
Also applies to: 129-130, 173-174, 216-217, 259-260, 281-283, 293-294, 307-308, 341-342, 371-372, 405-406, 431-432, 442-443
```go
	res, err := client.IndexInfo(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexInfo %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexInfo %v", err)
		}
	}
	t.Logf("IndexInfo: %v", res.String())
}
```
🛠️ Refactor suggestion
Avoid nil-pointer dereferences when logging res.String().
Each block logs res.String() even if an error occurred and could have led to a nil res. To prevent panics, either return early or only log res.String() if err is nil and res is non-nil.
res, err := client.IndexInfo(ctx, &payload.Empty{})
if err != nil {
...
+ return
}
-if res != nil {
- t.Logf("IndexInfo: %v", res.String())
-}
+t.Logf("IndexInfo: %v", res.String())(Apply similar guards to other index calls: IndexDetail, IndexStatistics, and IndexStatisticsDetail.)
Also applies to: 206-216, 218-228, 230-240
```go
type baseClient[T Object, L ObjectList, C NamedObject] struct {
	Interface ObjectInterface[T, L, C]
	Client    Client
	Namespace string
	mu        sync.RWMutex
}
```
Potential nil-pointer panic in concurrency logic.
You’re using b.mu.RLock() at the beginning of each method and then checking for b == nil. If b is actually nil, calling its method to lock the mutex would panic before reaching the b == nil check. Consider adding an explicit check before locking if you truly need to handle a nil baseClient.
func (b *baseClient[T, L, C]) Create(
ctx context.Context, resource T, opts metav1.CreateOptions,
) (t T, err error) {
- b.mu.RLock()
- defer b.mu.RUnlock()
- if b == nil || b.Interface == nil {
+ if b == nil {
+ return t, errors.ErrKubernetesClientNotFound
+ }
+ b.mu.RLock()
+ defer b.mu.RUnlock()
+ if b.Interface == nil {
return t, errors.ErrKubernetesClientNotFound
}
return b.Interface.Create(ctx, resource, opts)
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| type baseClient[T Object, L ObjectList, C NamedObject] struct { | |
| Interface ObjectInterface[T, L, C] | |
| Client Client | |
| Namespace string | |
| mu sync.RWMutex | |
| } | |
| func (b *baseClient[T, L, C]) Create( | |
| ctx context.Context, resource T, opts metav1.CreateOptions, | |
| ) (t T, err error) { | |
| if b == nil { | |
| return t, errors.ErrKubernetesClientNotFound | |
| } | |
| b.mu.RLock() | |
| defer b.mu.RUnlock() | |
| if b.Interface == nil { | |
| return t, errors.ErrKubernetesClientNotFound | |
| } | |
| return b.Interface.Create(ctx, resource, opts) | |
| } |
Force-pushed from 9dbc04d to e9d5aef.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 3
♻️ Duplicate comments (2)
tests/v2/e2e/crud/stream_test.go (1)
431-432: ⚠️ Potential issue: Correct the slice range for the remove operation.
Using cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num is likely a copy-paste error. You should reference the remove config for the entire slice range to remove the intended data.
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/crud/unary_test.go (1)
55-84: 🛠️ Refactor suggestion: Consider capturing loop variables.
When iterating over ds.Train, you're defining id and ts outside the concurrency function, which is good. However, to fully avoid stale i and vec values if the loop's state changes before the goroutine executes, consider capturing them in local copies:
 for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
-	id := strconv.Itoa(i)
-	ts := cfg.Insert.Timestamp
+	iCopy, vecCopy := i, vec
+	id := strconv.Itoa(iCopy)
+	ts := cfg.Insert.Timestamp
 	if ts == 0 {
 		ts = timestamp
 	}
 	eg.Go(safety.RecoverFunc(func() error {
-		res, err := client.Insert(ctx, &payload.Insert_Request{ /* ... */ })
+		res, err := client.Insert(ctx, &payload.Insert_Request{
+			Vector: &payload.Object_Vector{
+				Id:        id,
+				Vector:    vecCopy,
+				Timestamp: ts,
+			},
+			/* ... */
+		})
 		// ...
 	}))
 }
🧹 Nitpick comments (20)
tests/v2/e2e/crud/multi_test.go (2)
51-104: Consider implementing timeout handling for the insert operations.
The current implementation runs insert operations concurrently but has no explicit timeout handling. For long-running E2E tests, it's important to ensure operations don't hang indefinitely.
Implement timeout handling by using a context with a timeout:
-eg, _ := errgroup.New(ctx)
+ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
+defer cancel()
+eg, _ := errgroup.New(ctx)
And check for context cancellation in your goroutines:
 eg.Go(safety.RecoverFunc(func() error {
+	if ctx.Err() != nil {
+		return ctx.Err()
+	}
 	res, err := client.MultiInsert(ctx, req)
 	// rest of the code
 }))
610-629: Add error validation for the timestamp removal operation.
The test logs the result of the removal operation but doesn't validate that the expected data was actually removed. Consider adding validation by checking that the vectors marked for removal are no longer retrievable:
```go
// After RemoveByTimestamp, validate removal
for i := range ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num] {
	id := strconv.Itoa(i)
	exists, err := client.Exists(ctx, &payload.Object_ID{Id: id})
	if err != nil {
		t.Errorf("Error checking existence after removal: %v", err)
		continue
	}
	if exists.GetExists() {
		t.Errorf("ID %s should have been removed but still exists", id)
	}
}
```
tests/v2/e2e/crud/stream_test.go (4)
347-373: Enhance logging for object retrieval.
Currently, only the vector is logged upon retrieval. Consider logging relevant metadata (like timestamps) or partial vector contents to aid debugging, especially when verifying correctness in a large E2E test.
378-420: Handle update timestamp constraints carefully.
If the update operation has concurrency requirements or version checks in the backend, ensure the test reflects real-world usage (like verifying current timestamps or preventing outdated updates). This test currently just sets a static timestamp.
505-524: Improve validation for the RemoveByTimestamp operation.
After removing items older than a certain timestamp, it's good practice to confirm that no stale data remains. Consider adding an additional retrieval check or status verification.
526-540: Add a verification step after flush.
You might want to confirm that the index or storage is truly empty (or in the expected state) post-flush. A quick retrieval or search test here can ensure flush integrity; see the sketch below.
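For instance, a minimal sketch of such a check, assuming the IndexInfo response exposes a Stored count the way these tests otherwise read index state (the GetStored accessor is an assumption):

```go
res, err := client.IndexInfo(ctx, &payload.Empty{})
if err != nil {
	t.Fatalf("failed to get IndexInfo after flush: %v", err)
}
// Assumption: a completed flush leaves zero stored vectors behind.
if res.GetStored() != 0 {
	t.Errorf("flush left %d vectors stored", res.GetStored())
}
```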
tests/v2/e2e/kubernetes/client.go (2)
93-109: Consolidate the error handling in newClient & inClusterConfigClient.
Currently, if kubernetes.NewForConfig(cfg) fails, we try inClusterConfigClient(). Consider centralizing this fallback logic in one place to minimize duplication and keep error handling consistent.
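One way to centralize that fallback, sketched with the standard client-go helpers (the function name is ours, not the package's):

```go
import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// resolveConfig prefers an explicit kubeconfig and falls back to the
// in-cluster config exactly once, in a single place.
func resolveConfig(kubeconfig string) (*rest.Config, error) {
	if kubeconfig != "" {
		if cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig); err == nil {
			return cfg, nil
		}
	}
	return rest.InClusterConfig()
}
```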
118-124: Consider the concurrency safety of client usage.
While standard Kubernetes clientsets are generally safe for concurrent use, ensure the rest of the struct fields (manager, kclient) do not introduce concurrency issues. Document concurrency assumptions or usage patterns if multiple goroutines will share this client.
internal/params/params.go (3)
53-70: Document the side effects of global state overrides.
Overriding flag.CommandLine can affect other packages that rely on the default flag set. Document the side effects in code comments and ensure it's safe before enabling overrideDefault = true.
105-115: Ensure the filters handle all edge cases.
Filtering out command-line arguments via p.filters is powerful but can mask user inputs. For maintainability, log or comment on which arguments get filtered out so users can debug why certain flags are removed.
135-145: Use caution with the Restore & Override methods.
Switching between default and custom flag sets at runtime can be confusing if multiple test suites or different modules rely on different flag sets. Consider scoping these overrides to the test context or adding explicit usage instructions.
tests/v2/e2e/crud/unary_test.go (2)
180-221: Consider subtests for complex flows.
The test flows (search, searchByID, linearSearch, etc.) can be further segmented into subtests using t.Run(...). This enables targeted runs and clearer logs for each operation type.
448-752: Add documentation for each new scenario.
This file tests various scenarios with SkipStrictExistCheck. Document the purpose of each scenario (#1 through #6) and the expected behavior more explicitly, so future maintainers can quickly grasp the test objectives.
tests/v2/e2e/kubernetes/resources.go (3)
187-196: Add inline comments on the error return.
Currently, you return errors.ErrKubernetesClientNotFound but do not log the missing client. Adding a trace log or context describing which resource was attempted can facilitate easier debugging.
300-312: Return early for unimplemented PodExtendInterface methods.
Instead of returning nil, errors.ErrUnimplemented(...) only when the type assertion fails, consider capturing the error earlier to keep the logic flowing forward. This is a style preference but can make the code more consistent.
373-407: Thread-safe accessors for Interface & Client.
The getter and setter pairs use the b.mu locks in a consistent manner, which is good. However, add clarifying doc comments stating that the locks ensure concurrent reads/writes are safe, so future maintainers understand the rationale.
tests/v2/e2e/config/config.go (2)
183-188: Consider more robust metadata parsing.
Splitting MetaString by commas may be error-prone for use cases that embed commas in values. If the metadata can contain complex strings, a more robust parser (e.g., semicolon-delimited or a KV parser) might be safer; see the sketch below.
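A sketch of what a safer format could look like; the delimiter choice and helper name are assumptions, not the config package's API:

```go
import "strings"

// parseMetadata accepts "k1=v1;k2=v2" so that values may safely
// contain commas; malformed entries are skipped.
func parseMetadata(s string) map[string]string {
	meta := make(map[string]string)
	for _, kv := range strings.Split(s, ";") {
		k, v, ok := strings.Cut(kv, "=")
		if !ok {
			continue
		}
		meta[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return meta
}
```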
445-456: Suggestion: Improve the error handling in Load.
When reading the config file fails, you merge with Default and continue silently. Consider logging a warning or returning a more descriptive error so users realize the primary file was not loaded successfully.
tests/v2/e2e/kubernetes/status.go (2)
83-114: Use a configurable or adaptive wait interval.
WaitForStatus uses a fixed 5-second ticker. Providing a configurable polling interval or adopting a backoff strategy could enhance flexibility and reduce unnecessary polling during lengthy operations.
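For example, a capped exponential backoff loop could replace the fixed ticker. This is a sketch under an assumed condition-function shape, not the package's current signature:

```go
import (
	"context"
	"time"
)

// waitWithBackoff polls check until it reports done, fails, or ctx ends,
// doubling the delay between attempts up to maxDelay.
func waitWithBackoff(ctx context.Context, check func(context.Context) (bool, error)) error {
	delay, maxDelay := time.Second, 30*time.Second
	for {
		done, err := check(ctx)
		if err != nil || done {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```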
152-175: Consider reducing the size of the switch in CheckResourceState.
A large switch statement can become unwieldy. Group related logic or use a polymorphic approach (e.g., separate resource-specific evaluators in a map). This can simplify readability and maintainability as new resource types are added.
🛑 Comments failed to post (3)
tests/v2/e2e/crud/stream_test.go (3)
160-161: 🛠️ Refactor suggestion
Correct error messages for each streaming operation.
Each of these error messages logs "failed to complete insert stream" even though they occur within search or other operations (e.g., Search, SearchByID, LinearSearch). This can cause confusion during debugging:
-t.Errorf("failed to complete insert stream %v", err)
+t.Errorf("failed to complete search stream %v", err)
Make sure to use the appropriate operation name (search, search by ID, linear search, etc.) for each error message.
Also applies to: 222-223, 283-284, 344-345
66-70: 🛠️ Refactor suggestion
Unify slice-bound checks across all streaming functions.
Multiple places use if len(datas) < idx instead of if idx >= len(datas). This inconsistency can lead to off-by-one errors. Refactor for consistency and correctness:
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
Also applies to: 111-115, 173-177, 233-237, 295-299, 355-358, 391-393, 435-437, 473-475
63-71:
⚠️ Potential issue: Fix the off-by-one condition for streaming insert.
Here, the condition if len(datas) < idx at line 66 checks the wrong boundary: when idx == len(datas), the callback proceeds and reads datas[idx], causing an index-out-of-range panic.
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
The corrected callback in context:
```go
datas := ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num]
err = grpc.BidirectionalStreamClient(stream, func() *payload.Insert_Request {
	id := strconv.Itoa(idx)
	if idx >= len(datas) {
		return nil
	}
	vec := datas[idx]
	idx++
	return &payload.Insert_Request{
		// additional fields here
	}
})
```
Actionable comments posted: 5
♻️ Duplicate comments (20)
tests/v2/e2e/crud/multi_test.go (9)
110-185: ⚠️ Potential issue: Reuse the same concurrency & nil-check approach.
As in the multi-insert block, ensure that goroutine errors are properly captured via eg.Wait() and that you verify res is non-nil before invoking methods on it.
187-262: ⚠️ Potential issue: Concurrency and nil response handling.
The same concerns apply: safely handle errors from eg.Wait() and confirm res is non-nil before accessing res.GetResponses().
264-340: ⚠️ Potential issue: Concurrency and response nil check.
Continue to ensure aggregated error handling and guard against a nil res.
341-417: ⚠️ Potential issue: Maintain consistent error handling checks for multiSearchByID.
449-503: ⚠️ Potential issue: Concurrency & result nil checks for multiUpdate.
504-554: ⚠️ Potential issue: Concurrency & result nil checks for multiRemove.
555-609: ⚠️ Potential issue: Concurrency & result nil checks for multiUpsert.
631-645: ⚠️ Potential issue: Confirm flush success & handle the eg.Wait() error if used concurrently.
418-448: ⚠️ Potential issue: Verify the nil-object scenario in GetObject results.
If client.GetObject fails, obj might be nil. Logging obj.String() could cause a panic if obj == nil. Consider short-circuiting when err != nil:
 if err != nil {
 	t.Errorf("failed to get object: %v", err)
+	return nil
 }
 if obj != nil {
 	t.Logf("id %s got object: %v", id, obj.String())
 } else {
 	t.Errorf("object is nil")
 }
tests/v2/e2e/crud/stream_test.go (6)
110-113: ⚠️ Potential issue: Off-by-one array bound check.
174-175: ⚠️ Potential issue: Off-by-one array bound check.
234-236: ⚠️ Potential issue: Off-by-one array bound check.
296-298: ⚠️ Potential issue: Off-by-one array bound check.
356-358: ⚠️ Potential issue: Off-by-one array bound check.
422-431: ⚠️ Potential issue: Potential mismatch in the remove slice range.
Line 431 uses ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num] instead of referencing cfg.Remove. This can remove a slice of data different from what's intended.
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/crud/crud_test.go (2)
163-177: ⚠️ Potential issue: Prevent division by zero in the recall function.
Currently, the function unconditionally performs a division by len(neighbors) at line 176. If neighbors is empty, this will cause a runtime panic. You can safely guard against an empty slice. For example:
 func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
 	t.Helper()
 	ns := map[string]struct{}{}
 	for _, n := range neighbors {
 		ns[strconv.Itoa(n)] = struct{}{}
 	}
 	for _, r := range resultIDs {
 		if _, ok := ns[r]; ok {
 			recall++
 		}
 	}
-	return recall / float64(len(neighbors))
+	if len(neighbors) == 0 {
+		t.Errorf("neighbors is empty, avoiding division by zero")
+		return 0
+	}
+	return recall / float64(len(neighbors))
 }
194-243: ⚠️ Potential issue: Avoid nil-pointer panics when logging res.String().
Inside indexStatus, each block logs res.String() even when an error has occurred and res may be nil. This can lead to a panic. A safer approach is to log res.String() only if err is nil and res is non-nil. For example:
 res, err := client.IndexInfo(ctx, &payload.Empty{})
 if err != nil {
 	...
+	return
 }
-t.Logf("IndexInfo: %v", res.String())
+if res != nil {
+	t.Logf("IndexInfo: %v", res.String())
+}
Apply the same pattern where other index calls are made (e.g., IndexDetail, IndexStatistics, IndexStatisticsDetail).
tests/v2/e2e/kubernetes/resources.go (1)
187-371: ⚠️ Potential issue: Locking before the b == nil check can lead to panics.
Each method acquires b.mu.RLock() or b.mu.Lock() and then checks if b == nil. If b is actually nil, the program will panic upon calling b.mu.RLock(). A possible fix is to check b != nil before locking:
 func (b *baseClient[T, L, C]) Create(
 	ctx context.Context, resource T, opts metav1.CreateOptions,
 ) (t T, err error) {
-	b.mu.RLock()
-	defer b.mu.RUnlock()
-	if b == nil || b.Interface == nil {
+	if b == nil {
+		return t, errors.ErrKubernetesClientNotFound
+	}
+	b.mu.RLock()
+	defer b.mu.RUnlock()
+	if b.Interface == nil {
 		return t, errors.ErrKubernetesClientNotFound
 	}
 	return b.Interface.Create(ctx, resource, opts)
 }
Apply the same pattern to Update, UpdateStatus, Delete, etc., to ensure thread-safety without risking a panic.
tests/v2/e2e/config/config.go (2)
233-233: ⚠️ Potential issue: Avoid referencing Default.Search.Queries[0] when Queries might be empty.
Accessing Default.Search.Queries[0] at line 233 can cause an out-of-range panic if Default.Search.Queries is empty. Consider a safe fallback:
 sq.TimeoutString = config.GetActualValue(sq.TimeoutString)
-sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+if len(Default.Search.Queries) > 0 {
+	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+} else {
+	sq.Timeout = time.Second * 3
+}
466-466: 🛠️ Refactor suggestion: Potential exposure of sensitive fields in debug logs.
Line 466 logs the entire configuration as YAML, which may contain private or sensitive data such as metadata fields.
Consider redacting or removing sensitive fields before logging, or avoid logging the entire config structure altogether, for example:
-log.Debug(config.ToRawYaml(cfg))
+// log.Debug(config.ToRawYaml(cfg)) // Avoid logging full config to prevent sensitive data exposure
🧹 Nitpick comments (17)
internal/params/option.go (1)
38-45: Consider handling empty name scenarios.
WithName only sets p.name if the provided argument is non-empty. Confirm that an empty name won't cause unexpected behavior elsewhere in the code.
tests/v2/e2e/kubernetes/client.go (2)
65-79: Warn about ignoring currentContext if the in-cluster fallback is triggered.
When clientcmd.NewNonInteractiveDeferredLoadingClientConfig fails, you revert to inClusterConfigClient. If an environment is incorrectly set up, you might silently fall back. Users might expect an explicit error.
93-108: Confirm the naming of GetRESRConfig().
It may be a typo for GetRESTConfig(). If so, consider renaming for clarity. Otherwise, the code is consistent, but the function name might cause confusion.
-func (c *client) GetRESRConfig() *rest.Config {
+func (c *client) GetRESTConfig() *rest.Config {
 	return c.rest
 }
tests/v2/e2e/kubernetes/portforward.go (3)
19-21: Validate the package naming and add a file-level docstring.
The package is declared at line 20 without a preceding file-level comment explaining the rationale or usage of this specific port-forward functionality within the kubernetes package. Consider adding a brief docstring to clarify the purpose and usage context for future maintainers.
51-63: Clarify the pod readiness logic.
Currently, the function retrieves the pod and checks status != StatusAvailable before erroring out. In a real-world scenario, pods can have transitional phases (e.g., ContainerCreating). Consider adding a retry mechanism or waiting for the pod to transition to a running state to avoid flaky scenarios.
84-94: Consider custom I/O streams or logs for port-forward output.
By default, writing port-forward logs to os.Stdout and os.Stderr can clutter the console or CI logs. Providing an option to redirect these streams to more controlled logging or to dev/null might improve test readability and reduce noise in large test runs. See the sketch below.
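The client-go forwarder already accepts arbitrary writers, so only the last two arguments need to change; a sketch in which the dialer and channels come from the caller:

```go
import (
	"io"
	"os"

	"k8s.io/apimachinery/pkg/util/httpstream"
	"k8s.io/client-go/tools/portforward"
)

// newQuietForwarder discards informational output while errors still
// reach stderr. Hypothetical wrapper around the client-go API.
func newQuietForwarder(dialer httpstream.Dialer, ports []string, stopCh <-chan struct{}, readyCh chan struct{}) (*portforward.PortForwarder, error) {
	return portforward.New(dialer, ports, stopCh, readyCh, io.Discard, os.Stderr)
}
```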
internal/params/params.go (3)
54-59: Document the new fields in parser.
The fields overrideDefault, name, filters, f, and defaults are added with no direct documentation. Adding comments explaining their usage and impact (especially how overrideDefault interacts with the global flag set) can improve maintainability.
69-69: Consider clarifying ErrorHandler.
ErrorHandler references the standard library's flag.ErrorHandling constants. Provide inline docs or usage examples to clarify when to use ContinueOnError, ExitOnError, or PanicOnError in your parser logic.
78-82: Preserve the default flag.CommandLine only when needed.
Storing p.defaults = flag.CommandLine (line 78) can be convenient, but some users might expect to rely on the original global flags beyond the parser's lifecycle. Clarify or rename to indicate the consequences (i.e., "originalFlagSet").
tests/v2/e2e/crud/unary_test.go (5)
35-35: Capture loop variables in goroutines.
When you call eg.Go in loops (e.g., lines 56, 97, 139, etc.), you rely on the i and vec closures. Although you're not mutating them further, capturing them by value (copying into iCopy, vecCopy := i, vec) is safer to prevent any unintended concurrency issues if the loop or external variables change in the future.
94-130: Use subtests for varied search queries.
Within the nested loops, each search query is tested with concurrency. Using t.Run(queryName, func(t *testing.T) { ... }) can clarify the test scope, facilitate parallel subtest execution, and produce more granular results in case of failures.
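A minimal sketch of that structure, assuming the package-level cfg used elsewhere in these tests:

```go
for i, query := range cfg.Search.Queries {
	query := query // capture the loop variable for the closure
	t.Run(fmt.Sprintf("query-%d", i), func(t *testing.T) {
		t.Parallel()
		_ = query // issue the search with this query and assert on recall here
	})
}
```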
223-264: Refactor the repeated search logic.
SearchByID and LinearSearchByID share similar logic for concurrency and config. Extracting a helper function to handle the repeated steps (building requests, logging results, error checks) could improve readability and reduce duplication.
411-430: More robust verification after RemoveByTimestamp.
You remove vectors older than an hour. Consider verifying that these vectors were indeed removed, or that the total count of available vectors is as expected, to confirm the correctness of the remove operation.
606-634: Mismatch in an error message.
Line 633 logs "received a NotFound error on #5," while the actual check at line 629 expects codes.AlreadyExists. If the code is correct, update the log message to reflect the correct status code for clarity.
tests/v2/e2e/kubernetes/status.go (3)
76-114: Consider adding a configurable retry interval or a maximum wait duration.
The current polling interval is hard-coded to 5 seconds, and there's no built-in maximum wait. This could lead to indefinite waiting if a resource never transitions (e.g., after a transient K8s error). A more flexible approach could make E2E tests more resilient:
 func WaitForStatus[T Object, L ObjectList, C NamedObject, I ObjectInterface[T, L, C]](
 	ctx context.Context, client I, name string, statuses ...ResourceStatus,
 ) (obj T, matched bool, err error) {
-	ticker := time.NewTicker(5 * time.Second)
+	interval := 5 * time.Second // Make it configurable or pass as a parameter
+	ticker := time.NewTicker(interval)
 	defer ticker.Stop()
 	...
 }
177-401: Repeated evaluation logic across resources.
Multiple evaluate* functions encapsulate almost identical checks. While that is understandable for clarity, you might reduce duplication by factoring out common patterns (e.g., checking ObservedGeneration vs. Generation) into shared helper functions:
-func evaluateDeployment(...) {...}
-func evaluateStatefulSet(...) {...}
-...
+// Possibly unify repeated checks into shared helper(s):
+func checkObservedGeneration(gen, observedGen int64, details *string) (bool, ResourceStatus) {
+	// ...
+}
402-452: Good handling of Pod readiness, but watchers may improve real-time feedback.
Currently, you poll for phase changes. For large clusters or time-sensitive e2e tests, consider leveraging Pod watchers to get immediate updates. However, for simpler or smaller-scale testing, your approach is sufficient.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (39)
- .github/actions/setup-k3d/action.yaml (1 hunks)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (5 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (28)
- .github/workflows/update-deps.yaml
- .github/workflows/format.yaml
- .github/workflows/fossa.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/coverage.yaml
- .github/workflows/chatops.yaml
- .github/workflows/helm.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/codeql-analysis.yml
- internal/net/grpc/errdetails/errdetails.go
- .github/actions/setup-k3d/action.yaml
- internal/errors/errors.go
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/e2e.yaml
- .github/workflows/e2e-chaos.yaml
- internal/errors/k8s.go
- .github/workflows/build-protobuf.yaml
- .github/workflows/unit-test-rust.yaml
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/reviewdog-k8s.yaml
- internal/net/http/client/client.go
- .github/workflows/reviewdog.yaml
🔇 Additional comments (20)
internal/params/option.go (5)
20-23: No issues with the new imports.
The imported packages os and path/filepath are appropriate for the added functionality.
28-28: Confirm the default application name logic.
Deriving the default name from filepath.Base(os.Args[0]) is valid. Ensure that this behavior is desired in all runtime contexts, including when the executable name may be different than expected.
35-35: Clarify the "overrideDefault" usage.
By setting WithOverrideDefault(false) here, the default is never overridden. Confirm that this aligns with the intended usage downstream, especially if you plan to enable the override behavior later.
89-94: Check alignment with the doc comment.
The doc comment suggests this option overrides flag.CommandLine, but the code simply sets p.overrideDefault. Verify that the actual usage in params.Parse() or related code will respect this flag.
96-109: Validate the filter usage order.
WithArgumentFilters appends filters while preserving their order. If filter execution order is relevant, ensure you document it. Otherwise, the current approach looks good.
tests/v2/e2e/crud/multi_test.go (2)
35-49: Check the error handling after IndexProperty retrieval.
If err is non-nil, a log is printed, but the test proceeds. Confirm that continuing the test in a broken state doesn't hide downstream issues.
610-629: Consider a safer approach to RemoveByTimestamp.
While removing older data, verify that the timestamp logic matches the desired lower bound or equality. Currently it's set to payload.Remove_Timestamp_Le, which removes all data at or older than rts.
66-541:⚠️ Potential issue** Concurrency and nil pointer concerns.**
Similar to the multi-file patterns, each streaming block processes responses but does not safeguard if
resis nil after an error, nor do we robustly capture or handle errors from the stream. A short-circuit or aggregated error approach is recommended.tests/v2/e2e/kubernetes/client.go (1)
48-61: Check fallback approach on missing kubeConfig.If no config is found, you default to in-cluster config. Confirm that skipping local dev config is intended in all cases. Consider logging a warning or info message.
tests/v2/e2e/kubernetes/portforward.go (2)
65-75: Check for potential nil or invalid transport.Right after creating the transport and upgrader (
spdy.RoundTripperFor), it’s set on line 69 tohc.Transport. Iftransportis nil due to a configuration or environment issue, you could encounter an unexpected panic on subsequent usage. Adding a short nil-check can prevent runtime errors.
98-106: Graceful goroutine exit and error propagation.While the goroutine closes
echafterForwardPorts()returns an error, consider capturing all relevant error details (e.g., partial port bindings) and verifying if partial successes or ephemeral errors occurred. This can help with diagnosing network flakes or ephemeral environment issues.internal/params/params.go (2)
23-23: Confirm Go version compatibility forslices.DeleteFunc.
slices.DeleteFuncis introduced in Go 1.21. Ensure your CI environment and module constraints support Go 1.21+; otherwise, this call will fail.
135-145: Caution with global override.
Restore()andOverride()change the globalflag.CommandLine. This can cause unexpected behavior in other packages. A doc block or usage warning is strongly recommended to prevent confusion in larger codebases.tests/v2/e2e/crud/unary_test.go (2)
266-308: Guard against nilobjandexists.Similar to earlier lines, always confirm
obj != nilandexists != nilbefore loggingobj.String()orexists.String(). These calls could panic otherwise, especially whenerr != nil.
639-670: Check concurrency in Update with SkipStrictExistCheck.Multiple updates run in parallel with skip checks at line 657. If this is intended to test concurrency handling, ensure no data race occurs from external state or shared data (like
ds.Train). If concurrency is not required, consider sequential updates for clarity.tests/v2/e2e/kubernetes/status.go (5)
1-18: File header and licensing look good.
No issues spotted with the build tags or license headers. Everything appears consistent with project conventions.
19-74: Enum usage is clear, but consider verifying overlap and clarity of statuses.
Defining an extensive list of resource statuses is helpful for granular reporting. Just confirm that these cover all real-world states you intend to track. Ensure you consistently document or handle each status (e.g., "LoadBalancing") in the codebase and external docs for clarity.
116-144: Reflection-based approach works, but verify cost vs. convenience.
Usingany(obj).(type)helps handle multiple resource types in a single function. For performance-critical code, a dedicated typed approach might be faster, though for E2E tests, this is likely fine. Just ensure that new resource types are added to this function to avoid a silent fallback toStatusUnknown.
146-175: CheckResourceState method usage is straightforward.
The method properly delegates the evaluation to resource-specific functions. Be mindful that adding new statuses or resources requires updating both this function andPossibleStatuses.
453-517: No major issues across PVC, Service, and Ingress checks.
These functions correctly handle K8s resource fields and states. The fallback toStatusUnknownfor unrecognized states is sensible. Overall, this final portion neatly rounds out the e2e coverage.
```go
eg, _ := errgroup.New(ctx)
eg.SetLimit(int(cfg.Insert.Concurrency))
mireq := &payload.Insert_MultiRequest{
	Requests: make([]*payload.Insert_Request, 0, cfg.Insert.BulkSize),
}
for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
	id := strconv.Itoa(i)
	ts := cfg.Insert.Timestamp
	if ts == 0 {
		ts = timestamp
	}
	mireq.Requests = append(mireq.Requests, &payload.Insert_Request{
		Vector: &payload.Object_Vector{
			Id:        id,
			Vector:    vec,
			Timestamp: ts,
		},
		Config: &payload.Insert_Config{
			Timestamp:            ts,
			SkipStrictExistCheck: cfg.Insert.SkipStrictExistCheck,
		},
	})
	if len(mireq.GetRequests()) >= cfg.Insert.BulkSize {
		req := mireq.CloneVT()
		mireq.Reset()
		eg.Go(safety.RecoverFunc(func() error {
			res, err := client.MultiInsert(ctx, req)
			if err != nil {
				st, ok := status.FromError(err)
				if ok && st != nil {
					t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
				} else {
					t.Errorf("failed to insert vector: %v", err)
				}
			}
			t.Logf("vectors %s inserted %s", req.String(), res.String())
			return nil
		}))
	}
}
eg.Go(safety.RecoverFunc(func() error {
	res, err := client.MultiInsert(ctx, mireq)
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to insert vector: %v, status: %s", err, st.String())
		} else {
			t.Errorf("failed to insert vector: %v", err)
		}
	}
	t.Logf("vectors %s inserted %s", mireq.String(), res.String())
	return nil
}))
eg.Wait()
```
Capture errgroup errors & handle potential nil responses.
Inside the goroutines, errors are logged but never aggregated. Also, you call eg.Wait() without checking its returned error. If a goroutine returns an error, it might go unnoticed, potentially hiding issues. Consider capturing that error:
-eg.Wait()
+if err := eg.Wait(); err != nil {
+ t.Errorf("concurrent insert error: %v", err)
+}
Furthermore, after client.MultiInsert, confirm res is non-nil before using res.String() to avoid a nil-pointer dereference in edge cases.
```go
if len(datas) < idx {
	return nil
}
```
Off-by-one array bound check.
if len(datas) < idx should likely be if idx >= len(datas) to detect out-of-range indices. This appears multiple times below as well.
```go
func PortforwardExtended(
	ctx context.Context,
	c Client,
	namespace, podName string,
	addresses []string,
	ports map[uint16]uint16,
	hc *http.Client,
) (cancel context.CancelFunc, ech chan<- error, err error) {
```
🛠️ Refactor suggestion
Ensure function parameters are validated.
PortforwardExtended accepts multiple complex parameters including namespace, podName, addresses, and ports without explicit validation checks for empty strings or invalid port mappings. Adding parameter validation enhances error resilience and user feedback.
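A hedged sketch of what such a guard could look like; the helper name and error texts are ours, not the package's:

```go
import (
	"errors"
	"fmt"
)

// validateForwardArgs rejects obviously invalid inputs before any
// Kubernetes API call is made. Hypothetical helper for this review.
func validateForwardArgs(namespace, podName string, ports map[uint16]uint16) error {
	if namespace == "" || podName == "" {
		return errors.New("namespace and pod name must be non-empty")
	}
	if len(ports) == 0 {
		return errors.New("at least one port mapping is required")
	}
	for local, target := range ports {
		if local == 0 || target == 0 {
			return fmt.Errorf("invalid port mapping %d:%d", local, target)
		}
	}
	return nil
}
```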
```go
{
	res, err := client.IndexProperty(ctx, &payload.Empty{})
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to get IndexProperty %v status: %s", err, st.String())
		} else {
			t.Errorf("failed to get IndexProperty %v", err)
		}
	}
	t.Logf("IndexProperty: %v", res.String())
}
```
Possible nil pointer in res.
After an error is detected, res can be nil. You invoke res.String() at line 48. For safety, ensure err == nil && res != nil before calling res.String(). This pattern repeats in lines 81, 129, 174, 216, etc.
```go
func calculateRecall(t *testing.T, res *payload.Search_Response, idx int) (rc float64) {
	t.Helper()
	topKIDs := make([]string, 0, len(res.GetResults()))
	for _, d := range res.GetResults() {
		topKIDs = append(topKIDs, d.GetId())
	}
	if len(topKIDs) == 0 {
		t.Errorf("empty result is returned for test ID %s: %#v", res.GetRequestId(), topKIDs)
		return
	}
	rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
	return rc
}
```
Prevent potential index out-of-range errors in calculateRecall.
Accessing ds.Neighbors[idx][:len(topKIDs)] at line 190 may panic if ds.Neighbors[idx] has fewer entries than len(topKIDs).
Consider verifying the slice length:
if len(ds.Neighbors[idx]) < len(topKIDs) {
t.Errorf("neighbors slice is smaller than search results, idx=%d", idx)
return 0
}
rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
e9d5aef to 3710cbb
Actionable comments posted: 0
♻️ Duplicate comments (12)
tests/v2/e2e/crud/stream_test.go (4)
66-68: ⚠️ Potential issue: Fix array bounds check.
The condition should be idx >= len(datas) to correctly detect when all elements have been processed.
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
111-112: ⚠️ Potential issue: Fix all array bounds checks throughout the file.
Similar incorrect bounds checks appear throughout the file. The condition should be idx >= len(datas) in all cases. Apply the same fix to all similar instances:
-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
Also applies to: 173-175, 233-235, 295-297, 355-357, 390-392, 434-436, 473-475
114-115: ⚠️ Potential issue: Fix query array bounds checks.
The condition for the query index should be qidx >= len(queries) to correctly detect when all queries have been processed.
-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
 	qidx = 0
 	idx++
 }
Also applies to: 176-178, 236-238, 298-300
431-431: ⚠️ Potential issue: Fix incorrect slice range in remove operation.
The slice range incorrectly uses Update offset and size instead of Remove.
-datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
tests/v2/e2e/crud/unary_test.go (2)
56-84: 🛠️ Refactor suggestion: Refactor to avoid loop variable capture in goroutines.
When launching goroutines in a for loop using i, vec, the values may change if the loop proceeds before the goroutine reads them. Copy them into local variables to ensure safety.
for i, vec := range ds.Train[cfg.Insert.Offset : cfg.Insert.Offset+cfg.Insert.Num] {
-	eg.Go(safety.RecoverFunc(func() error {
+	iCopy, vecCopy := i, vec
+	eg.Go(safety.RecoverFunc(func() error {
 		id := strconv.Itoa(iCopy)
 		...
-		t.Logf("vector %v id %s inserted to %s", vec, id, res.String())
+		t.Logf("vector %v id %s inserted to %s", vecCopy, id, res.String())
 		return nil
 	}))
 }
38-49: ⚠️ Potential issue: Avoid calling res.String() if res might be nil after an error.
If err != nil, the returned res could be nil. Calling res.String() without checking may cause a nil-pointer panic.
if err != nil {
	...
}
-t.Logf("IndexProperty: %v", res.String())
+if err == nil && res != nil {
+	t.Logf("IndexProperty: %v", res.String())
+}
Also applies to: 81-81, 129-129, 173-173
tests/v2/e2e/crud/crud_test.go (2)
160-174: ⚠️ Potential issue: Prevent potential division by zero in recall.
If neighbors is empty, the function divides by float64(len(neighbors)). Add a length check to avoid a panic.
if len(neighbors) == 0 {
	t.Errorf("neighbors is empty, cannot compute recall")
	return 0.0
}
return recall / float64(len(neighbors))
194-240: ⚠️ Potential issue: Return early on error before logging res.String().
Repeating pattern: if an error occurs, res can be nil. Avoid calling res.String() in that case to prevent nil-pointer dereferences.
res, err := client.IndexInfo(ctx, &payload.Empty{})
if err != nil {
	...
+	return
}
-t.Logf("IndexInfo: %v", res.String())
+if res != nil {
+	t.Logf("IndexInfo: %v", res.String())
+}
tests/v2/e2e/kubernetes/resources.go (1)
187-372: ⚠️ Potential issue: Potential nil-pointer panic in concurrency logic.
You're locking the mutex (b.mu.RLock()) before checking whether b is nil. If b is actually nil, attempting to call its method to lock the mutex will panic. To avoid this, consider adding a pre-check before locking, similar to the following:
-func (b *baseClient[T, L, C]) Create(
-	ctx context.Context, resource T, opts metav1.CreateOptions,
-) (t T, err error) {
-	b.mu.RLock()
-	defer b.mu.RUnlock()
-	if b == nil || b.Interface == nil {
-		return t, errors.ErrKubernetesClientNotFound
-	}
-	return b.Interface.Create(ctx, resource, opts)
+func (b *baseClient[T, L, C]) Create(
+	ctx context.Context, resource T, opts metav1.CreateOptions,
+) (t T, err error) {
+	if b == nil {
+		return t, errors.ErrKubernetesClientNotFound
+	}
+	b.mu.RLock()
+	defer b.mu.RUnlock()
+	if b.Interface == nil {
+		return t, errors.ErrKubernetesClientNotFound
+	}
+	return b.Interface.Create(ctx, resource, opts)
 }
Apply this fix or equivalent to all methods using the same pattern (e.g., Update, UpdateStatus, Delete, Patch, Apply, etc.).
tests/v2/e2e/config/config.go (3)
231-234: ⚠️ Potential issue: Prevent potential index out-of-range panic.
You're referencing Default.Search.Queries[0] during timeout parsing. This could panic if Default.Search.Queries is empty or has no elements. Provide a safe fallback check to avoid accessing an invalid index:
-sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+if len(Default.Search.Queries) > 0 {
+	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+} else {
+	log.Warn("Default.Search.Queries is empty; using fallback timeout of 3s")
+	sq.Timeout = time.Second * 3
+}
111-111: 🛠️ Refactor suggestion: Documentation mismatch for local port default behavior.
Your doc string at line 111 states "if not set, it defaults to TargetPort," but the code at lines 333-340 sets both TargetPort and unnamed local ports to the constant localPort (8081). This behavior conflicts with the comment. Update either the code or the doc string to reflect the actual default logic accurately.
Also applies to: 333-340
465-466: ⚠️ Potential issue: Potential PII leakage in log output.
You're logging the entire configuration as raw YAML (including Metadata) at line 465. If it contains sensitive information, it could leak into logs. Consider sanitizing or removing it:
- log.Debug(config.ToRawYaml(cfg))
+ // log.Debug(config.ToRawYaml(cfg)) // Remove or sanitize for potentially sensitive data
🧹 Nitpick comments (3)
tests/v2/e2e/kubernetes/client.go (1)
118-124: Simple and effective accessor methods.
The GetClientSet and GetRESRConfig methods provide clean access to the internal components. Consider fixing the typo in the method name from "GetRESRConfig" to "GetRESTConfig" in a future update.
-func (c *client) GetRESRConfig() *rest.Config {
+func (c *client) GetRESTConfig() *rest.Config {
 	return c.rest
 }
-type Client interface {
-	GetClientSet() kubernetes.Interface
-	GetRESRConfig() *rest.Config
+type Client interface {
+	GetClientSet() kubernetes.Interface
+	GetRESTConfig() *rest.Config
 }
tests/v2/e2e/kubernetes/portforward.go (1)
43-50: Consider validating the podName and namespace.
Currently, PortforwardExtended checks for addresses and ports but not for empty or invalid podName and namespace. Validating them early can help detect usage errors.
func PortforwardExtended(
	ctx context.Context, c Client,
-	namespace, podName string,
+	namespace, podName string, // validate non-empty
	addresses []string,
	ports map[uint16]uint16,
	hc *http.Client,
) (cancel context.CancelFunc, ech chan<- error, err error) {
+	if namespace == "" || podName == "" {
+		return cancel, nil, errors.New("namespace and podName must not be empty")
+	}
	...
}
tests/v2/e2e/kubernetes/status.go (1)
483-497: Validation for LoadBalancer services
In evaluateService, you handle LoadBalancer services by waiting for ingress assignment. This is correct. However, consider logging or tracking the time spent waiting, especially if the external LB provisioning may take too long in some environments, to facilitate debugging or timeouts in CI/CD pipelines.
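One way to surface that waiting time, sketched under the assumption that evaluateService polls in a loop over a corev1.Service; the log call and five-second cadence are illustrative:
// Sketch: record how long LoadBalancer ingress assignment takes.
start := time.Now()
for len(svc.Status.LoadBalancer.Ingress) == 0 {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(5 * time.Second):
		log.Debugf("still waiting for LoadBalancer ingress on %s/%s (elapsed: %s)",
			svc.GetNamespace(), svc.GetName(), time.Since(start))
	}
	// a real implementation would re-fetch svc from the API server here
}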
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (40)
- .github/actions/setup-k3d/action.yaml (1 hunks)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (5 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/rollout.yaml (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/v2/e2e/crud/rollout.yaml
🚧 Files skipped from review as they are similar to previous changes (29)
- .github/workflows/build-protobuf.yaml
- .github/workflows/format.yaml
- .github/workflows/coverage.yaml
- .github/workflows/helm.yaml
- .github/workflows/chatops.yaml
- .github/workflows/fossa.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/e2e-profiling.yaml
- internal/errors/errors.go
- .github/workflows/reviewdog.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/unit-test-rust.yaml
- internal/net/grpc/errdetails/errdetails.go
- .github/workflows/update-deps.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/e2e.yaml
- .github/workflows/e2e-chaos.yaml
- .github/actions/setup-k3d/action.yaml
- tests/v2/e2e/kubernetes/rollout.go
- internal/errors/k8s.go
- internal/params/params.go
- internal/net/http/client/client.go
⏰ Context from checks skipped due to timeout of 90000ms (148)
- GitHub Check: build / build
- GitHub Check: build / dump-contexts-to-log
- GitHub Check: runner / textlint
🔇 Additional comments (16)
internal/params/option.go (4)
20-23: Clean import additions for new functionality.
The new imports for os and path/filepath are well placed and appropriately used to enhance the package's functionality, enabling default command name extraction from os.Args[0].
27-36: LGTM: Good default option setup.
The defaultOptions slice now includes two new option functions:
- Using the binary name as the default name
- Setting overrideDefault to false by default
This provides sensible defaults while maintaining backward compatibility.
89-94: Well-defined function for controlling flag behavior.
The WithOverrideDefault function follows the established pattern and is properly documented. It provides a clean way to control whether the parser should override default flag.CommandLine behavior.
96-109: Good implementation of argument filtering.
This function is well-designed with:
- Clear documentation explaining its purpose
- Proper nil handling for the filters slice
- Efficient slice allocation with capacity pre-set
The function enhances the flexibility of the parser by allowing custom filtering of command-line arguments.
tests/v2/e2e/crud/multi_test.go (3)
51-105: Handle errgroup.Wait() errors.
The error from eg.Wait() is not checked, which could lead to missed errors from goroutines.
-eg.Wait()
+if err := eg.Wait(); err != nil {
+	t.Errorf("concurrent insert error: %v", err)
+}
422-430: Avoid potential nil pointer dereference.
If err is non-nil, obj may be nil, causing obj.String() to panic.
- if err != nil {
- 	t.Errorf("failed to get object: %v", err)
- }
- t.Logf("id %s got object: %v", id, obj.String())
+ if err != nil {
+ 	t.Errorf("failed to get object: %v", err)
+ 	return nil
+ }
+ if obj != nil {
+ 	t.Logf("id %s got object: %v", id, obj.String())
+ } else {
+ 	t.Errorf("failed to get object: object is nil")
+ 	return nil
+ }
35-645: Refactor test for better maintainability.
The test function is large (600+ lines) with significant code duplication in error handling and request creation patterns.
Consider:
- Extracting helper functions for common operations
- Splitting the test into smaller, focused test cases
For example:
func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}

func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}
Then divide the test into multiple focused tests:
func TestE2EMultiInsert(t *testing.T) { /* ... */ }
func TestE2EMultiSearch(t *testing.T) { /* ... */ }
// etc.
tests/v2/e2e/crud/stream_test.go (1)
34-540: Extract common error handling into helper function.
The error handling pattern is duplicated throughout the test code. Consider extracting it to a utility function.
func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}
Then use it throughout the code:
if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
 }
tests/v2/e2e/kubernetes/client.go (5)
36-39: Well-defined Kubernetes client interface.
The interface is clean and focused, exposing only the essential methods needed for Kubernetes client operations. This minimalist approach promotes good separation of concerns.
41-46: Good encapsulation in client struct.
The client struct properly encapsulates the necessary components for Kubernetes interaction:
- REST configuration
- Kubernetes clientset
- Manager for controller operations
- Client for watch capabilities
This structure provides flexibility for different types of Kubernetes operations.
48-91: Robust configuration fallback mechanism.
The NewClient function implements a well-thought-out fallback mechanism:
- Tries user-provided config path
- Falls back to environment variable
- Tries default home directory config
- Finally attempts in-cluster config
This ensures the client works in various environments (local development, CI/CD, in-cluster) without requiring environment-specific code changes.
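A usage sketch of that chain from a test's perspective; passing empty values exercises the env-var, home-file, and in-cluster fallbacks in turn (the package alias and surrounding calls are assumptions):
// Sketch: let NewClient walk its documented fallback order.
c, err := kubernetes.NewClient("", "")
if err != nil {
	log.Fatalf("no usable kubeconfig or in-cluster config: %v", err)
}
// The typed clientset is then available for ordinary API calls.
pods, err := c.GetClientSet().CoreV1().Pods("default").List(ctx, metav1.ListOptions{})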
93-108: Good default QPS and Burst settings.
The function sets reasonable defaults for QPS (20.0) and Burst (30) when not specified, which helps prevent API rate limiting issues when making multiple Kubernetes API calls.
110-116: Clean in-cluster configuration handling.
The in-cluster configuration is handled properly, making it easy to run the code both inside and outside a Kubernetes cluster.
tests/v2/e2e/kubernetes/portforward.go (1)
69-69: Potential shared Transport concurrency risk.
Assigning transport to hc.Transport may cause concurrency issues if hc is shared across multiple port-forward requests. If each port-forward needs a unique transport, consider using a distinct http.Client or clarifying that this is safe.
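A sketch of the isolation hinted at above: give each forward its own http.Client so the shared one is never mutated (names are illustrative):
// Sketch: shallow-copy the caller's client and attach a dedicated transport.
forwardClient := &http.Client{
	Timeout:   hc.Timeout,
	Transport: transport, // the round tripper built for this specific forward
}
// pass forwardClient, not hc, to the port-forward dialer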
tests/v2/e2e/crud/crud_test.go (1)
187-187: Check slice bounds before slicing ds.Neighbors[idx].
If len(ds.Neighbors[idx]) < len(topKIDs), this expression may cause an out-of-bounds error.
if len(ds.Neighbors[idx]) < len(topKIDs) {
	t.Errorf("neighbors slice is smaller than search results, idx=%d", idx)
	return 0
}
rc = recall(t, topKIDs, ds.Neighbors[idx][:len(topKIDs)])
tests/v2/e2e/kubernetes/status.go (1)
83-114: Good use of context-based waiting for resource status.
Your WaitForStatus function nicely leverages the context to avoid infinite waits. It periodically checks the resource and cleanly returns when the context is canceled. This is a solid approach for e2e testing scenarios.
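The shape being praised, reduced to a sketch; check is a caller-supplied predicate, and the five-second ticker mirrors the code under review:
// Sketch: context-aware status polling.
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for {
	select {
	case <-ctx.Done():
		return ctx.Err() // the caller's deadline or cancellation wins
	case <-ticker.C:
		ok, err := check(ctx, resource)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
	}
}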
3710cbb to e7bbdb8
Actionable comments posted: 1
♻️ Duplicate comments (3)
tests/v2/e2e/crud/multi_test.go (2)
418-448: ⚠️ Potential issue: Validate retrieved object fields.
After calling client.GetObject, confirm that obj is not nil before printing obj.String(). If obj is nil, the log message might panic.
-t.Logf("id %s got object: %v", id, obj.String())
+if obj != nil {
+	t.Logf("id %s got object: %v", id, obj.String())
+} else {
+	t.Errorf("object is nil for id %s", id)
+}
51-105: ⚠️ Potential issue: Handle the error group's return.
After all MultiInsert goroutines complete, you call eg.Wait() but do not explicitly check its returned error. If any goroutine returned an error, the test may continue silently. Capture and assert eg.Wait()'s error to avoid missing concurrency-related issues.
-eg.Wait()
+if err := eg.Wait(); err != nil {
+	t.Errorf("multi-insert concurrency error: %v", err)
+}
160-174:⚠️ Potential issueCheck division by zero in
recall.If
neighborsis empty,float64(len(neighbors))will be zero, causing a panic or NaN.Example fix:
- return recall / float64(len(neighbors)) + if len(neighbors) == 0 { + t.Errorf("neighbors is empty, can't compute recall") + return 0 + } + return recall / float64(len(neighbors))
🧹 Nitpick comments (21)
internal/net/grpc/errdetails/errdetails.go (4)
80-82: Ensure consistent naming.
Details is an appropriate container for multiple Detail objects. However, confirm that naming is unambiguous and consistent with user expectations. If you plan to embed fields directly, consider flattening the structure to reduce nesting.
166-172: Consider distinguishing unknown types from errors.
In the default case, you marshal using protojson.Marshal(d.Message). While logging a warning covers unexpected errors, consider using separate log levels or messages for unknown types vs. actual errors to ease debugging.
330-331: Guard for empty type name.
If typeName ends up being empty, the switch cases won't match, and the fallback logic will be triggered. Confirm this case is intended behavior.
416-421: Fallback logic in AnyToErrorDetail.
Falling back to a.ProtoReflect().Interface() ensures at least some representation when new message types are encountered. Consider explicitly logging unrecognized types at a suitable log level to facilitate debugging.
internal/params/option.go (2)
38-45: Validate empty-name scenario.
WithName checks if name != "" before setting p.name. Confirm that the parser gracefully handles a parser instance with no name set, or consider using a fallback name if not specified.
96-109: Flexibility in argument filtering.
WithArgumentFilters is a powerful extension point. However, if multiple filters reject the same argument, consider logging which filter triggered the rejection for easier debugging; a sketch follows.
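A sketch of that attribution, assuming a filter returns true when an argument should be dropped; the wrapper name and log call are hypothetical:
// Sketch: wrap each filter so rejections are attributed to their source.
func loggingFilter(name string, f func(string) bool) func(string) bool {
	return func(arg string) bool {
		if f(arg) {
			log.Debugf("argument %q rejected by filter %q", arg, name)
			return true
		}
		return false
	}
}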
tests/v2/e2e/crud/multi_test.go (6)
35-49: Log index property when errors occur.
Retrieving the index property is helpful, but be cautious about indefinite test continuity if it fails. Consider failing early to avoid subsequent operations on an invalid index state.
264-340: Consistency in naming queries.
When building request IDs using id + "-" + payload.Search_AggregationAlgorithm_name[...], confirm that parsing logic in the logging statements (with strings.Cut) always works as expected, especially with custom naming.
341-416: Prevent nil pointer when linear searching by ID.
Similar to MultiSearchByID, if res or r is unexpectedly nil, accessing them in logs can panic. Ensure safe checks or gracefully degrade. Consider verifying each Response element before processing.
504-553: Assess remove concurrency.
Ensure that removing large volumes concurrently via MultiRemove does not exceed recommended cluster rate limits. If you see rate-limit errors (429) or performance drop, consider implementing a backoff or decreased concurrency, as sketched below.
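A sketch of such a backoff around the bulk call; the retry budget and the mapping of rate limits onto codes.ResourceExhausted are assumptions:
// Sketch: exponential backoff on rate-limit errors.
backoff := 100 * time.Millisecond
for attempt := 0; attempt < 5; attempt++ {
	_, err := client.MultiRemove(ctx, reqs)
	if err == nil {
		break
	}
	if st, ok := status.FromError(err); ok && st.Code() == codes.ResourceExhausted {
		time.Sleep(backoff)
		backoff *= 2 // double the wait after each rate-limit error
		continue
	}
	t.Errorf("remove failed: %v", err)
	break
}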
555-609: Add or update verification of upsert results.
MultiUpsert merges insert and update logic. You may want to confirm that each item actually exists post-upsert, especially if concurrency and many items might cause overshadowing or race conditions.
610-629: Check timestamp boundary.
RemoveByTimestamp uses time.Now().Add(-time.Hour).UnixNano(). Confirm that an hour boundary (or older) accurately reflects test data for the desired condition. If some data is older, it may be unintentionally removed.
tests/v2/e2e/kubernetes/client.go (4)
36-39: Document usage guidelines for the Client interface.
Explain that GetClientSet() and GetRESRConfig() provide direct access to Kubernetes internals. Maintaining a stable abstraction boundary prevents callers from circumventing the interface.
93-109: Clarify error handling in newClient.
newClient returns an error if creating the clientset fails, but no additional context is logged here. You rely on upper layers to handle logging. Consider adding a contextual log or wrapping the error to indicate where the client creation failed.
110-116: Consider a user-visible error message if in-cluster config is missing.
When rest.InClusterConfig() fails, the function just returns the error. This might be cryptic in certain CI environments. Provide a clarifying message that no in-cluster config was found or that permissions are missing.
118-120: Method naming alignment.
GetClientSet() and GetRESRConfig() (typo: "RESR" instead of "REST"?) are consistent with the interface. If you intended "REST" in the method name, consider updating to avoid confusion.
-func (c *client) GetRESRConfig() *rest.Config {
+func (c *client) GetRESTConfig() *rest.Config {
 	return c.rest
 }
Also applies to: 122-124
internal/params/params.go (4)
29-35: Re-exporting standard library error handling.
Defining a type alias and constants for flag.ErrorHandling is acceptable, but confirm it's necessary. If they're only used internally, consider referencing flag directly instead to reduce indirection.
69-69: Check usage of ErrorHandler.
ErrorHandling via flag typically influences parsing behavior (panic, exit, etc.). Ensure the rest of the application can handle these behaviors gracefully, especially if used in production code.
105-115: Filter arguments cautiously.
Removing arguments with slices.DeleteFunc is a neat approach. However, confirm you aren't removing essential flags. Consider logging or enumerating which flags were filtered out for easier debugging; see the sketch below.
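A sketch of that logging, using the same slices.DeleteFunc call; the filters variable is assumed to be the parser's configured filter slice:
// Sketch: drop filtered flags while recording what was removed.
args := slices.DeleteFunc(os.Args[1:], func(arg string) bool {
	for _, filter := range filters {
		if filter(arg) {
			log.Debugf("dropping argument %q", arg)
			return true
		}
	}
	return false
})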
128-128: Provide more contextual help on usage error.
When the config file doesn't exist, calling p.f.Usage() helps. Consider returning a more descriptive error, e.g., "Config file not found."
tests/v2/e2e/kubernetes/status.go (1)
92-93: Make poll interval configurable.
time.NewTicker(5 * time.Second) is hardcoded. Consider exposing this interval or calculating a dynamic backoff to optimize test runtime. This allows more flexible e2e testing configurations.
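A sketch of exposing the interval as a functional option; all names here are hypothetical, not the package's current API:
// Sketch: make the polling interval an option instead of a constant.
type statusOptions struct {
	interval time.Duration
}

type StatusOption func(*statusOptions)

func WithPollInterval(d time.Duration) StatusOption {
	return func(o *statusOptions) {
		if d > 0 {
			o.interval = d
		}
	}
}
// WaitForStatus would then default to 5s and apply any supplied options.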
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (40)
- .github/actions/setup-k3d/action.yaml (1 hunks)
- .github/workflows/_docker-image.yaml (1 hunks)
- .github/workflows/_release-pr.yaml (1 hunks)
- .github/workflows/build-binaries.yaml (1 hunks)
- .github/workflows/build-protobuf.yaml (1 hunks)
- .github/workflows/chatops.yaml (5 hunks)
- .github/workflows/codeql-analysis.yml (1 hunks)
- .github/workflows/coverage.yaml (1 hunks)
- .github/workflows/e2e-chaos.yaml (4 hunks)
- .github/workflows/e2e-code-bench-agent.yaml (2 hunks)
- .github/workflows/e2e-max-dim.yaml (1 hunks)
- .github/workflows/e2e-profiling.yaml (1 hunks)
- .github/workflows/e2e.yaml (8 hunks)
- .github/workflows/format.yaml (2 hunks)
- .github/workflows/fossa.yaml (1 hunks)
- .github/workflows/helm-lint.yaml (3 hunks)
- .github/workflows/helm.yaml (2 hunks)
- .github/workflows/reviewdog-k8s.yaml (3 hunks)
- .github/workflows/reviewdog.yaml (2 hunks)
- .github/workflows/test-hack.yaml (1 hunks)
- .github/workflows/unit-test-rust.yaml (1 hunks)
- .github/workflows/unit-test.yaml (3 hunks)
- .github/workflows/update-deps.yaml (1 hunks)
- internal/errors/errors.go (1 hunks)
- internal/errors/k8s.go (1 hunks)
- internal/net/grpc/errdetails/errdetails.go (6 hunks)
- internal/net/http/client/client.go (1 hunks)
- internal/params/option.go (2 hunks)
- internal/params/params.go (5 hunks)
- tests/v2/e2e/config/config.go (1 hunks)
- tests/v2/e2e/crud/assets/rollout.yaml (1 hunks)
- tests/v2/e2e/crud/crud_test.go (1 hunks)
- tests/v2/e2e/crud/multi_test.go (1 hunks)
- tests/v2/e2e/crud/stream_test.go (1 hunks)
- tests/v2/e2e/crud/unary_test.go (1 hunks)
- tests/v2/e2e/kubernetes/client.go (1 hunks)
- tests/v2/e2e/kubernetes/portforward.go (1 hunks)
- tests/v2/e2e/kubernetes/resources.go (1 hunks)
- tests/v2/e2e/kubernetes/rollout.go (1 hunks)
- tests/v2/e2e/kubernetes/status.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/v2/e2e/crud/assets/rollout.yaml
🚧 Files skipped from review as they are similar to previous changes (29)
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/_docker-image.yaml
- .github/actions/setup-k3d/action.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/coverage.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/build-binaries.yaml
- .github/workflows/helm.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/e2e.yaml
- internal/net/http/client/client.go
- .github/workflows/reviewdog.yaml
- .github/workflows/chatops.yaml
- .github/workflows/fossa.yaml
- .github/workflows/format.yaml
- tests/v2/e2e/crud/stream_test.go
- .github/workflows/e2e-chaos.yaml
- tests/v2/e2e/kubernetes/rollout.go
- .github/workflows/update-deps.yaml
- tests/v2/e2e/kubernetes/portforward.go
- .github/workflows/helm-lint.yaml
- internal/errors/k8s.go
- internal/errors/errors.go
⏰ Context from checks skipped due to timeout of 90000ms (166)
- GitHub Check: build / build
🔇 Additional comments (32)
internal/net/grpc/errdetails/errdetails.go (3)
26-26: Use of newly introduced imports looks good.
The addition of the conv and protojson libraries appears justified, given their usage for custom base64 encoding/decoding and protobuf JSON marshalling.
Also applies to: 35-35
93-94: Validate prefix trimming usage.
Using strings.TrimPrefix to handle typeURL logic is effective, but confirm that all relevant prefix variations (type.googleapis.com/google.rpc., type.googleapis.com/rpc.v1., etc.) are covered to avoid misclassification.
315-319: Check fallback output format.
When there is only one message, msgs[0].MarshalJSON() is used directly. Otherwise, multiple messages are wrapped in Details. If marshaling fails for the single-message scenario, fallback calls fmt.Sprint(objs...). This may lose structured detail. Consider logging or returning partial data if feasible.
internal/params/option.go (2)
28-35: Initialize default options carefully.
Including WithName(filepath.Base(os.Args[0])) and WithOverrideDefault(false) in the defaultOptions helps set a clear baseline. Confirm these defaults align with typical usage, especially for CLI tools not run from a conventional main binary.
89-94: Mark potential side effects when overriding defaults.
WithOverrideDefault indicates skipping the standard flag.CommandLine. Ensure that callers are aware of potential side effects when this override is true, such as losing built-in flags.
110-185: Avoid confusion in concurrency logic.When splitting requests at
cfg.Search.BulkSize, confirm that leftover requests (fewer thanBulkSize) are processed at the end. This logic looks correct, but subsequent expansions might complicate concurrency patterns.
187-263: Check concurrency iteration boundaries.For
MultiSearchByID, ensure that indexing withds.Trainfromcfg.SearchByID.Offsettocfg.SearchByID.Offset+cfg.SearchByID.Numdoes not exceed dataset length and that partial final chunks are also processed.
449-503: Check partial updates.When grouping updates into
cfg.Update.BulkSizerequests, handle leftover data if the total count does not divide evenly byBulkSize. Also consider verifying each multi-update’s error.
631-645: Finalize test with flush verification.Flushing after all operations is a good practice, but consider verifying the flush result (e.g., ensuring no items remain if that’s expected) to confirm the system state is fully clean post-test.
tests/v2/e2e/kubernetes/client.go (1)
41-46: Evaluate concurrency safety.The
clientstruct holds multiple references (rest.Config,clientset,manager,client). Clarify if concurrent calls are thread-safe. Usually, clientsets are safe for concurrent use, but confirm usage ofmanageris also concurrency-friendly.internal/params/params.go (7)
23-23: Confirm Go version support for "slices" package.The
"slices"package was introduced in Go 1.21. Ensure the project’s minimum required Go version aligns with this import.Would you like a script to scan the repo for a go.mod definition or references to confirm the Go version?
54-58: Validate concurrency safety of parser fields.These new fields (
overrideDefault,name,filters,f,defaults) introduce state into the parser. If multiple goroutines call the parser concurrently, consider adding synchronization or clarifying that concurrency is unsupported.
78-82: Document side effects of overridingflag.CommandLine.Overriding the global
CommandLineinNewcan affect other packages relying on default flags. Provide clear documentation or a warning to future maintainers about this.
90-90: Usage ofStringVarfor config file path looks good.This properly captures the configuration path from the command line. Ensure you handle potentially empty strings or invalid paths before proceeding.
117-120: Return on non-help parse errors.Using
errors.Is(err, flag.ErrHelp)is a good approach to detect help requests. The code returns a specialized error otherwise. This logic seems appropriate.
135-139: Caution restoring global flags.
Restoresetsflag.CommandLineback to the defaults. If other code depends on the new override during runtime, it can lead to subtle bugs. Document or confirm only one restore path occurs.
141-145: Overridingflag.CommandLinecan be risky.This is the same concern noted earlier: toggling global state might affect other packages.
tests/v2/e2e/crud/unary_test.go (3)
1-20: E2E test file introduction.The header and package statements are consistent with e2e tests. The build constraint (
//go:build e2e) ensures it only runs in the intended environment.
35-446: Guard against nilresin logging.Inside
TestE2EUnaryCRUD, several places (e.g., line 81, 129, 173, etc.) logres.String()even iferr != nil. There is a risk ofresbeing nil and causing a runtime panic.Apply this pattern to fix:
-if err != nil { - ... -} -t.Logf("... %s", res.String()) +if err != nil { + ... +} else if res != nil { + t.Logf("... %s", res.String()) +}
448-752: Same nil-pointer risk in the second test function.
TestE2EUnarySkipStrictExistsCheckCRUDrepeats the same pattern, loggingres.String()after errors. Adjust to prevent potential panics ifresis nil.tests/v2/e2e/crud/crud_test.go (6)
19-41: Review environment variables and global vars usage.
cfg,ctx,client,kclient, anddsare package-level variables. Ensure that parallel tests do not inadvertently alter these shared globals.
52-131: Avoid partial initialization inTestMain.
- If any segment of setup fails, you call
log.Fatalf(exits test). This is fine, but confirm your environment cleans up partial resources (like port-forwards or partially open connections).- If you intend to run multiple test packages concurrently, ensure no conflicts with
flag.CommandLine.
133-152:newClientusage check.Good approach to pass
metadata. If the metadata is critical to the requests, ensure you handle the case of missing or invalid metadata gracefully.
154-158:sleephelper is fine.
t.Logfbefore and after is helpful for diagnosing wait times in logs.
176-189: Slice bounds incalculateRecall.On line 187,
ds.Neighbors[idx][:len(topKIDs)]may go out of range ifds.Neighbors[idx]is shorter thantopKIDs. Add a length check first.
191-241: Handle potential nilresin index status logging.After an error,
resmight be nil. For example, lines 203, 215, 227, 239 logres.String()even iferr != nil. Return early or checkres != nilto avoid panics.tests/v2/e2e/kubernetes/resources.go (1)
187-229: Potential nil-pointer panic with concurrency pattern.All these methods (
Create,Update,UpdateStatus,Delete,DeleteCollection,Get,List,Watch,Patch,Apply, and the pod- and scale-related helpers) begin by lockingb.mu.RLock()and then checking ifb == nil. However, ifbwere actuallynil, callingb.mu.RLock()would panic before the check is reached. This mirrors a previously flagged issue.A fix often involves checking
bfor nil before locking:-func (b *baseClient[T, L, C]) Create( - ctx context.Context, resource T, opts metav1.CreateOptions, -) (t T, err error) { - b.mu.RLock() - defer b.mu.RUnlock() - if b == nil || b.Interface == nil { - return t, errors.ErrKubernetesClientNotFound - } - return b.Interface.Create(ctx, resource, opts) +func (b *baseClient[T, L, C]) Create( + ctx context.Context, resource T, opts metav1.CreateOptions, +) (t T, err error) { + if b == nil { + return t, errors.ErrKubernetesClientNotFound + } + b.mu.RLock() + defer b.mu.RUnlock() + if b.Interface == nil { + return t, errors.ErrKubernetesClientNotFound + } + return b.Interface.Create(ctx, resource, opts) }Also applies to: 231-287, 289-298, 300-326, 328-340
tests/v2/e2e/config/config.go (3)
233-234: Potential out-of-range panic when referencing default search query.Accessing
Default.Search.Queries[0]can panic ifDefault.Search.Queriesis empty, particularly if the configuration was altered or merged at runtime so that queries become empty. This was also mentioned in prior reviews.Consider safely checking the length of the slice before indexing:
- sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout) + if len(Default.Search.Queries) > 0 { + sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout) + } else { + // fallback if Queries is empty + sq.Timeout = time.Second * 3 + }
333-341: Doc mismatch for local port default behavior.The doc states: "If not set, it defaults to TargetPort," but the code line 339 always defaults
pf.LocalPorttolocalPort(8081), not topf.TargetPort. This mismatch was flagged before.Either correct the doc to say it defaults to a fixed port (8081) or update the code to match the doc by setting
pf.LocalPort = pf.TargetPort.
465-467: Potential PII or secret leakage in debug logs.Dumping the entire configuration as raw YAML can inadvertently log confidential information. This was also raised previously.
If the configuration might contain secrets or sensitive metadata, consider removing or sanitizing this debug log:
- log.Debug(config.ToRawYaml(cfg)) + // log.Debug("Configuration loaded") // Mask or sanitize sensitive fieldstests/v2/e2e/kubernetes/status.go (2)
76-114: Clear approach for resource status polling.
This function cleanly checks the object's status at regular intervals and handles context cancellation properly. The usage of generics for scope-limited type checks and slicing logic is well-designed.
146-175: Comprehensive type-based resource status evaluation.
Your switch case ensures that unsupported resource types are gracefully handled, improving extensibility for future resource kinds.
func NewClient(kubeConfig, currentContext string) (c Client, err error) {
	if kubeConfig == "" {
		kubeConfig = os.Getenv(clientcmd.RecommendedConfigPathEnvVar)
		if kubeConfig == "" {
			if file.Exists(clientcmd.RecommendedHomeFile) {
				kubeConfig = clientcmd.RecommendedHomeFile
			}
			if kubeConfig == "" {
				c, err = inClusterConfigClient()
				if err != nil {
					return nil, err
				}
				return c, nil
			}
		}
	}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeConfig},
		&clientcmd.ConfigOverrides{
			ClusterInfo:    clientcmdapi.Cluster{},
			CurrentContext: currentContext,
		}).ClientConfig()
	if err != nil {
		log.Debugf("failed to build config from kubeConfig path %s,\terror: %v", kubeConfig, err)
		var ierr error
		c, ierr = inClusterConfigClient()
		if ierr != nil {
			return nil, errors.Join(err, ierr)
		}
		return c, nil
	}
	c, err = newClient(cfg)
	if err != nil {
		log.Debugf("failed to build config from kubeConfig path %s,\terror: %v", kubeConfig, err)
		var ierr error
		c, ierr = inClusterConfigClient()
		if ierr != nil {
			return nil, errors.Join(err, ierr)
		}
	}
	return c, nil
}
🛠️ Refactor suggestion
Ensure that in-cluster fallback is intentional.
NewClient attempts a fallback to inClusterConfigClient() if building the config fails. Make sure these fallback semantics are intentional (especially if the environment might not have configured in-cluster credentials). Otherwise, you could silently connect to an unintended cluster.
e7bbdb8 to fd8961e
Signed-off-by: kpango <kpango@vdaas.org>
fd8961e to d4f681c
Actionable comments posted: 2
♻️ Duplicate comments (1)
tests/v2/e2e/crud/unary_test.go (1)
629-634: ⚠️ Potential issue: Mismatch between error checking and logged message.
Here, the code checks if the status code is not AlreadyExists, but logs "NotFound error on #5." This is misleading and can confuse debugging.
-if !ok || st == nil || st.Code() != codes.AlreadyExists {
-	t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err)
-}
-t.Logf("received a NotFound error on #5: %s", err.Error())
+if !ok || st == nil || st.Code() != codes.AlreadyExists {
+	t.Errorf("update vector response is not AlreadyExists: %v with SkipStrictExistCheck=false", err)
+	t.Logf("unexpected error on #5: %s", err.Error())
+} else {
+	t.Logf("received an AlreadyExists error on #5: %s", err.Error())
+}
🧹 Nitpick comments (8)
tests/v2/e2e/crud/stream_test.go (1)
162-162: Fix incorrect error messages in stream operations.
Several error messages reference "insert stream" even when the operation is not an insert. Update these messages to reference the correct operation type:
- t.Errorf("failed to complete insert stream %v", err)
+ t.Errorf("failed to complete search stream %v", err)
And similarly for other operations (update, remove, upsert, etc.).
Also applies to: 223-223, 284-284, 345-345, 376-376, 458-458, 502-502
tests/v2/e2e/kubernetes/client.go (1)
38-39: Consider renaming GetRESRConfig to GetRESTConfig.
"RESR" appears to be a typo. Renaming for clarity will improve maintainability and readability.
-func (c *client) GetRESRConfig() *rest.Config {
+func (c *client) GetRESTConfig() *rest.Config {
152-175: Flexible approach to supporting new resource types.
The switch statement inCheckResourceStateis clear, though consider centralizing common checks or promoting a pattern-based approach to avoid duplication.
342-374: Parallel Job nuance.
For jobs with parallelism, consider differentiating partial success vs. complete success (e.g., some tasks succeeded but others failed).
376-401: CronJob states could include failures of child Jobs.
Currently, a CronJob with recently failed Jobs remains “available.” Checking for recent Job failures might better reflect the CronJob’s overall health.
453-475: PVC ephemeral volumes note.
If ephemeral volumes or other advanced storage class configurations are in use, consider additional checks for unusual PVC binding states.Would you like help drafting code to handle ephemeral volume states?
477-497: NodePort coverage.
Services of type NodePort can also face partial readiness (e.g., node port conflicts). Consider an extended check if that scenario is relevant.
499-517: Optional expansion of Ingress checks.
You might also evaluate path-level rules, TLS configuration, or invalid host definitions to catch more user misconfigurations.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (39)
.github/workflows/_docker-image.yaml(1 hunks).github/workflows/_release-pr.yaml(1 hunks).github/workflows/build-binaries.yaml(1 hunks).github/workflows/build-protobuf.yaml(1 hunks).github/workflows/chatops.yaml(5 hunks).github/workflows/codeql-analysis.yml(1 hunks).github/workflows/coverage.yaml(1 hunks).github/workflows/e2e-chaos.yaml(4 hunks).github/workflows/e2e-code-bench-agent.yaml(2 hunks).github/workflows/e2e-max-dim.yaml(1 hunks).github/workflows/e2e-profiling.yaml(1 hunks).github/workflows/e2e.yaml(8 hunks).github/workflows/format.yaml(2 hunks).github/workflows/fossa.yaml(1 hunks).github/workflows/helm-lint.yaml(3 hunks).github/workflows/helm.yaml(2 hunks).github/workflows/reviewdog-k8s.yaml(3 hunks).github/workflows/reviewdog.yaml(2 hunks).github/workflows/test-hack.yaml(1 hunks).github/workflows/unit-test-rust.yaml(1 hunks).github/workflows/unit-test.yaml(3 hunks).github/workflows/update-deps.yaml(1 hunks)internal/errors/errors.go(1 hunks)internal/errors/k8s.go(1 hunks)internal/net/grpc/errdetails/errdetails.go(6 hunks)internal/net/http/client/client.go(1 hunks)internal/params/option.go(2 hunks)internal/params/params.go(5 hunks)tests/v2/e2e/config/config.go(1 hunks)tests/v2/e2e/crud/assets/rollout.yaml(1 hunks)tests/v2/e2e/crud/crud_test.go(1 hunks)tests/v2/e2e/crud/multi_test.go(1 hunks)tests/v2/e2e/crud/stream_test.go(1 hunks)tests/v2/e2e/crud/unary_test.go(1 hunks)tests/v2/e2e/kubernetes/client.go(1 hunks)tests/v2/e2e/kubernetes/portforward.go(1 hunks)tests/v2/e2e/kubernetes/resources.go(1 hunks)tests/v2/e2e/kubernetes/rollout.go(1 hunks)tests/v2/e2e/kubernetes/status.go(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (28)
- .github/workflows/format.yaml
- .github/workflows/e2e-profiling.yaml
- .github/workflows/build-protobuf.yaml
- .github/workflows/test-hack.yaml
- .github/workflows/e2e-code-bench-agent.yaml
- .github/workflows/update-deps.yaml
- .github/workflows/_release-pr.yaml
- .github/workflows/unit-test.yaml
- .github/workflows/e2e-max-dim.yaml
- .github/workflows/build-binaries.yaml
- .github/workflows/helm.yaml
- .github/workflows/unit-test-rust.yaml
- .github/workflows/reviewdog-k8s.yaml
- .github/workflows/fossa.yaml
- .github/workflows/coverage.yaml
- .github/workflows/helm-lint.yaml
- .github/workflows/chatops.yaml
- .github/workflows/_docker-image.yaml
- .github/workflows/codeql-analysis.yml
- .github/workflows/e2e-chaos.yaml
- .github/workflows/reviewdog.yaml
- tests/v2/e2e/crud/assets/rollout.yaml
- internal/net/http/client/client.go
- .github/workflows/e2e.yaml
- tests/v2/e2e/kubernetes/rollout.go
- internal/errors/k8s.go
- internal/errors/errors.go
- tests/v2/e2e/kubernetes/portforward.go
⏰ Context from checks skipped due to timeout of 90000ms (180)
- GitHub Check: check-format-diff
- GitHub Check: runner / go build
- GitHub Check: coverage
- GitHub Check: Run tests for pkg packages
- GitHub Check: build / build
🔇 Additional comments (38)
internal/net/grpc/errdetails/errdetails.go (7)
80-82: Well-structured addition of the Details type.
The new Details type provides a cleaner way to encapsulate a collection of error details, improving the serialization process.
93-94: Improved type handling in MarshalJSON.
The changes to properly trim prefixes from type URLs make the type handling more robust.
165-167: Good addition of a default case for handling unknown types.
Adding a default case that uses protojson.Marshal provides a more flexible fallback for types not explicitly handled.
168-171: Enhanced error logging with more context.
The improved error logging now includes the type name and object details, which will help with debugging.
315-318: Validate the error-handling result in Serialize.
Currently, if marshaling Details fails, the function falls back to fmt.Sprint(objs...), which may mask the underlying error cause. Consider logging or returning the error more explicitly if possible.
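For illustration, a minimal sketch of surfacing the marshal failure before falling back; the helper name, logger call, and wrapper shape are assumptions, not the PR's actual code:

// Hypothetical sketch (uses encoding/json, fmt, and an assumed log package):
// log why marshaling failed instead of silently degrading to fmt.Sprint,
// so the fallback is traceable in logs.
func serializeDetails(objs ...any) string {
	b, err := json.Marshal(map[string]any{"details": objs}) // stand-in for the Details wrapper
	if err != nil {
		log.Warnf("failed to marshal error details: %v", err) // assumed logger API
		return fmt.Sprint(objs...)
	}
	return string(b)
}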
416-421: Good enhancement for handling unknown message types.
The addition of a default case that attempts to unmarshal unknown types improves robustness when dealing with new or custom message types.
424-424: Improved error logging with type information.
Including the type name in the error log provides better context for troubleshooting.
internal/params/option.go (4)
28-28: Good default setting for the executable name.
Using filepath.Base(os.Args[0]) as the default name ensures the parser always has a sensible name derived from the executable.
39-45: Well-implemented WithName function.
The function properly checks for empty names before assigning, which is good defensive programming practice.
90-94: Good addition of the WithOverrideDefault function.
This provides a clean way to control whether to override default flag.CommandLine settings.
97-109: Well-structured WithArgumentFilters implementation.
The function has good defensive checks:
- Returns early if no filters are provided
- Initializes the filters slice if it's nil
- Uses capacity hint for slice allocation
This provides a robust way to filter command-line arguments.
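To make the described behavior concrete, a sketch of how such an option typically looks; the parser struct and Option type here are illustrative assumptions, not the PR's exact definitions:

// Illustrative only: an option that appends argument filters with the
// defensive checks listed above.
type parser struct {
	filters []func(string) bool
}

type Option func(p *parser)

func WithArgumentFilters(filters ...func(string) bool) Option {
	return func(p *parser) {
		if len(filters) == 0 {
			return // nothing to add
		}
		if p.filters == nil {
			// capacity hint avoids reallocations while appending
			p.filters = make([]func(string) bool, 0, len(filters))
		}
		p.filters = append(p.filters, filters...)
	}
}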
tests/v2/e2e/crud/multi_test.go (3)
51-105: Capture errgroup errors & handle potential nil responses.
Inside the goroutines, errors are logged but never aggregated. Also, you call eg.Wait() without checking its returned error. If a goroutine returns an error, it might go unnoticed, potentially hiding issues. Consider capturing that error:

-eg.Wait()
+if err := eg.Wait(); err != nil {
+	t.Errorf("concurrent insert error: %v", err)
+}

Furthermore, after client.MultiInsert, confirm res is non-nil before using res.String() to avoid a nil-pointer dereference in edge cases.
422-430: Avoid a potential nil pointer dereference when GetObject fails.
If err is non-nil, obj may be nil, causing obj.String() to panic. Consider returning immediately or skipping the obj.String() call when err != nil. Here's an example fix:

-	if err != nil {
-		t.Errorf("failed to get object: %v", err)
-	}
-	t.Logf("id %s got object: %v", id, obj.String())
+	if err != nil {
+		t.Errorf("failed to get object: %v", err)
+		return nil
+	}
+	if obj != nil {
+		t.Logf("id %s got object: %v", id, obj.String())
+	} else {
+		t.Errorf("failed to get object: object is nil")
+		return nil
+	}
35-645: Refactor the test for better maintainability.
The test function is too large and contains significant code duplication. Consider:
- Extracting helper functions for common operations.
- Splitting the test into smaller, focused test cases.
Here are some suggested improvements:
- Extract helper functions for common operations:
func createMultiRequest[T any](cfg *Config, data [][]float32, createReq func(id string, vec []float32) T) []T {
	reqs := make([]T, 0, cfg.BulkSize)
	for i, vec := range data {
		id := strconv.Itoa(i)
		reqs = append(reqs, createReq(id, vec))
	}
	return reqs
}

func handleError(t *testing.T, err error, msg string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("%s: %v, status: %s", msg, err, st.String())
		} else {
			t.Errorf("%s: %v", msg, err)
		}
	}
}
- Split the test into smaller test cases:
func TestE2EMultiInsert(t *testing.T) { /* ... */ }
func TestE2EMultiSearch(t *testing.T) { /* ... */ }
func TestE2EMultiSearchByID(t *testing.T) { /* ... */ }
func TestE2EMultiLinearSearch(t *testing.T) { /* ... */ }
func TestE2EMultiUpdate(t *testing.T) { /* ... */ }
func TestE2EMultiRemove(t *testing.T) { /* ... */ }
func TestE2EMultiUpsert(t *testing.T) { /* ... */ }

tests/v2/e2e/crud/stream_test.go (4)
66-68: Fix the array bounds check.
The condition should be idx >= len(datas) to correctly detect when all elements have been processed.

-if len(datas) < idx {
+if idx >= len(datas) {
 	return nil
 }
114-115: Fix the query array bounds check in the stream search functions.
The condition should be qidx >= len(cfg.Search.Queries) to correctly detect when all queries have been processed.

-if len(cfg.Search.Queries) < qidx {
+if qidx >= len(cfg.Search.Queries) {
 	qidx = 0
 	idx++
 }

This also applies to similar conditions in the other search stream functions.
431-432: Resolve suspicious slice range when removing vectors.
Here, you use:

datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]

instead of

datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]

This likely removes a different slice than intended. Consider correcting the slice boundaries to avoid accidentally referencing the Update config.

- datas = ds.Train[cfg.Remove.Offset : cfg.Update.Offset+cfg.Update.Num]
+ datas = ds.Train[cfg.Remove.Offset : cfg.Remove.Offset+cfg.Remove.Num]
34-540: Extract common error handling into a utility function.
The error handling pattern is repeated throughout the test code. Consider extracting it into a helper function to reduce duplication.

func handleStreamError(t *testing.T, err error, operation string) {
	if err != nil {
		st, ok := status.FromError(err)
		if ok && st != nil {
			t.Errorf("failed to %s: %v, status: %s", operation, err, st.String())
		} else {
			t.Errorf("failed to %s: %v", operation, err)
		}
	}
}

Then use it throughout the code:

 if err != nil {
-	st, ok := status.FromError(err)
-	if ok && st != nil {
-		t.Errorf("failed to search vector: %v, status: %s", err, st.String())
-	} else {
-		t.Errorf("failed to search vector: %v", err)
-	}
+	handleStreamError(t, err, "search vector")
 }

tests/v2/e2e/kubernetes/client.go (1)
48-91: Ensure that the in-cluster fallback is intentional.
This code attempts a fallback to inClusterConfigClient if building the config fails. Make sure this behavior is desired, as it may connect to an unintended cluster if misconfigured.
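One way to make the fallback explicit rather than error-driven (a sketch assuming a kubeconfig path is the only input; not the PR's code):

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// Fall back to the in-cluster config only when no kubeconfig is given,
// so a malformed kubeconfig fails loudly instead of silently switching
// to another cluster.
func buildConfig(kubeconfig string) (*rest.Config, error) {
	if kubeconfig == "" {
		return rest.InClusterConfig()
	}
	return clientcmd.BuildConfigFromFlags("", kubeconfig)
}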
23-23: Verify Go version compatibility for slices.DeleteFunc.
slices.DeleteFunc is only available in Go 1.21+. Ensure the project requires or enforces this minimum version to avoid compilation errors in older environments.
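For reference, a minimal standalone example of the call in question; it compiles only on Go 1.21 or newer:

package main

import (
	"fmt"
	"slices" // added to the standard library in Go 1.21
	"strings"
)

func main() {
	args := []string{"-test.v", "-host", "localhost", "-test.run"}
	// Drop `go test` framework flags before parsing our own.
	args = slices.DeleteFunc(args, func(a string) bool {
		return strings.HasPrefix(a, "-test.")
	})
	fmt.Println(args) // [-host localhost]
}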
141-145: Overriding the global flag.CommandLine can be risky.
This modifies flag.CommandLine in Override(), impacting other packages that rely on global flags. Document these side effects or consider an alternative approach for safer, localized flag management.
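A sketch of the localized alternative; the flag name is a placeholder, not one of the package's actual flags:

import (
	"flag"
	"os"
	"path/filepath"
)

// Parse into a private FlagSet instead of mutating flag.CommandLine,
// so other packages' global flags stay untouched.
func parseLocal(args []string) (*flag.FlagSet, error) {
	fs := flag.NewFlagSet(filepath.Base(os.Args[0]), flag.ContinueOnError)
	fs.String("host", "localhost", "target host") // placeholder flag
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return fs, nil
}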
160-174: Prevent a potential division by zero in the recall() function.
If neighbors is empty, the divisor float64(len(neighbors)) is 0, so the division yields NaN or Inf instead of a meaningful recall. A simple length check is recommended before performing the division. Apply this diff to handle empty neighbors:

-func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
-	...
-	return recall / float64(len(neighbors))
+func recall(t *testing.T, resultIDs []string, neighbors []int) (recall float64) {
+	...
+	if len(neighbors) == 0 {
+		t.Errorf("neighbors is empty, cannot compute recall")
+		return 0.0
+	}
+	return recall / float64(len(neighbors))
 }
194-203: Avoid nil-pointer dereferences in indexStatus.
When any of the client.* calls fail, res might be nil. Immediately after checking for errors, the code logs res.String(), which can panic if res is nil. Consider returning early or logging only if res is non-nil. Apply this diff to prevent nil-pointer dereferences:

 res, err := client.IndexInfo(ctx, &payload.Empty{})
 if err != nil {
 	...
+	return
 }
-t.Logf("IndexInfo: %v", res.String())
+if res != nil {
+	t.Logf("IndexInfo: %v", res.String())
+}

Repeat similarly for the other index calls: IndexDetail, IndexStatistics, and IndexStatisticsDetail.
Also applies to: 206-216, 218-228, 230-240
1-159: Looks good overall.
No further major issues identified outside of the highlighted sections.
Also applies to: 175-193, 241-242
tests/v2/e2e/kubernetes/resources.go (2)
187-196: Check for a nil baseClient before locking the mutex.
If b is actually nil, calling b.mu.RLock() or b.mu.Lock() can panic before reaching the checks for b == nil. Explicitly verify b is non-nil before locking when handling concurrency logic.
Example fix:

-func (b *baseClient[T, L, C]) Create(
-	ctx context.Context, resource T, opts metav1.CreateOptions,
-) (t T, err error) {
-	b.mu.RLock()
-	defer b.mu.RUnlock()
-	if b == nil || b.Interface == nil {
+func (b *baseClient[T, L, C]) Create(
+	ctx context.Context, resource T, opts metav1.CreateOptions,
+) (t T, err error) {
+	if b == nil {
+		return t, errors.ErrKubernetesClientNotFound
+	}
+	b.mu.RLock()
+	defer b.mu.RUnlock()
+	if b.Interface == nil {
 		return t, errors.ErrKubernetesClientNotFound
 	}
 	return b.Interface.Create(ctx, resource, opts)
 }

Apply similar checks in the other methods.
Also applies to: 198-207, 210-218, 221-229, 232-240, 242-251, 253-260, 262-271, 273-287, 289-298, 300-312, 314-326, 328-340, 342-354, 356-371
1-408: LGTM for the rest.
No further issues spotted beyond the concurrency and nil-pointer checks.
tests/v2/e2e/config/config.go (4)
232-234: Potential out-of-range error with Default.Search.Queries[0].
Accessing Default.Search.Queries[0] can fail if the slice is empty. Ensure that Default.Search.Queries is non-empty before referencing index 0.
Example fix:

-sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+if len(Default.Search.Queries) > 0 {
+	sq.Timeout = timeutil.ParseWithDefault(sq.TimeoutString, Default.Search.Queries[0].Timeout)
+} else {
+	sq.Timeout = time.Second * 3 // or another safe fallback
+}
465-467: Avoid potential PII exposure in debug logs.
Logging the entire configuration as raw YAML, including Metadata, can leak sensitive or private data. Consider removing or sanitizing the debug output.

-log.Debug(config.ToRawYaml(cfg))
+// log.Debug("Configuration loaded and sanitized")
111-111: Doc mismatch for local port default behavior.
The comment states it defaults to TargetPort if not set, but the actual code sets pf.LocalPort to a constant localPort if pf.LocalPort == 0. Update the doc or logic to match.

-	The local port number; if not set, it defaults to TargetPort.
+	The local port number; if not set, it defaults to 8081.

 if pf.LocalPort == 0 {
 	pf.LocalPort = localPort
 }

Also applies to: 338-340
1-464: Overall configuration setup looks sound.
No further structural issues beyond the existing concerns about indexing, PII in logs, and the doc mismatch.
Also applies to: 468-469
tests/v2/e2e/kubernetes/status.go (8)
1-18: License & build tag look good.
No issues found with the standard header and build directive.
19-35: Imports and package declaration are consistent.
All imports appear valid, and the package naming aligns with the project structure.
36-74: ResourceStatus definitions are well-organized.
The comprehensive set of status constants and their descriptive map improve clarity and maintainability.
116-144: Nicely enumerated status coverage per resource type.
The function PossibleStatuses cleanly restricts valid statuses for each Kubernetes resource, reducing the chance of unsupported checks.
185-246: Deployment condition checks are thorough.
You correctly handle paused states, ObservedGeneration mismatches, and multiple condition types. Consider monitoring ProgressDeadlineExceeded as well, since it can indicate a stuck rollout.
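A sketch of such a check; it relies on the reason string the Deployment controller sets and is not part of the PR:

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// A rollout is stuck when Progressing is False with this reason; this
// mirrors what `kubectl rollout status` reports.
func rolloutStuck(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing &&
			c.Status == corev1.ConditionFalse &&
			c.Reason == "ProgressDeadlineExceeded" {
			return true
		}
	}
	return false
}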
248-301: StatefulSet evaluation is comprehensive.
The readiness logic and revision checks are well-implemented.
303-340: DaemonSet condition handling is sufficient, but verify unschedulable pods.
If node resources are inadequate, the DaemonSet might never reach its desired count. Adding a quick node-scheduling check could improve diagnostics.
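For example, a quick check along these lines (illustrative, not the PR's code):

import appsv1 "k8s.io/api/apps/v1"

// Unschedulable pods show up as a persistent gap between the desired
// and currently scheduled counts, or as unavailable pods.
func daemonSetFullyScheduled(ds *appsv1.DaemonSet) bool {
	s := ds.Status
	return s.DesiredNumberScheduled > 0 &&
		s.CurrentNumberScheduled == s.DesiredNumberScheduled &&
		s.NumberUnavailable == 0
}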
402-451: Add container-level status checks for Pod.
A Pod might report phase "Running" while its containers are in CrashLoopBackOff. Reviewing pod.Status.ContainerStatuses can provide more accurate health data.
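A sketch of such a check (an illustrative helper, not the PR's code):

import corev1 "k8s.io/api/core/v1"

// Collect per-container failure reasons that a "Running" phase hides,
// e.g. CrashLoopBackOff or ImagePullBackOff.
func unhealthyContainers(pod *corev1.Pod) (reasons []string) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil && cs.State.Waiting.Reason != "" {
			reasons = append(reasons, cs.Name+": "+cs.State.Waiting.Reason)
		}
	}
	return reasons
}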
		t.Errorf("failed to get IndexProperty %v", err)
	}
}
t.Logf("IndexProperty: %v", res.String())
Check for nil response before calling res.String().
After encountering an error, the response object (res) might be nil. Calling res.String() in the same block could trigger a nil-pointer panic. Wrap the log call with a safety check:
-if err != nil {
...
-}
-t.Logf("IndexProperty: %v", res.String())
+if err != nil {
+ ...
+} else if res != nil {
+ t.Logf("IndexProperty: %v", res.String())
+}

Apply similarly to all places that log res.String() immediately after an error.
Also applies to: 81-81, 129-129, 173-173, 217-217, 260-260, 293-293, 307-307, 341-341, 371-371, 405-405, 429-429, 442-442
// --------------------------------------------------------------------------------
// WaitForStatus waits for a Kubernetes resource to reach a specific status.
// The function checks the status of the resource at regular intervals and returns
// the object, a boolean indicating if the status matched, and an error (if any).
// The function supports Deployment, StatefulSet, DaemonSet, Job, CronJob, Pod,
// PersistentVolumeClaim, Service, and Ingress.
// --------------------------------------------------------------------------------
func WaitForStatus[T Object, L ObjectList, C NamedObject, I ObjectInterface[T, L, C]](
	ctx context.Context, client I, name string, statuses ...ResourceStatus,
) (obj T, matched bool, err error) {
	if !slices.ContainsFunc(PossibleStatuses(obj), func(st ResourceStatus) bool {
		return slices.Contains(statuses, st)
	}) {
		return obj, false, errors.ErrStatusPatternNeverMatched
	}

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return obj, false, ctx.Err()
		case <-ticker.C:
			obj, err = client.Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return obj, false, err
			}
			status, info, err := CheckResourceState(obj)
			if err != nil {
				return obj, false, errors.Wrap(err, info)
			}
			for _, st := range statuses {
				if st == status {
					return obj, true, nil
				}
			}
		}
	}
}
🛠️ Refactor suggestion
Consider a watch-based approach for real-time updates.
Polling with a Ticker works reliably, but switching to a Kubernetes watch/Informer pattern would reduce unnecessary polling overhead and allow event-driven updates.
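As a concrete illustration, a watch-based wait for a single Deployment; watch re-establishment on expiry and the generic typing of WaitForStatus are deliberately elided:

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// Block until ready(d) reports true for the named Deployment, reacting
// to server-pushed events instead of polling on a ticker.
func waitForDeployment(
	ctx context.Context, cs kubernetes.Interface, ns, name string,
	ready func(*appsv1.Deployment) bool,
) error {
	w, err := cs.AppsV1().Deployments(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch for %s/%s closed; caller should retry", ns, name)
			}
			if d, ok := ev.Object.(*appsv1.Deployment); ok && ready(d) {
				return nil
			}
		}
	}
}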
Description
SSIA
Related Issue
Versions
Checklist
Special notes for your reviewer
Summary by CodeRabbit
New Features
Chores