chore: upgrade all dependencies to latest versions #245
Conversation
pgconn is now imported from pgx/v5/pgconn instead of the standalone pgconn module, and stdlib driver registration moves from pgx/v4/stdlib to pgx/v5/stdlib. Also removes the obsolete go1.16 build constraint.
- Replace grpc_middleware.ChainUnaryServer with native grpc.ChainUnaryInterceptor
- Use recovery interceptor from go-grpc-middleware/v2
- Remove deprecated grpc_ctxtags and grpc_logrus middleware
- Switch from otelgrpc.UnaryServerInterceptor to grpc.StatsHandler(otelgrpc.NewServerHandler())
- Replace deprecated grpc.DialContext with grpc.NewClient
- Add List() method to HealthHandler for the newer grpc health API
- Rewrite interceptor tests without grpc-middleware v1 test utilities, testing interceptors directly
API is identical between v2 and v3 — pure import path change across all 7 files that use the diff library for asset changelog tracking.
- Swap all go-elasticsearch imports from v7 to v8
- Remove deprecated olivere/elastic/v7 dependency entirely
- Replace olivere query builders (BoolQuery, MultiMatchQuery, TermQuery, FunctionScoreQuery, CompletionSuggester, etc.) with JSON map construction using a lightweight boolQuery helper
- Define local totalHits and suggestionOption structs to replace olivere response types
- NewRelic nrelasticsearch-v7 transport kept as-is since it wraps http.RoundTripper and works with the v8 client
Migrate to salt v0.7.0 restructured packages:
- salt/log -> salt/observability/logger (aliased as log)
- salt/printer -> salt/cli/printer
- salt/term -> salt/cli/printer (merged)
- salt/mux -> salt/server/mux
- salt/cmdx -> salt/cli/commander + salt/config
- salt/version -> salt/cli/releaser
- salt/config: remove ConfigFileNotFoundError handling

Also fix go vet errors from stricter format string checking in newer Go: use the %s format verb for term.Bluef/Greenf/Redf/Cyanf calls, and status.Error instead of status.Errorf for non-format strings.
Major upgrades:
- go 1.20 -> 1.25 (toolchain go1.26.1)
- jackc/pgx v4 -> v5
- grpc-middleware v1 -> v2
- r3labs/diff v2 -> v3
- go-elasticsearch v7 -> v8
- raystack/salt v0.3 -> v0.7

Notable minor/patch upgrades:
- google.golang.org/grpc v1.55 -> v1.79
- google.golang.org/protobuf v1.30 -> v1.36
- go.opentelemetry.io/otel v1.16 -> v1.42
- spf13/cobra v1.4 -> v1.10
- grpc-gateway/v2 v2.11 -> v2.28
- newrelic/go-agent/v3 v3.15 -> v3.42
- stretchr/testify v1.8 -> v1.11
- And 20+ other dependency updates

Removed: olivere/elastic/v7 (deprecated)
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

This PR updates dependencies and internal integrations across the repository: Go toolchain bump and many module upgrades (elasticsearch v7→v8, pgx v4→v5, r3labs/diff v2→v3), standardizes logging on github.com/raystack/salt/observability/logger, replaces terminal output with github.com/raystack/salt/cli/printer, switches config loading to config.NewLoader, rewrites Elasticsearch query construction to manual JSON, modifies gRPC server interceptors to use grpc-middleware v2 recovery and OpenTelemetry stats handling, and adjusts client connection creation to grpc.NewClient.

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✅ Passed checks (2 passed)
…lient Fixes staticcheck SA1019 lint warnings.
- Bump go-version from 1.20 to 1.25 across all workflows (lint, test, main, release) to match the go.mod requirement
- Upgrade the test Elasticsearch service image from 7.17.9 to 8.17.1
Read Go version from go.mod so CI always stays in sync with the project's declared Go version, matching the pattern used in raystack/salt.
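One common way to implement this is to parse the go directive out of go.mod in a setup step. A sketch (the workflow may instead use actions/setup-go's go-version-file: go.mod input, which reads the file natively):

```shell
# Sketch: read the Go version from go.mod so CI never drifts from the module.
cd "$(mktemp -d)"
printf 'module example.com/demo\n\ngo 1.25.0\n' > go.mod   # sample go.mod for the demo

go_version="$(awk '/^go / {print $2; exit}' go.mod)"
echo "go-version=${go_version}"   # -> go-version=1.25.0 (e.g. write to $GITHUB_OUTPUT)
```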
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/store/elasticsearch/testutil/elastic_search.go (1)
12-33: ⚠️ Potential issue | 🟠 Major

Upgrade Elasticsearch test server image to v8 or downgrade client to v7.

Line 12 imports the v8 client (github.com/elastic/go-elasticsearch/v8), but line 33 runs Elasticsearch 7.17.6. The v8 client is not officially backward compatible with v7 servers; while it may work with v7.17.x, this pairing lacks guaranteed support and exercises cross-major compatibility rather than validating the intended v8 target environment. Either upgrade the server image to docker.elastic.co/elasticsearch/elasticsearch:8.x or revert the client import to v7 (which is already available as a transitive dependency).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/elasticsearch/testutil/elastic_search.go` around lines 12 - 33, The test harness currently mixes the v8 Go client import (github.com/elastic/go-elasticsearch/v8) with an Elasticsearch 7.17.6 Docker image in the elasticSearchCmdLine variable; update the server image to an 8.x release (replace "docker.elastic.co/elasticsearch/elasticsearch:7.17.6" in elasticSearchCmdLine with a stable 8.x tag such as "docker.elastic.co/elasticsearch/elasticsearch:8.10.2") so the test server matches the v8 client, and adjust any incompatible environment flags in elasticSearchCmdLine if needed for Elasticsearch 8.x.
🧹 Nitpick comments (1)
internal/store/elasticsearch/discovery_search_repository.go (1)
143-163: Hardcoded suggester size may limit flexibility.

The completion suggester uses a hardcoded size: 5 (line 151), while cfg.MaxResults is only used for the outer search size (lines 86-87). If the previous implementation used cfg.MaxResults for suggestions, this could be a behavioral change.

Consider using cfg.MaxResults for the suggester size:

```diff
 func (repo *DiscoveryRepository) buildSuggestQuery(ctx context.Context, cfg asset.SearchConfig) (io.Reader, error) {
+	suggesterSize := 5
+	if cfg.MaxResults > 0 {
+		suggesterSize = cfg.MaxResults
+	}
 	searchBody := map[string]interface{}{
 		"suggest": map[string]interface{}{
 			suggesterName: map[string]interface{}{
 				"prefix": cfg.Text,
 				"completion": map[string]interface{}{
 					"field":           "name.suggest",
 					"skip_duplicates": true,
-					"size":            5,
+					"size":            suggesterSize,
 				},
 			},
 		},
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/elasticsearch/discovery_search_repository.go` around lines 143 - 163, The completion suggester in buildSuggestQuery currently hardcodes "size": 5 which can change behavior; update buildSuggestQuery to use cfg.MaxResults (falling back to a sensible default if zero) for the suggester size so the suggester matches caller-configured limits—locate buildSuggestQuery and the suggester map keyed by suggesterName and replace the literal 5 with the computed value derived from cfg.MaxResults.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e5e43f90-b1a2-4731-8c90-b59bbcf64d7f
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (55)
cli/assets.go, cli/config.go, cli/discussions.go, cli/lineage.go, cli/namespace.go, cli/root.go, cli/search.go, cli/server.go, cli/version.go,
core/asset/asset.go, core/asset/asset_test.go, core/namespace/service.go, core/namespace/service_test.go, core/user/service.go, core/user/service_test.go,
go.mod,
internal/client/client.go, internal/server/health/health.go, internal/server/server.go,
internal/server/v1beta1/asset.go, internal/server/v1beta1/asset_test.go, internal/server/v1beta1/comment_test.go, internal/server/v1beta1/discussion_test.go, internal/server/v1beta1/lineage_test.go, internal/server/v1beta1/namespace_test.go, internal/server/v1beta1/search_test.go, internal/server/v1beta1/server.go, internal/server/v1beta1/tag_template_test.go, internal/server/v1beta1/tag_test.go, internal/server/v1beta1/type_test.go, internal/server/v1beta1/user_test.go,
internal/store/elasticsearch/discovery_repository_test.go, internal/store/elasticsearch/discovery_search_repository.go, internal/store/elasticsearch/discovery_search_repository_test.go, internal/store/elasticsearch/es.go, internal/store/elasticsearch/es_test.go, internal/store/elasticsearch/options.go, internal/store/elasticsearch/testutil/elastic_search.go,
internal/store/postgres/asset_model.go, internal/store/postgres/asset_repository.go, internal/store/postgres/asset_repository_test.go, internal/store/postgres/discussion_repository_test.go, internal/store/postgres/lineage_repository_test.go, internal/store/postgres/namespace_repository_test.go, internal/store/postgres/postgres.go, internal/store/postgres/postgres_test.go, internal/store/postgres/star_repository_test.go, internal/store/postgres/tag_repository_test.go, internal/store/postgres/tag_template_repository_test.go, internal/store/postgres/user_repository_test.go,
pkg/grpc_interceptor/interceptor_test.go, pkg/grpc_interceptor/user_test.go, pkg/statsd/metric.go, pkg/statsd/statsd.go, pkg/telemetry/telemetry.go
💤 Files with no reviewable changes (1)
- pkg/grpc_interceptor/interceptor_test.go
```go
// Try loading from current directory first
err := LoadFromCurrentDir(&cfg)
if err != nil {
	// Fall back to app config directory
	loader := config.NewLoader(config.WithAppConfig("compass"))
	if loadErr := loader.Load(&cfg); loadErr != nil {
		return &cfg, ErrConfigNotFound
```
Don't hide invalid config files behind fallback.
This now falls back on any LoadFromCurrentDir error and then rewrites any app-config load failure to ErrConfigNotFound. A malformed local file or permission problem becomes indistinguishable from “not found”, so the CLI can ignore a broken config and continue with different settings. Please only fall back on the loader's file-missing case and return the original error otherwise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/config.go` around lines 102 - 108, The code currently falls back on any
error from LoadFromCurrentDir and maps all loader.Load failures to
ErrConfigNotFound; change the logic so LoadFromCurrentDir's non-nil error is
checked for a "not found" condition (use os.IsNotExist(err) or the loader's
file-not-found sentinel) and only then attempt the app-config fallback via
config.NewLoader; conversely, if LoadFromCurrentDir returns a different error,
return that original error immediately; likewise, when calling
loader.Load(&cfg), if loadErr is a file-not-found error return
ErrConfigNotFound, otherwise return loadErr so real parse/permission errors
surface instead of being hidden.
```go
loader := config.NewLoader(config.WithAppConfig("compass"))
if loadErr := loader.Load(&cfg); loadErr != nil {
```
Add config.WithEnvPrefix("COMPASS") to all config loader paths for consistency.
Environment variable overrides (COMPASS_*) are currently only applied when loading from the current directory. The app-config fallback (line 106) and the --config flag path (line 122) omit the env prefix, so COMPASS_* overrides are silently ignored in those scenarios.
Suggested change

```diff
- loader := config.NewLoader(config.WithAppConfig("compass"))
+ loader := config.NewLoader(
+ 	config.WithAppConfig("compass"),
+ 	config.WithEnvPrefix("COMPASS"),
+ )
```

```diff
- return config.NewLoader(config.WithFile(cfgFile)).Load(cfg)
+ return config.NewLoader(
+ 	config.WithFile(cfgFile),
+ 	config.WithEnvPrefix("COMPASS"),
+ ).Load(cfg)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
loader := config.NewLoader(
	config.WithAppConfig("compass"),
	config.WithEnvPrefix("COMPASS"),
)
if loadErr := loader.Load(&cfg); loadErr != nil {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/config.go` around lines 106 - 107, The config loader creation calls
(e.g., the loader := config.NewLoader(config.WithAppConfig("compass")) used
before Load(&cfg) and the loader used for the --config flag path) are missing
the environment prefix; update both NewLoader calls to include
config.WithEnvPrefix("COMPASS") alongside config.WithAppConfig("compass") (and
any other loader options) so that environment overrides (COMPASS_*) are applied
consistently regardless of which loader path is used.
```diff
 func LoadFromCurrentDir(cfg *Config) error {
-	var opts []config.LoaderOption
-	opts = append(opts,
-		config.WithPath("./"),
-		config.WithFile("config.yaml"),
-		config.WithEnvKeyReplacer(".", "_"),
-	)
-
-	return config.NewLoader(opts...).Load(cfg)
+	return config.NewLoader(
+		config.WithFile("./config.yaml"),
+		config.WithEnvPrefix("COMPASS"),
+	).Load(cfg)
 }
```
Align the local config filename with the CLI guidance.
This lookup now uses ./config.yaml, but cli/errors.go:9-18 tells users to create compass.yaml in the current directory. One of them has to change, or the documented local override path will never be found.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/config.go` around lines 114 - 118, The function LoadFromCurrentDir
currently loads "./config.yaml" but the user-facing guidance in cli/errors.go
instructs creating "compass.yaml"; update LoadFromCurrentDir to look for
"./compass.yaml" (i.e., change the config.WithFile argument) so the local
override filename matches the documented CLI guidance; ensure nothing else
relies on "config.yaml" or update the docs instead if you prefer the other name.
```
go.opentelemetry.io/otel/sdk v1.42.0
go.opentelemetry.io/otel/sdk/metric v1.42.0
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57
google.golang.org/grpc v1.79.2
```
🧩 Analysis chain: a web query confirms the first patched grpc-go version for GHSA-p77j-4mvh-x3m3 (GO-2026-4762) is v1.79.3; all versions before v1.79.3 are affected.
Bump grpc-go to v1.79.3 to address critical auth-bypass vulnerability.
Line 45 pins google.golang.org/grpc v1.79.2, which is vulnerable to GHSA-p77j-4mvh-x3m3 (GO-2026-4762). Upgrade to v1.79.3 before merge.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@go.mod` at line 45, Update the pinned gRPC module from
"google.golang.org/grpc v1.79.2" to "v1.79.3" to address the GHSA-p77j-4mvh-x3m3
(GO-2026-4762) vulnerability; change the version in go.mod and then run 'go get
google.golang.org/grpc@v1.79.3' (or 'go mod tidy') to refresh go.sum and ensure
the new version is applied across the build.
```
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/mcuadros/go-defaults v1.2.0 // indirect
github.com/microcosm-cc/bluemonday v1.0.6 // indirect
```
🧩 Analysis chain: a web query confirms bluemonday v1.0.16 and later fix GHSA-x95h-979x-cf3j (CVE-2021-42576) and GO-2022-0588; the recommended minimum safe version is v1.0.16, with v1.0.26 the latest stable release.
Upgrade bluemonday from v1.0.6 to v1.0.16 or later to fix GHSA-x95h-979x-cf3j and GO-2022-0588 XSS advisories.
bluemonday v1.0.6 is affected by known XSS vulnerabilities (GHSA-x95h-979x-cf3j and GO-2022-0588). Upgrade the transitive dependency to v1.0.16 or later (current stable: v1.0.26).
🧰 Tools
🪛 OSV Scanner (2.3.5)
[CRITICAL] 96-96: github.com/microcosm-cc/bluemonday 1.0.6: Cross-site scripting via leaked style elements in github.com/microcosm-cc/bluemonday
(GO-2022-0588)
[CRITICAL] 96-96: github.com/microcosm-cc/bluemonday 1.0.6: Policies not properly enforced in bluemonday
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@go.mod` at line 96, The go.mod currently pins the indirect dependency
"github.com/microcosm-cc/bluemonday" at v1.0.6 which is vulnerable; update the
version to v1.0.16 or later (recommend v1.0.26) by editing the go.mod entry for
github.com/microcosm-cc/bluemonday (replace v1.0.6 with v1.0.16+), then run `go
get github.com/microcosm-cc/bluemonday@v1.0.26` and `go mod tidy` to ensure the
transitive dependency is upgraded and the lockfile/module graph is consistent.
```diff
 func createConnection(ctx context.Context, cfg Config) (*grpc.ClientConn, error) {
-	opts := []grpc.DialOption{
+	return grpc.NewClient(cfg.Host,
 		grpc.WithTransportCredentials(insecure.NewCredentials()),
-		grpc.WithBlock(),
-	}
-
-	return grpc.DialContext(ctx, cfg.Host, opts...)
+	)
 }
```
🧩 Analysis chain: in grpc-go v1.79+, grpc.NewClient creates the ClientConn lazily in IDLE state; no I/O happens until conn.Connect() is called or the first RPC is issued. The recommended pattern for blocking behavior is to call conn.Connect() and loop on conn.WaitForStateChange(ctx, state) until the state reaches connectivity.Ready, under a context timeout.
createConnection ignores the timeout context and defers connection failures to first RPC.
The timeout context created at line 24 (2 seconds) is passed to createConnection at line 25 but never enforced. grpc.NewClient establishes connections lazily without respecting the context deadline, causing connection errors to surface at the first RPC instead of at the Create() call boundary. CLI call sites (cli/search.go, cli/assets.go, cli/namespace.go, and others) expect synchronous failures from Create() and will not detect connection issues until the first request is made.
Suggested fix
import (
"context"
+ "fmt"
"time"
compassv1beta1 "github.com/raystack/compass/proto/raystack/compass/v1beta1"
"google.golang.org/grpc"
+ "google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/metadata"
)
@@
func createConnection(ctx context.Context, cfg Config) (*grpc.ClientConn, error) {
- return grpc.NewClient(cfg.Host,
+ conn, err := grpc.NewClient(cfg.Host,
grpc.WithTransportCredentials(insecure.NewCredentials()),
)
+ if err != nil {
+ return nil, err
+ }
+
+ conn.Connect()
+ for st := conn.GetState(); st != connectivity.Ready; st = conn.GetState() {
+ if !conn.WaitForStateChange(ctx, st) {
+ _ = conn.Close()
+ return nil, fmt.Errorf("gRPC dial timeout: %w", ctx.Err())
+ }
+ }
+ return conn, nil
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func createConnection(ctx context.Context, cfg Config) (*grpc.ClientConn, error) {
	conn, err := grpc.NewClient(cfg.Host,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		return nil, err
	}

	conn.Connect()
	for st := conn.GetState(); st != connectivity.Ready; st = conn.GetState() {
		if !conn.WaitForStateChange(ctx, st) {
			_ = conn.Close()
			return nil, fmt.Errorf("gRPC dial timeout: %w", ctx.Err())
		}
	}
	return conn, nil
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/client/client.go` around lines 48 - 51, createConnection currently
calls grpc.NewClient which is lazy and ignores the provided ctx timeout, so
replace the call with a blocking dial that respects the context deadline: use
grpc.DialContext(ctx, cfg.Host,
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
(i.e., swap grpc.NewClient for grpc.DialContext and add grpc.WithBlock()) so the
2s timeout passed into createConnection is enforced and connection failures
surface synchronously from Create(); keep the function signature
createConnection(ctx context.Context, cfg Config) unchanged.
```go
"github.com/elastic/go-elasticsearch/v8"
"github.com/elastic/go-elasticsearch/v8/esapi"
"github.com/google/uuid"
"github.com/newrelic/go-agent/v3/integrations/nrelasticsearch-v7"
```
🧩 Analysis chain: a web query confirms the New Relic nrelasticsearch-v7 integration targets the go-elasticsearch v7 client only; no nrelasticsearch-v8 package exists, and v7/v8 are distinct modules with potentially breaking API changes, so the v7-specific instrumentation is not supported against the v8 client.
Remove or upgrade the New Relic Elasticsearch v7 instrumentation for compatibility with ES v8 client.
The code upgrades to go-elasticsearch v8 (lines 12-13) but continues using the New Relic nrelasticsearch-v7 integration (line 15), which is incompatible with v8. There is no nrelasticsearch-v8 package available. Either downgrade the Elasticsearch client back to v7 or remove the New Relic instrumentation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/store/elasticsearch/es.go` around lines 12 - 15, The import and
instrumentation using the New Relic v7 integration (package
"github.com/newrelic/go-agent/v3/integrations/nrelasticsearch-v7") is
incompatible with the upgraded Elasticsearch client v8; either revert the client
import back to v7 or remove/replace the New Relic instrumentation. Locate the
import of "github.com/newrelic/go-agent/v3/integrations/nrelasticsearch-v7" in
es.go and any usages (search for identifiers like nrelasticsearch,
NewRelicTransport, NewTransport, or similar) and then either (A) change the
go-elasticsearch import back to v7 to match the New Relic integration, or (B)
remove the New Relic import and all instrumentation wiring (transport
wrapper/initialization) so the code uses the plain v8 client; ensure any
references to the removed symbols are deleted or replaced accordingly and update
tests/initialization code that depended on the New Relic transport.
🧹 Nitpick comments (1)
.github/workflows/test.yml (1)
9-9: Use the official Elastic registry for the Elasticsearch 8.x image.

The image `elasticsearch:8.17.1` pulls from Docker Hub, but official Elasticsearch 8.x images are hosted on `docker.elastic.co`. This is inconsistent with the testutil code, which uses `docker.elastic.co/elasticsearch/elasticsearch:7.17.6`.

Suggested fix

     elasticsearch:
    -  image: elasticsearch:8.17.1
    +  image: docker.elastic.co/elasticsearch/elasticsearch:8.17.1

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test.yml at line 9, Replace the Docker Hub Elasticsearch image reference used in the workflow (the line containing "image: elasticsearch:8.17.1") with the official Elastic registry image by changing it to the docker.elastic.co path (matching the pattern used in testutil, e.g. use "docker.elastic.co/elasticsearch/elasticsearch:8.17.1"); verify the tag exists and update any related references or environment variables that assume the old image name so the workflow uses the official registry image consistently with testutil's "docker.elastic.co/elasticsearch/elasticsearch:7.17.6".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/test.yml:
- Line 9: Replace the Docker Hub Elasticsearch image reference used in the
workflow (the line containing "image: elasticsearch:8.17.1") with the official
Elastic registry image by changing it to the docker.elastic.co path (matching
the pattern used in testutil, e.g. use
"docker.elastic.co/elasticsearch/elasticsearch:8.17.1"); verify the tag exists
and update any related references or environment variables that assume the old
image name so the workflow uses the official registry image consistently with
testutil's "docker.elastic.co/elasticsearch/elasticsearch:7.17.6".
📒 Files selected for processing (4)
- .github/workflows/lint.yml
- .github/workflows/main.yml
- .github/workflows/release.yml
- .github/workflows/test.yml
✅ Files skipped from review due to trivial changes (3)
- .github/workflows/lint.yml
- .github/workflows/release.yml
- .github/workflows/main.yml
- Add Go module caching (cache: true) to all workflows
- Add Docker layer caching via GitHub Actions cache in main workflow
- Add Elasticsearch health check to avoid test flakiness
- Skip CI for docs-only changes (paths-ignore for *.md and docs/**)
- Remove unnecessary fetch-depth: 0 from lint and main workflows
- Remove duplicate go mod tidy (Makefile clean no longer depends on tidy)
- Update Dockerfile.dev to use parameterized Go version (1.25) and alpine 3.21
- Fix JSONMap.Scan() to handle NULL JSONB values from PostgreSQL
Restrict push trigger to main branch only for test and lint workflows. PRs are covered by the pull_request trigger, preventing double runs when pushing to a branch with an open PR.
Auto-cancels in-progress runs on PR branches when a newer push arrives. Main branch runs are never cancelled to ensure every merge is validated.
This reverts commit 441c374.
This reverts commit ac37120.
- Remove apt-get install build-essential from test workflow (gcc is pre-installed on ubuntu-latest runners)
- Add Go module caching to release workflow
- JSONMap.Scan: return nil (not empty map) for NULL DB values, preserving
nil semantics for Data and Labels fields
- toVersionedAsset/toAssetVersion: guard JSONText.Unmarshal against "{}"
produced by sqlx v1.4.0 for NULL columns, fixing []user.User unmarshal
- ES testutil: use DELETE /* instead of /_all for ES 8.x compatibility
where action.destructive_requires_name defaults to true
- Update local ES docker image from 7.17.6 to 8.17.1 to match CI
- Makefile: fix wrong project name, go fmt, clean, goreleaser flags, setup deps, and broken doc target
- Dockerfiles: combine RUN layers, fix deprecated AS syntax
- docker-compose: upgrade ES to 8.x matching CI environment
- .dockerignore: add .git, docs, test, dist to reduce build context
- main.go: replace brittle RPC string parsing with grpc/status
Use DELETE /*,-.* with expand_wildcards=open,closed to avoid deleting protected system indices that cause 400 errors in ES 8.x.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/store/elasticsearch/testutil/elastic_search.go (2)
161-167: ⚠️ Potential issue | 🟠 Major

Close the purge response body to avoid connection leaks.

At line 161, `cli.Perform(req)` returns an HTTP response whose body is never closed. Repeated `NewClient()` calls can leak resources and degrade test stability.

💡 Proposed fix

     import (
     	"fmt"
    +	"io"
     	"net/http"
     	"net/url"
     	"os"
     	"os/exec"
     	"strings"
     	"time"
    @@
     	res, err := cli.Perform(req)
     	if err != nil {
     		return err
     	}
    +	defer func() {
    +		_, _ = io.Copy(io.Discard, res.Body)
    +		_ = res.Body.Close()
    +	}()
     	if res.StatusCode > 299 {
     		return fmt.Errorf("elasticsearch server returned status code %d", res.StatusCode)
     	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/elasticsearch/testutil/elastic_search.go` around lines 161 - 167, The HTTP response returned by cli.Perform(req) (variable res) isn’t closed, causing connection leaks; after confirming err == nil, immediately schedule closing the response body (e.g., defer res.Body.Close() or explicitly close it once done) before returning or checking res.StatusCode in the purge function that calls cli.Perform(req) so the response body is always closed even on non-2xx status.
127-129: ⚠️ Potential issue | 🟡 Minor

Update the stale `NewClient` purge comment.

Lines 127-129 still document `DELETE /_all`, but purge now uses `DELETE /*,-.*`. This is minor, but misleading during maintenance.

📝 Proposed fix

    -// Calling this method issues a DELETE /_all call to the elasticsearch
    +// Calling this method issues a DELETE /*,-.* call to the elasticsearch
     // server, effectively resetting it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/store/elasticsearch/testutil/elastic_search.go` around lines 127 - 129, Update the doc comment for NewClient in elastic_search.go to accurately describe the current purge behavior: replace the outdated reference to "DELETE /_all" with the actual request used now ("DELETE /*,-.*") and briefly note that this effectively resets the index while preserving system indices; ensure the comment references the NewClient function so maintainers can find it easily.
🧹 Nitpick comments (7)
main.go (1)
21-24: Clarify behavior when config loading fails.

If `LoadConfig()` fails, the error is printed but execution continues with a potentially nil/default `cliConfig`. This silent degradation could be intentional, but consider either:
- Adding a comment explaining this is expected behavior (using defaults)
- Or exiting early if configuration is required
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main.go` around lines 21 - 24, The code prints errors from cli.LoadConfig() but continues using cliConfig, which can lead to unexpected behavior; update the main function to either (A) treat configuration as required: replace the current err handling with a clear log of the error and terminate the process (e.g., call os.Exit(1)) when LoadConfig() returns an error so cliConfig is never used nil, or (B) explicitly document the fallback behavior by adding a comment near the LoadConfig() call stating that falling back to defaults is intentional and ensure subsequent code defensively handles default/empty cliConfig; reference the LoadConfig function and the cliConfig variable when implementing the chosen change.

Dockerfile (3)
1-1: Pin Alpine version for reproducible builds.

Using `alpine:latest` can lead to unpredictable behavior as the base image changes over time. Consider pinning to a specific version (e.g., `alpine:3.21`) to match `Dockerfile.dev` and ensure reproducible builds.

Suggested fix

    -FROM alpine:latest
    +FROM alpine:3.21

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` at line 1, The FROM instruction currently uses a floating tag "alpine:latest"; change the Dockerfile's FROM line to pin a specific Alpine version (for example "alpine:3.21") to ensure reproducible builds and match Dockerfile.dev—update the FROM instruction accordingly.
1-6: Consider adding a non-root user for security.

The container runs as root by default. Adding a dedicated non-root user improves security posture by limiting potential damage from container escapes.

Suggested fix

     FROM alpine:3.21
     COPY compass /usr/bin/compass
    -RUN apk update && apk add --no-cache ca-certificates && rm -rf /var/cache/apk/*
    +RUN apk add --no-cache ca-certificates && \
    +    adduser -D -H compass
    +USER compass
     ENTRYPOINT ["compass"]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 1 - 6, Add a non-root user and run the binary as that user: create a dedicated user/group (e.g., "compass"), ensure the copied binary (/usr/bin/compass from the COPY step) and any required directories are owned by that user (chown), set a sensible HOME and drop root with the USER instruction before ENTRYPOINT, and keep file permissions minimal (executable only) so the container runs as the non-root "compass" user instead of root.
4-4: Simplify apk command - redundant operations.

When using `--no-cache`, the `apk update` and `rm -rf /var/cache/apk/*` steps are unnecessary. The `--no-cache` flag already fetches a fresh index without caching it locally.

Suggested fix

    -RUN apk update && apk add --no-cache ca-certificates && rm -rf /var/cache/apk/*
    +RUN apk add --no-cache ca-certificates

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` at line 4, The RUN line in the Dockerfile is doing redundant work: remove the explicit apk update and the rm -rf /var/cache/apk/* and use apk add with --no-cache only; update the RUN command that currently reads the apk operations so it installs ca-certificates via apk add --no-cache ca-certificates (no separate apk update or cache removal) to simplify and avoid unnecessary steps.

Dockerfile.dev (3)
3-3: Add `--no-cache` to builder stage `apk add`.

The builder stage's `apk add make` is missing `--no-cache`, which leaves cached index files in the layer. While this is a builder stage (discarded in multi-stage), it still increases build cache size unnecessarily.

Suggested fix

    -RUN apk add make
    +RUN apk add --no-cache make

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile.dev` at line 3, The Dockerfile's builder stage uses the RUN instruction "apk add make" which leaves cached index files in the layer; update that RUN instruction in the builder stage to pass the apk --no-cache flag so the package index is not stored in the image (i.e., invoke apk add with --no-cache for installing make).
10-10: Simplify apk command - redundant operations.

Same as `Dockerfile`: when using `--no-cache`, both `apk update` and `rm -rf /var/cache/apk/*` are unnecessary.

Suggested fix

    -RUN apk update && apk add --no-cache ca-certificates libc6-compat && rm -rf /var/cache/apk/*
    +RUN apk add --no-cache ca-certificates libc6-compat

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile.dev` at line 10, The RUN line in Dockerfile.dev currently runs redundant operations; remove the unnecessary apk update and rm -rf /var/cache/apk/* when using --no-cache. Replace the existing RUN command with a single apk add --no-cache invocation that installs ca-certificates and libc6-compat (i.e., simplify the RUN step that installs packages to only use apk add --no-cache ca-certificates libc6-compat).
8-12: Consider adding a non-root user for security.

Per static analysis (DS-0002), the container runs as root. For a development image this may be acceptable, but adding a non-root user improves security consistency with production practices.

Suggested fix

     FROM alpine:3.21
     COPY --from=builder /build/compass /usr/bin/compass
    -RUN apk update && apk add --no-cache ca-certificates libc6-compat && rm -rf /var/cache/apk/*
    +RUN apk add --no-cache ca-certificates libc6-compat && \
    +    adduser -D -H compass
    +USER compass
     ENTRYPOINT ["compass"]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile.dev` around lines 8 - 12, The image currently runs as root; add a non-root user/group and switch to it after copying the binary to improve security. Create a dedicated user (e.g., app or compass) and group in the Dockerfile using Alpine tools (addgroup/adduser or adduser -S -G), chown the installed binary (/usr/bin/compass) and any runtime dirs to that user, and add a USER instruction before the ENTRYPOINT so the container runs as the non-root user while keeping ENTRYPOINT ["compass"] unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Makefile`:
- Around line 73-74: Pin the installed tool versions in the Makefile to avoid
drift: change the go install lines that currently use
github.com/vektra/mockery/v2@latest and github.com/bufbuild/buf/cmd/buf@latest
to explicit versions—set mockery to v2.20.2 to match the committed mocks (or
update all mocks if you intend to upgrade) and pick a stable buf v1.x release
(e.g., v1.66.1) instead of `@latest` so CI and local environments remain
consistent.
---
Outside diff comments:
In `@internal/store/elasticsearch/testutil/elastic_search.go`:
- Around line 161-167: The HTTP response returned by cli.Perform(req) (variable
res) isn’t closed, causing connection leaks; after confirming err == nil,
immediately schedule closing the response body (e.g., defer res.Body.Close() or
explicitly close it once done) before returning or checking res.StatusCode in
the purge function that calls cli.Perform(req) so the response body is always
closed even on non-2xx status.
- Around line 127-129: Update the doc comment for NewClient in elastic_search.go
to accurately describe the current purge behavior: replace the outdated
reference to "DELETE /_all" with the actual request used now ("DELETE /*,-.*")
and briefly note that this effectively resets the index while preserving system
indices; ensure the comment references the NewClient function so maintainers can
find it easily.
---
Nitpick comments:
In `@Dockerfile`:
- Line 1: The FROM instruction currently uses a floating tag "alpine:latest";
change the Dockerfile's FROM line to pin a specific Alpine version (for example
"alpine:3.21") to ensure reproducible builds and match Dockerfile.dev—update the
FROM instruction accordingly.
- Around line 1-6: Add a non-root user and run the binary as that user: create a
dedicated user/group (e.g., "compass"), ensure the copied binary
(/usr/bin/compass from the COPY step) and any required directories are owned by
that user (chown), set a sensible HOME and drop root with the USER instruction
before ENTRYPOINT, and keep file permissions minimal (executable only) so the
container runs as the non-root "compass" user instead of root.
- Line 4: The RUN line in the Dockerfile is doing redundant work: remove the
explicit apk update and the rm -rf /var/cache/apk/* and use apk add with
--no-cache only; update the RUN command that currently reads the apk operations
so it installs ca-certificates via apk add --no-cache ca-certificates (no
separate apk update or cache removal) to simplify and avoid unnecessary steps.
In `@Dockerfile.dev`:
- Line 3: The Dockerfile's builder stage uses the RUN instruction "apk add make"
which leaves cached index files in the layer; update that RUN instruction in the
builder stage to pass the apk --no-cache flag so the package index is not stored
in the image (i.e., invoke apk add with --no-cache for installing make).
- Line 10: The RUN line in Dockerfile.dev currently runs redundant operations;
remove the unnecessary apk update and rm -rf /var/cache/apk/* when using
--no-cache. Replace the existing RUN command with a single apk add --no-cache
invocation that installs ca-certificates and libc6-compat (i.e., simplify the
RUN step that installs packages to only use apk add --no-cache ca-certificates
libc6-compat).
- Around line 8-12: The image currently runs as root; add a non-root user/group
and switch to it after copying the binary to improve security. Create a
dedicated user (e.g., app or compass) and group in the Dockerfile using Alpine
tools (addgroup/adduser or adduser -S -G), chown the installed binary
(/usr/bin/compass) and any runtime dirs to that user, and add a USER instruction
before the ENTRYPOINT so the container runs as the non-root user while keeping
ENTRYPOINT ["compass"] unchanged.
In `@main.go`:
- Around line 21-24: The code prints errors from cli.LoadConfig() but continues
using cliConfig, which can lead to unexpected behavior; update the main function
to either (A) treat configuration as required: replace the current err handling
with a clear log of the error and terminate the process (e.g., call os.Exit(1))
when LoadConfig() returns an error so cliConfig is never used nil, or (B)
explicitly document the fallback behavior by adding a comment near the
LoadConfig() call stating that falling back to defaults is intentional and
ensure subsequent code defensively handles default/empty cliConfig; reference
the LoadConfig function and the cliConfig variable when implementing the chosen
change.
📒 Files selected for processing (12)
- .dockerignore
- .github/workflows/lint.yml
- .github/workflows/main.yml
- .github/workflows/release.yml
- .github/workflows/test.yml
- Dockerfile
- Dockerfile.dev
- Makefile
- docker-compose.yaml
- internal/store/elasticsearch/testutil/elastic_search.go
- internal/store/postgres/asset_model.go
- main.go
✅ Files skipped from review due to trivial changes (1)
- .dockerignore
🚧 Files skipped from review as they are similar to previous changes (5)
- .github/workflows/release.yml
- .github/workflows/main.yml
- internal/store/postgres/asset_model.go
- .github/workflows/lint.yml
- .github/workflows/test.yml
    go install github.com/vektra/mockery/v2@latest
    go install github.com/bufbuild/buf/cmd/buf@latest
🧩 Analysis chain
🌐 Web query:
What is the latest stable version of vektra/mockery and bufbuild/buf?
💡 Result:
The latest stable version of vektra/mockery is v3.7.0 (released March 6, 2026). The latest stable version of bufbuild/buf is v1.66.1 (released March 9, 2026).
Citations:
- 1: https://github.com/vektra/mockery/releases
- 2: https://newreleases.io/project/github/vektra/mockery/release/v3.7.0
- 3: https://pkg.go.dev/github.com/vektra/mockery/v3
- 4: https://github.com/bufbuild/buf/releases
🏁 Script executed:
# Check the Makefile context around lines 73-74
    head -80 Makefile | tail -15

Repository: raystack/compass
🏁 Script executed:
# Look for go.mod and check if it has any version constraints
find . -name "go.mod" -o -name "go.sum" | head -5Repository: raystack/compass
Length of output: 77
🏁 Script executed:
# Check test.yml to understand the CI setup mentioned
find . -name "test.yml" -o -name ".github" -type dRepository: raystack/compass
Length of output: 98
🏁 Script executed:
# Search for any documentation or comments about tool versions
rg -i "mockery|buf.*version" --max-count=20Repository: raystack/compass
Length of output: 5498
🏁 Script executed:
    cat .github/workflows/test.yml
🏁 Script executed:
# Check go.mod for mockery/buf references
    cat go.mod | head -50
🏁 Script executed:
# Search for any comments or documentation about why v2 is used
rg -i "mockery.*v2|mockery.*v3" -B 2 -A 2Repository: raystack/compass
Length of output: 3855
Pin mockery version to match committed mocks; use caution with buf version selection.
Using @latest for mockery and buf creates version drift between local development and CI. Your CI runs make test without make setup, relying on committed mocks (all generated with mockery v2.20.2). Version inconsistencies between local and committed mocks will cause subtle test failures.
However, the suggested versions need adjustment:
- Mockery v2.50.0 may generate different mock output than the committed v2.20.2, creating drift. Pin to v2.20.2 to match committed code, or update all committed mocks if upgrading.
- Mockery v3.7.0 is available but represents a major version change—verify compatibility before adopting.
- Buf v1.50.0 is reasonable but v1.66.1 is latest in the v1 series.
♻️ Suggested change (conservative approach)
- go install github.com/vektra/mockery/v2@latest
- go install github.com/bufbuild/buf/cmd/buf@latest
+ go install github.com/vektra/mockery/v2@v2.20.2
+ go install github.com/bufbuild/buf/cmd/buf@v1.50.0

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    go install github.com/vektra/mockery/v2@v2.20.2
    go install github.com/bufbuild/buf/cmd/buf@v1.50.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Makefile` around lines 73 - 74, Pin the installed tool versions in the
Makefile to avoid drift: change the go install lines that currently use
github.com/vektra/mockery/v2@latest and github.com/bufbuild/buf/cmd/buf@latest
to explicit versions—set mockery to v2.20.2 to match the committed mocks (or
update all mocks if you intend to upgrade) and pick a stable buf v1.x release
(e.g., v1.66.1) instead of `@latest` so CI and local environments remain
consistent.
Instead of using wildcard DELETE which fails on ES 8.x (protected system indices and empty-index edge case), explicitly list non-system indices via _cat/indices and delete them by name.
Remove unused BuildCommit/BuildDate ldflags, drop 386/arm arch targets, remove unnecessary git brew dependency, and fix YAML comment syntax in config.yaml.
Pull Request Test Coverage Report for Build 23673089442

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

💛 - Coveralls
Summary
- `pgconn` now imported from `pgx/v5/pgconn`, stdlib driver from `pgx/v5/stdlib`
- `grpc.ChainUnaryInterceptor`, `otelgrpc.NewServerHandler()` stats handler, `grpc.NewClient`, rewritten interceptor tests
- `olivere/elastic/v7` query builders replaced with JSON map construction
- salt v0.7 package moves (`log` → `observability/logger`, `term`/`printer` → `cli/printer`, `cmdx` → `cli/commander`, `mux` → `server/mux`, `version` → `cli/releaser`)

Removed

- `olivere/elastic/v7` (deprecated, no v8 planned)
- `go-grpc-middleware` v1 logging/tags/testing packages

Additional fixes

- `go vet` errors from stricter format string checking in newer Go
- Added `List()` to gRPC health handler for newer health API
- Replaced `status.Errorf` with `status.Error` for non-format strings

Test plan

- `go build ./...` passes
- `go vet ./...` passes
- `go test` passes (excluding docker-dependent store tests)