Conversation
… unbounded cache Prevent silent unbounded cache creation when both maxEntries and maxBytes are non-positive: newCacheStore now returns nil, WithCacheLimits skips middleware registration, and WithCache now defaults to a 1,000-entry LRU cap when called with only a TTL argument. Co-Authored-By: Virgil <virgil@lethean.io>
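The guard described above can be sketched as follows; the real store is LRU-backed, so the types here are hypothetical stand-ins and only the nil-on-no-limits behaviour mirrors the change:

```go
package main

import "fmt"

// cacheStore is a hypothetical stand-in for the real LRU-backed store.
type cacheStore struct {
	maxEntries int
	maxBytes   int64
}

// newCacheStore mirrors the guard: when both limits are non-positive,
// return nil instead of silently creating an unbounded cache.
func newCacheStore(maxEntries int, maxBytes int64) *cacheStore {
	if maxEntries <= 0 && maxBytes <= 0 {
		return nil
	}
	return &cacheStore{maxEntries: maxEntries, maxBytes: maxBytes}
}

func main() {
	fmt.Println(newCacheStore(0, 0) == nil)    // both limits non-positive: nil
	fmt.Println(newCacheStore(1000, 0) != nil) // an entry cap alone is enough
}
```

Callers such as WithCacheLimits can then skip middleware registration whenever the constructor returns nil.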
…re primitives Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath, errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim, core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(), core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives. Framework boundary exceptions preserved where stdlib types are required by external interfaces (Gin, net/http, CGo, Wails, bubbletea). Co-Authored-By: Virgil <virgil@lethean.io>
- chat_completions.go: ChatCompletionRequest/Response/Chunk types, POST /v1/chat/completions handler with SSE streaming, ModelResolver, ThinkingExtractor, calibrated defaults, OpenAI-compatible error shape - api.go: wires the chat endpoint into the gateway From codex spark-medium pass, 851 lines. Co-Authored-By: Virgil <virgil@lethean.io>
- options.go: new WithChatCompletions(resolver) and WithChatCompletionsPath(path); api.New(...) now auto-mounts at /v1/chat/completions when a resolver is configured (previously the resolver could be attached but never mounted, which would have panicked Gin)
- chat_completions.go: fixed a missing net/http import, a ModelType dropped during discovery, a Retry-After header silently lost when set after c.JSON, swapped OpenAI error type/code fields, a swapped validate call site, a redundant nil check, and a builder length read before the nil-receiver check
- openapi.go: effective*Path helpers surface an explicit path even when the corresponding Enabled flag is false so CLI callers still get x-*-path extensions; /swagger is always in the authentik public paths
- chat_completions_test.go: Good/Bad/Ugly coverage for new options, validation, no-resolver behaviour
- openapi_test.go: fix a stale assertion for CacheEnabled-gated X-Cache
- go.mod: bump dappco.re/go/core/cli to v0.5.2
- Removed local go-io / go-log stubs — replace directives point to outer modules for a single source of truth
- Migrated forge.lthn.ai/core/cli imports to dappco.re/go/core/cli across cmd/api/*.go + docs
Co-Authored-By: Virgil <virgil@lethean.io>
- webhook.go: HMAC-SHA256 WebhookSigner matching PHP WebhookSignature — sign/verify, X-Webhook-Signature / X-Webhook-Timestamp headers, VerifyRequest middleware helper, 5-minute default tolerance, secret generator (RFC §6)
- sunset.go: ApiSunsetWith(date, replacement, opts...) + WithSunsetNoticeURL; ApiSunset now emits API-Suggested-Replacement when a replacement is set; RouteDescription.NoticeURL surfaces API-Deprecation-Notice-URL (RFC §8)
- options.go + api.go + transport.go: WithWebSocket(gin.HandlerFunc) alongside the existing WithWSHandler(http.Handler); the gin form wins when both are supplied (RFC §2.2)
- openapi.go: apiSuggestedReplacement + apiDeprecationNoticeURL as reusable header components; NoticeURL on a RouteDescription flips the operation's deprecated flag and emits the response header doc
- cmd/api/*.go: migrated from Cobra (cli.NewCommand, StringFlag) to the new path-based CLI API (c.Command + core.Options.String/Int/Bool); replaces the 1,422-line Cobra test suite with _Good/_Bad/_Ugly triads on the new surface
- webhook_test.go + sunset_test.go + websocket_test.go: full coverage
Co-Authored-By: Virgil <virgil@lethean.io>
…overy Implements gaps between RFC.md spec and code: - Export canonical webhook event identifiers (RFC §6) as Go constants: WebhookEventWorkspaceCreated, WebhookEventLinkClicked, etc. Plus WebhookEvents() and IsKnownWebhookEvent(name) helpers for SDK consumers and middleware validation. - Surface the chat completions endpoint (RFC §11.1) through TransportConfig (ChatCompletionsEnabled + ChatCompletionsPath) and the OpenAPI spec extensions (x-chat-completions-enabled, x-chat-completions-path) so clients can auto-discover the local OpenAI-compatible endpoint. - Add internal test coverage for chat completions sampling defaults (Gemma 4 calibrated temp=1.0, top_p=0.95, top_k=64, max_tokens=2048) and the ThinkingExtractor channel routing (RFC §11.6). Co-Authored-By: Virgil <virgil@lethean.io>
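A minimal sketch of the exported identifiers and the IsKnownWebhookEvent helper; the constant names come from the message above, but the string values and map layout here are assumptions:

```go
package main

import "fmt"

// String values are illustrative; the real identifiers follow RFC §6.
const (
	WebhookEventWorkspaceCreated = "workspace.created"
	WebhookEventLinkClicked      = "link.clicked"
)

var knownWebhookEvents = map[string]struct{}{
	WebhookEventWorkspaceCreated: {},
	WebhookEventLinkClicked:      {},
}

// IsKnownWebhookEvent lets middleware and SDK consumers reject
// unregistered event names before dispatch.
func IsKnownWebhookEvent(name string) bool {
	_, ok := knownWebhookEvents[name]
	return ok
}

func main() {
	fmt.Println(IsKnownWebhookEvent(WebhookEventLinkClicked)) // registered event
	fmt.Println(IsKnownWebhookEvent("link.exploded"))         // unknown event
}
```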
… path items
Implement the RFC framework routes listed in RFC.endpoints.md that were
missing from the Go engine:
- GET {basePath}/ on ToolBridge — returns the registered tool catalogue
(RFC.endpoints.md — "GET /v1/tools List available tools"). The listing
uses the standard OK envelope so clients can enumerate tools without
reading the OpenAPI document.
- WithOpenAPISpec / WithOpenAPISpecPath options + GET /v1/openapi.json
default mount (RFC.endpoints.md — "GET /v1/openapi.json Generated
OpenAPI spec"). The spec is generated once and served application/json
so SDK generators can fetch it without loading the Swagger UI bundle.
- OpenAPI path items for /v1/chat/completions and /v1/openapi.json so
SDK generators can bind to them directly instead of relying solely on
the x-chat-completions-path / x-openapi-spec-path vendor extensions.
Side effects:
- TransportConfig surfaces the new OpenAPISpecEnabled/OpenAPISpecPath
fields so callers can discover the endpoint without rebuilding the
engine.
- SpecBuilder gains OpenAPISpecEnabled / OpenAPISpecPath fields and
emits the matching x-openapi-spec-* extensions.
- core api spec CLI accepts --openapi-spec, --openapi-spec-path,
--chat-completions, --chat-completions-path flags so generated specs
describe the endpoints ahead of runtime activation.
- ToolBridge.Describe / DescribeIter now emit the GET listing as the
first RouteDescription; existing tests were updated to match.
Co-Authored-By: Virgil <virgil@lethean.io>
One unrelated thing remains in the worktree: Co-Authored-By: Virgil <virgil@lethean.io>
…est leak brotliWriter wrapper now holds a sync.Mutex + released flag per request. Write/WriteHeader/WriteHeaderNow/Flush no-op after release. The release path locks, marks released, closes/resets the inner brotli.Writer, returns the INNER writer to the sync.Pool (not the wrapper itself — the wrapper is fresh per request, not pooled), and nils the inner reference.
Closes the cross-request data leak: handler returns, the brotli writer is released to the pool, but a goroutine the handler spawned holds c.Writer and continues writing — was: writes corrupt the next request that got that pool slot. Now: writes silently no-op.
Regression test: first request spawns a retaining goroutine, second request decompresses cleanly with no pollution; -race clean.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=995
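The release guard can be sketched as below; the inner brotli.Writer is replaced here by a plain byte slice so the example stays self-contained, and all names are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// releasableWriter sketches the fix: a per-request wrapper whose writes
// become no-ops once the request ends, so a leaked goroutine holding the
// writer can no longer pollute the pooled inner writer.
type releasableWriter struct {
	mu       sync.Mutex
	released bool
	inner    []byte // stand-in for the pooled brotli.Writer
}

func (w *releasableWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.released {
		return len(p), nil // silently drop: the request is over
	}
	w.inner = append(w.inner, p...)
	return len(p), nil
}

// Release marks the wrapper dead and drops the inner reference; in the
// real code this is where the inner writer goes back to the sync.Pool.
func (w *releasableWriter) Release() {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.released = true
	w.inner = nil
}

func main() {
	w := &releasableWriter{}
	w.Write([]byte("in-request"))
	w.Release()
	w.Write([]byte("late goroutine write")) // no-op after release
	fmt.Println(w.inner == nil)
}
```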
Substring match for "br" replaced with a comma-separated token parser:
- Split on comma, trim whitespace, strip qvalue, compare case-insensitively to "br"
- "br;q=0" treated as explicit forbid (DO NOT compress with brotli even if "br" appears elsewhere)
- Wildcard "*" ignored — browser intent unclear, fail conservative by NOT compressing with brotli
Closes false positives: "brotli-future", "br-able", "embraced", "embracing" no longer trigger brotli compression.
Tests cover canonical "br", multi-encoding "gzip, br", qvalue decoration, case-insensitive "BR", false-positive cases, "br;q=0" explicit forbid, "br;q=0, gzip, br" precedence (forbid wins), wildcard, and empty.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=994
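A sketch of the token parser under the rules above; stdlib strings is used here in place of the repo's core primitives, and exact qvalue handling (only the literal "q=0") is an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// acceptsBrotli tokenises Accept-Encoding instead of substring-matching.
// "br;q=0" is an explicit forbid that wins over any other "br" token, and
// "*" is ignored so we fail conservative.
func acceptsBrotli(header string) bool {
	accepted := false
	for _, part := range strings.Split(header, ",") {
		token := strings.TrimSpace(part)
		qvalue := ""
		if i := strings.Index(token, ";"); i >= 0 {
			qvalue = strings.ReplaceAll(strings.TrimSpace(token[i+1:]), " ", "")
			token = strings.TrimSpace(token[:i])
		}
		if !strings.EqualFold(token, "br") {
			continue // wildcard "*", other encodings, "brotli-future" etc.
		}
		if qvalue == "q=0" {
			return false // explicit forbid wins over any other "br" token
		}
		accepted = true
	}
	return accepted
}

func main() {
	fmt.Println(acceptsBrotli("gzip, br"))         // true
	fmt.Println(acceptsBrotli("br;q=0, gzip, br")) // false: forbid wins
	fmt.Println(acceptsBrotli("embraced"))         // false: no "br" token
}
```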
NewProxy() now validates the upstream URL at construction time:
- Scheme allowlist: HTTP(S) only
- Embedded credentials in URL → reject
- Invalid ports → reject
- Metadata hosts/IPs (169.254.169.254, etc.) → reject
- Loopback, RFC1918, link-local, multicast, unspecified, reserved ranges → reject
- DNS resolution of hostname → all returned IPs checked against the blocklist (time-of-check/time-of-use caveat: the dial path is what ultimately matters; this catches construction-time leaks)
CORE_PROVIDER_UPSTREAM_ALLOW=cidr1,cidr2 is an explicit allow-list bypass for legitimate localhost/internal-network providers (dev setups pointing at localhost:5432, internal Postgres, etc.).
Typed errors: ErrProviderUpstreamBlocked + ProviderUpstreamBlockedError with Upstream + Reason fields for caller diagnostics.
Cerberus #1051 from workspace-wide sniff. Closes the marketplace-manifest SSRF vector before the marketplace/git signed-TRIX path brings .core/providers/ into user-influenced scope.
Tests: public-IP accept, metadata-IP block, loopback block, RFC1918 block, allow-list CIDR override accept, hostname-resolves-to-loopback block.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1051
providerManifestFiles now canonicalises dir via filepath.Abs + filepath.EvalSymlinks before PathGlob. Rejects symlinked provider dirs and symlinked manifest files (including in-dir symlinks). Each resolved manifest is verified to remain under the canonical provider dir. Reads go via core.Fs.New(canonicalDir) instead of '/'. Missing-dir behaviour is preserved as empty discovery.
Cerberus #1054 from workspace-wide sniff — closes the symlink-escape + unbounded-glob-reach class for provider manifest loading. Defence-in-depth: today only DiscoverDefault with the hardcoded '.core/providers' is wired up, but the marketplace/git signed-TRIX path will bring user-influenced dirs into scope.
Tests: clean dir loads, dir-is-symlink rejected, manifest-is-symlink rejected (escapes-dir AND in-dir cases), '..' segment handled.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1054
go.mod line 1 reads "module dappco.re/go/api" — the rename from dappco.re/go/core/api to dappco.re/go/api has already landed (likely via a prior workspace migration). No code change required. Co-authored-by: Cladius <cladius@lthn.ai> Closes tasks.lthn.sh/view.php?id=551
normaliseWebSocketClientURL now uses the first parsed *url.URL as authoritative. It checks type/nil, rejects missing scheme/host, validates explicit ports, and returns core.E structured errors instead of allowing malformed inputs to progress into panic-prone paths. Removed the old raw-string scheme scanner.
Cerberus #991 from prior workspace-wide review.
Tests: control chars, malformed %2 escapes, empty URL, no scheme, out-of-range port — all return *core.Err typed errors; a panic guard in the test detects regression.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=991
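A stdlib sketch of the hardened path: trust the parsed *url.URL, reject missing scheme/host and out-of-range explicit ports, and return errors rather than letting malformed input travel further. Plain errors stand in for core.E here, and the function name is borrowed from the message above:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
	"strconv"
)

func normaliseWebSocketClientURL(raw string) (string, error) {
	u, err := url.Parse(raw) // control chars and bad %-escapes fail here
	if err != nil {
		return "", err
	}
	if u.Scheme == "" || u.Host == "" {
		return "", errors.New("websocket url must include scheme and host")
	}
	if p := u.Port(); p != "" {
		n, err := strconv.Atoi(p)
		if err != nil || n < 1 || n > 65535 {
			return "", errors.New("websocket url has invalid port")
		}
	}
	return u.String(), nil
}

func main() {
	ok, _ := normaliseWebSocketClientURL("ws://example.com:8080/feed")
	fmt.Println(ok)
	_, err := normaliseWebSocketClientURL("example.com/feed") // no scheme
	fmt.Println(err != nil)
}
```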
…es #156) WithHTTP3(addr string) opt-in option configures HTTP/3 listener. Engine carries HTTP/3 config; Alt-Svc middleware advertises h3 endpoint on HTTP/1.1 + 2 responses when configured. ServeH3(ctx, tlsConfig) starts http3.Server from github.com/quic-go/quic-go/http3. TLS is mandatory (HTTP/3 requires it); calling ServeH3 without TLS returns a clear error. The HTTP/3 server runs ALONGSIDE the HTTP/1.1+2 server so clients can opportunistically upgrade via Alt-Svc. Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=156
…oint (#1028) Per RFC.providers.md gap: 6+ providers ship pkg/api/provider.go but no production binary mounts them. This lane fills the hole — one binary that registers every workspace provider and listens on a port.
Lands:
* cmd/gateway/main.go — constructs Core, builds Engine, registers providers in order: brain (agent), brain-MCP (mcp), scm (go-scm), process (go-process), build (go-build), miner (go-miner), proxy (go-proxy). Each registration is wrapped in a registerProvider helper that catches panics and nil NewProvider returns (one provider failing does NOT crash the gateway — log + continue).
* CORE_GATEWAY_BIND env var (default 0.0.0.0:8080)
* CORE_GATEWAY_ENABLE comma-separated provider names (default = all)
* --help lists mounted providers
* Graceful shutdown via core.Context()
* cmd/gateway/main_test.go — TestMain_Help + TestMain_RegisterProvider_HandlesNilProvider; AX-10 Good/Bad coverage
* cmd/gateway/README.md — operator notes (build, env vars, mounted providers, how to disable, how to add)
Implementation note: scm/process/miner/proxy use the source provider packages directly. brain/brain-mcp/build use gateway-local adapters because the source packages don't yet compile against this worktree's local core replacement without modifying out-of-allowlist files. Adapters live in cmd/gateway/ — replace with direct imports as those source packages stabilise.
GOWORK=off go test + go build pass; --help smoke verified.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1028
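The crash-isolation wrapper can be sketched as below; the provider type and function signatures are illustrative stand-ins for the gateway's actual registration surface:

```go
package main

import "fmt"

type provider struct{ name string }

// registerProvider isolates one provider's failure: a constructor that
// panics or returns nil is logged and skipped, and the caller keeps
// mounting the remaining providers.
func registerProvider(name string, construct func() *provider, mount func(*provider)) (ok bool) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Printf("provider %s panicked: %v (skipped)\n", name, r)
			ok = false
		}
	}()
	p := construct()
	if p == nil {
		fmt.Printf("provider %s returned nil (skipped)\n", name)
		return false
	}
	mount(p)
	return true
}

func main() {
	mounted := 0
	mount := func(*provider) { mounted++ }
	registerProvider("scm", func() *provider { return &provider{name: "scm"} }, mount)
	registerProvider("brain", func() *provider { panic("adapter not wired") }, mount)
	registerProvider("miner", func() *provider { return nil }, mount)
	fmt.Println(mounted) // only the healthy provider mounted
}
```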
…ay (#439) Per AX-6: go-api is the HTTP gateway (net/http structurally permitted). fmt + strings + encoding/json + sync are genuine violations.
Lands across 12 modified/new files:
* fmt.* → core.Sprintf / core.E
* strings.* → core.Contains / core.HasPrefix / core.Split / etc.
* encoding/json → core.JSONMarshal / core.JSONUnmarshalString plus number-preserving helpers in new json_helpers.go (jsonRawMessage etc.)
* sync.Mutex → core.Mutex
* New text_helpers.go fills the few gaps where core lacked direct equivalents
* sync remains only in brotli.go for sync.Pool — annotated // AX-6-exception: sync.Pool for buffer reuse (no core wrapper exists)
Verification:
* grep -rE '"fmt"|"strings"|"encoding/json"' --include='*.go' core/api/ | grep -v _test.go → empty
* sync only in brotli.go with AX-6-exception annotation
* go build ./... passes
* Touched-area + subpackage tests pass; full go test ./... is blocked by pre-existing OpenAPI client tests hitting the SSRF guard on httptest loopback (not from this lane)
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=439
# Conflicts:
#	go.mod
📒 Files selected for processing (22)
📝 Walkthrough

This large-scale refactor relocates the core API module from …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Engine
    participant ChatHandler
    participant ModelResolver
    participant Inference
    Client->>Engine: POST /v1/chat/completions (loopback)
    Engine->>ChatHandler: ServeHTTP
    ChatHandler->>ModelResolver: ResolveModel(name)
    ModelResolver->>ModelResolver: Check cache & ~/.core/models.yaml
    ModelResolver->>Inference: Load TextModel
    Inference-->>ModelResolver: TextModel
    ModelResolver-->>ChatHandler: TextModel
    ChatHandler->>ChatHandler: Validate request schema
    ChatHandler->>ChatHandler: Extract thinking tokens
    ChatHandler->>Inference: Generate tokens
    Inference-->>ChatHandler: Token stream
    ChatHandler->>Client: JSON response or SSE chunks
    ChatHandler->>ChatHandler: Apply stop sequences
    Client-->>ChatHandler: Acknowledge
```
```mermaid
sequenceDiagram
    participant Admin
    participant Gateway
    participant Provider as Provider Factory
    participant ProxyService
    participant Upstream as Upstream Service
    Admin->>Gateway: Start with CORE_GATEWAY_ENABLE=scm,process
    Gateway->>Gateway: Load gatewayProviderSpecs()
    loop For each enabled provider
        Gateway->>Provider: Construct RouteGroup
        Provider->>ProxyService: NewProxy(upstream, spec)
        ProxyService->>ProxyService: Validate SSRF (validateProviderUpstreamURL)
        ProxyService-->>Provider: Configured proxy
        Provider-->>Gateway: RouteGroup
    end
    Gateway->>Gateway: Mount all groups to Engine
    Gateway->>Gateway: Listen on CORE_GATEWAY_BIND
    Admin->>Gateway: GET /scm/repos
    Gateway->>ProxyService: Forward request
    ProxyService->>Upstream: Proxy request
    Upstream-->>ProxyService: Response
    ProxyService-->>Gateway: Response
    Gateway-->>Admin: HTTP 200 with proxied data
```
Diff context from the inline review anchors, showing the metadata host/IP blocklist added in this PR:

```go
"metadata.google.internal": {},
"metadata.googleapis.com":  {},
"metadata.azure.com":       {},
"169.254.169.254":          {},
"fd00:ec2::254":            {},
"100.100.100.200":          {},
```

and the blocked CIDR ranges:

```go
"169.254.0.0/16",
"192.0.0.0/24",
"192.0.2.0/24",
"198.18.0.0/15",
```
@coderabbitai review
✅ Actions performed

Review triggered.
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/php/src/Api/Concerns/HasApiResponses.php (1)
69-79: ⚠️ Potential issue | 🟠 Major

These error-code renames are wire-level breaking changes.

Clients and generated SDKs commonly branch on `error`/`error_code`. Changing `feature_limit_reached` to `entitlement_exceeded` and `validation_failed` to `validation_error` will break that logic unless the API schema and consumers are versioned or migrated together.

Also applies to: 129-138
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Concerns/HasApiResponses.php` around lines 69 - 79, The PR changed wire-level error codes which will break clients; revert or preserve the original error codes in API responses: in HasApiResponses::limitReachedResponse (and the similar validation response method around lines 129-138) restore the previous 'feature_limit_reached' and 'validation_failed' error_code values (or emit both the old and new codes in the response meta) rather than replacing them, and ensure any OpenAPI/schema updates are performed together with client migrations; locate the methods limitReachedResponse and the validation response method and update their errorCode/error_code arguments accordingly to maintain backward compatibility.

pkg/provider/registry_test.go (1)
1-1: ⚠️ Potential issue | 🟡 Minor

SPDX header uses non-standard spelling.

The header uses `SPDX-Licence-Identifier` (UK spelling) but the SPDX standard and project guideline require `SPDX-License-Identifier` (US spelling). This could cause issues with SPDX compliance tooling.

🐛 Proposed fix

```diff
-// SPDX-Licence-Identifier: EUPL-1.2
+// SPDX-License-Identifier: EUPL-1.2
```

As per coding guidelines: "Include SPDX header in all Go files: `// SPDX-License-Identifier: EUPL-1.2`".
Verify each finding against the current code and only fix it if needed. In `@pkg/provider/registry_test.go` at line 1, The SPDX header at the top of pkg/provider/registry_test.go uses the non-standard UK spelling "SPDX-Licence-Identifier"; update the comment to the SPDX-compliant US spelling "SPDX-License-Identifier: EUPL-1.2" so tooling recognizes the license header (change the existing header comment to the correct string).

pkg/provider/proxy_test.go (1)
1-1: ⚠️ Potential issue | 🟡 Minor

SPDX header uses non-standard spelling "Licence".

The SPDX specification requires American spelling "License". This typo may cause issues with automated licence scanning tools.

🔧 Proposed fix

```diff
-// SPDX-Licence-Identifier: EUPL-1.2
+// SPDX-License-Identifier: EUPL-1.2
```

As per coding guidelines: Include SPDX header in all Go files: `// SPDX-License-Identifier: EUPL-1.2`
Verify each finding against the current code and only fix it if needed. In `@pkg/provider/proxy_test.go` at line 1, Replace the incorrect American-English spelling in the SPDX header in pkg/provider/proxy_test.go by changing the header comment from "// SPDX-Licence-Identifier: EUPL-1.2" to the correct "// SPDX-License-Identifier: EUPL-1.2"; locate the SPDX header comment at the top of the file and update the token string "Licence" → "License" so automated license scanners recognize it.
🟡 Minor comments (11)
src/php/src/Api/Controllers/Api/UnifiedPixelController.php-45-45 (1)
45-45: ⚠️ Potential issue | 🟡 Minor

Line 45 incorrectly documents OPTIONS as returning 204.

The `track()` method only returns 204 for POST requests. OPTIONS requests fall through to the GIF 200 response, so the OpenAPI annotation should remove OPTIONS from the 204 response methods.

Fix: Change `methods: ['POST', 'OPTIONS']` to `methods: ['POST']` on line 45.

Code reference

```php
#[ApiResponse(204, null, 'Accepted without a response body', methods: ['POST', 'OPTIONS'])] // ← Line 45: OPTIONS should not be listed
public function track(Request $request, string $pixelKey): Response
{
    if ($request->isMethod('post')) { // ← Only POST returns 204
        return response()->noContent()
            ->header('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0')
            ->header('Pragma', 'no-cache')
            ->header('Expires', '0');
    }
    // GET and OPTIONS fall through to return 200 GIF
```
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/UnifiedPixelController.php` at line 45, The ApiResponse attribute on the track() method incorrectly lists OPTIONS as returning 204; update the attribute on the ApiResponse(204, ...) for UnifiedPixelController::track to only include methods: ['POST'] so the OpenAPI docs match the implementation (POST returns response()->noContent(), while OPTIONS falls through to the 200 GIF response).

docs/index.md-150-150 (1)
150-150: ⚠️ Potential issue | 🟡 Minor

Documentation now mixes two module coordinate schemes.

Line 150 uses `dappco.re/...`, but the same page still references `forge.lthn.ai/core/go-api` (Line 10 and Line 35). Please align this document to one canonical module path to avoid broken copy/paste imports.

Proposed doc alignment

```diff
-**Module path:** `forge.lthn.ai/core/go-api`
+**Module path:** `dappco.re/go/core/api`

-import (
+import (
     "context"
     "os/signal"
     "syscall"
-    api "forge.lthn.ai/core/go-api"
+    api "dappco.re/go/core/api"
 )
```
Verify each finding against the current code and only fix it if needed. In `@docs/index.md` at line 150, The doc mixes two Go module coordinates; pick one canonical module path (either "dappco.re/go/core/cli" or "forge.lthn.ai/core/go-api") and consistently replace all occurrences across this page so imports/examples and table entries use the same string; update the table row that currently shows `dappco.re/go/core/cli` and the other references that show `forge.lthn.ai/core/go-api` to the chosen canonical coordinate, run a search-and-replace for both literal module strings, and verify any markdown links or import code blocks compile with the unified module path.

ratelimit_internal_test.go-24-132 (1)
24-132: ⚠️ Potential issue | 🟡 Minor

Rename these tests so the suffix really is `_Good`, `_Bad`, or `_Ugly`.

The current names put the category label in the middle of the identifier, so they do not satisfy the repo rule. Please move the descriptive part before the suffix, e.g. `TestRatelimit_clientRateLimitKey_PrioritisesPrincipalOverOtherInputs_Good`. As per coding guidelines, `**/*.go`: Name Go tests using `_Good`, `_Bad`, or `_Ugly` suffixes.
Verify each finding against the current code and only fix it if needed. In `@ratelimit_internal_test.go` around lines 24 - 132, Rename the test functions so the category suffix (_Good, _Bad, _Ugly) is at the end of each identifier; e.g., change TestRatelimit_clientRateLimitKey_Good_PrioritisesPrincipalOverOtherInputs to TestRatelimit_clientRateLimitKey_PrioritisesPrincipalOverOtherInputs_Good, TestRatelimit_clientRateLimitKey_Bad_FallsBackToUserIDWhenPrincipalIsBlank to TestRatelimit_clientRateLimitKey_FallsBackToUserIDWhenPrincipalIsBlank_Bad, TestRatelimit_clientRateLimitKey_Ugly_HashesCredentialsBeforeFallingBackToIP to TestRatelimit_clientRateLimitKey_HashesCredentialsBeforeFallingBackToIP_Ugly, TestRatelimit_setRateLimitHeaders_Good_WritesLimitRemainingAndReset to TestRatelimit_setRateLimitHeaders_WritesLimitRemainingAndReset_Good, TestRatelimit_setRateLimitHeaders_Bad_ClampsNegativeRemaining to TestRatelimit_setRateLimitHeaders_ClampsNegativeRemaining_Bad, TestRatelimit_setRateLimitHeaders_Ugly_SkipsLimitAndResetForZeroValues to TestRatelimit_setRateLimitHeaders_SkipsLimitAndResetForZeroValues_Ugly, and similarly append _Good/_Bad/_Ugly to the end of TestRatelimit_timeUntilFull_* functions (timeUntilFull tests) so each function name ends with the required suffix.

src/php/src/Api/Controllers/Api/Concerns/SerialisesWorkspaceResource.php-22-23 (1)
22-23: ⚠️ Potential issue | 🟡 Minor

Add defensive checks for timestamp attributes before calling `toIso8601String()`.

`Model::getAttribute()` returns `mixed`. Whilst all current models using this trait have timestamps enabled and properly cast, this generic helper should defensively guard against edge cases where timestamps might be scalar strings or other types. Use `instanceof DateTimeInterface` to safely handle all cases.

🩹 Suggested fix

```diff
-        $attributes['created_at'] = $model->getAttribute('created_at')?->toIso8601String();
-        $attributes['updated_at'] = $model->getAttribute('updated_at')?->toIso8601String();
+        if (($createdAt = $model->getAttribute('created_at')) instanceof \DateTimeInterface) {
+            $attributes['created_at'] = $createdAt->format(DATE_ATOM);
+        }
+        if (($updatedAt = $model->getAttribute('updated_at')) instanceof \DateTimeInterface) {
+            $attributes['updated_at'] = $updatedAt->format(DATE_ATOM);
+        }
```
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/Concerns/SerialisesWorkspaceResource.php` around lines 22 - 23, The code currently calls toIso8601String() directly on $model->getAttribute('created_at') and 'updated_at' which can be non-datetime types; change the serialization to first retrieve the raw value via $model->getAttribute(...), check it with instanceof DateTimeInterface, and only call toIso8601String() (or format('c')) when the value is a DateTimeInterface, otherwise leave null or cast/return the scalar safely; update the handling for both $attributes['created_at'] and $attributes['updated_at'] in the SerialisesWorkspaceResource trait accordingly.

export.go-53-58 (1)
53-58: ⚠️ Potential issue | 🟡 Minor

Preserve the real JSON decode error here.

When `core.JSONUnmarshal` fails and `decoded.Value` is not an `error`, this wraps a nil cause and drops the actual failure detail. Reusing the package helper keeps YAML export diagnostics intact.

Suggested change

```diff
-	decoded := core.JSONUnmarshal(data, &obj)
-	if !decoded.OK {
-		if err, ok := decoded.Value.(error); ok {
-			return coreerr.E(op, "unmarshal spec", err)
-		}
-		return coreerr.E(op, "unmarshal spec", nil)
-	}
+	if err := unmarshalCoreJSON(data, &obj); err != nil {
+		return coreerr.E(op, "unmarshal spec", err)
+	}
```
Verify each finding against the current code and only fix it if needed. In `@export.go` around lines 53 - 58, The JSON unmarshal failure handling in the core.JSONUnmarshal result (decoded) drops the actual failure when decoded.Value isn't an error; update the error return in the unmarshal failure branch to preserve the real decode detail by converting decoded.Value into an error before passing it to coreerr.E: check decoded.Value's type, if it's an error use it, else if decoded.Value != nil wrap it with fmt.Errorf("%v", decoded.Value) and pass that as the cause to coreerr.E("unmarshal spec"), otherwise keep nil; ensure you reference core.JSONUnmarshal, decoded.Value and coreerr.E and add fmt import if needed.

src/php/src/Api/Controllers/Api/QrCodeController.php-29-32 (1)
29-32: ⚠️ Potential issue | 🟡 Minor

Clamp `per_page` to a lower bound as well.

`integer('per_page')` can still be `0` or negative here, which allows an invalid page size into `paginate()`. Clamp it to at least `1` before calling the paginator.

Suggested fix

```diff
-            ->paginate((int) min($request->integer('per_page', 25), 100));
+            ->paginate((int) max(1, min($request->integer('per_page', 25), 100)));
```
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/QrCodeController.php` around lines 29 - 32, The per_page value passed into QrCode::query()->paginate(...) can be 0 or negative; compute a clamped per-page first (e.g. $perPage = max(1, min($request->integer('per_page', 25), 100))) and pass that (cast to int) to paginate so the page size is always between 1 and 100; update the code around QrCode::query(), forWorkspace($workspace->id) and paginate(...) to use this $perPage.

src/php/src/Api/Controllers/Api/LinkController.php-29-32 (1)
29-32: ⚠️ Potential issue | 🟡 Minor

Clamp `per_page` to at least `1`.

This only applies the upper bound. A caller can still send `0` or a negative value, which then flows straight into `paginate()`.

Suggested fix

```diff
-            ->paginate((int) min($request->integer('per_page', 25), 100));
+            ->paginate((int) max(1, min($request->integer('per_page', 25), 100)));
```
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/LinkController.php` around lines 29 - 32, The per_page value passed into Link::query()->forWorkspace(...)->latest()->paginate(...) is only capped at 100 but can be 0 or negative; change the expression that computes the page size (currently using (int) min($request->integer('per_page', 25), 100)) to clamp it to the range [1,100] (e.g. wrap the min call with max(1, ...) or use a clamp helper) so paginate() always receives an integer >= 1; update the expression where $request->integer('per_page', 25) is used to compute the page size.

src/php/phpunit.xml-3-4 (1)
3-4: ⚠️ Potential issue | 🟡 Minor

Schema location path needs a `../../` prefix to match the bootstrap path.

Line 3 should use `../../vendor/phpunit/phpunit/phpunit.xsd` instead of `vendor/phpunit/phpunit/phpunit.xsd`. Since PHPUnit runs from `src/php/` and the vendor directory is at the repository root, the schema location path must use the same relative reference as the bootstrap path on line 4.
Verify each finding against the current code and only fix it if needed. In `@src/php/phpunit.xml` around lines 3 - 4, The xsi:noNamespaceSchemaLocation attribute in phpunit.xml currently points to "vendor/phpunit/phpunit/phpunit.xsd" but must use the same relative path as the bootstrap (bootstrap="../../vendor/autoload.php"); update xsi:noNamespaceSchemaLocation to "../../vendor/phpunit/phpunit/phpunit.xsd" so both attributes use the correct relative path from src/php/ and PHPUnit can locate the schema.

sdk.go-36-39 (1)
36-39: ⚠️ Potential issue | 🟡 Minor

Advertise the same language aliases that the parser accepts.

`normaliseSDKGenLanguage()` accepts `typescript`, but the 400 payload at Lines 36-39 only lists `go`, `php`, and `ts`. That makes the response misleading for clients using the long-form alias.

Proposed fix

```diff
 c.JSON(http.StatusBadRequest, FailWithDetails(
     "unsupported_sdk_language",
     "Unsupported SDK language",
-    map[string]any{"supported": []string{"go", "php", "ts"}},
+    map[string]any{"supported": []string{"go", "php", "ts", "typescript"}},
 ))
```

Also applies to: 52-59
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk.go` around lines 36 - 39, The error responses list of supported SDK languages is missing the long-form alias "typescript" that normaliseSDKGenLanguage accepts; update the FailWithDetails payloads (the JSON responses that currently list ["go","php","ts"]) to include "typescript" (i.e., ["go","php","ts","typescript"]) or otherwise derive the supported aliases from the same source used by normaliseSDKGenLanguage so both 400 responses (the one around the first unsupported_sdk_language and the similar block later) advertise the identical aliases accepted by normaliseSDKGenLanguage.

cmd/api/cmd_spec.go-27-31 (1)
27-31: ⚠️ Potential issue | 🟡 Minor

Treat whitespace-only `--format` values as blank too.

This branch only catches `""`, so `--format=' '` skips the JSON default and is passed downstream as an invalid format. Trimming here would make the fallback behave consistently with the rest of the option parsing in this file.

Suggested fix

```diff
-	output := opts.String("output")
-	format := opts.String("format")
+	output := opts.String("output")
+	format := core.Trim(opts.String("format"))
 	if format == "" {
 		format = "json"
 	}
```
Verify each finding against the current code and only fix it if needed. In `@cmd/api/cmd_spec.go` around lines 27 - 31, The current check only treats an exact empty string as blank; update the logic after retrieving format := opts.String("format") to trim whitespace (e.g., use strings.TrimSpace(format)) and then if the trimmed value is empty assign format = "json" so that whitespace-only values for --format are treated as blank and default to JSON; ensure you import strings if needed and apply this change where format is used/exported in cmd_spec.go.

chat_completions.go-641-646 (1)
641-646: ⚠️ Potential issue | 🟡 Minor

Canonicalise accepted roles before forwarding them downstream.

Validation accepts roles case-insensitively, but the handler forwards the original value into `inference.Message`. That means `"User"` and `"SYSTEM"` pass validation even though downstream behaviour now depends on the model implementation accepting those exact spellings. Lower-case the role when building `messages`, or rewrite it during validation.

Suggested fix

```diff
 for _, msg := range req.Messages {
 	messages = append(messages, inference.Message{
-		Role:    msg.Role,
+		Role:    core.Lower(core.Trim(msg.Role)),
 		Content: msg.Content,
 	})
 }
```

Also applies to: 881-903
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@chat_completions.go` around lines 641 - 646, When constructing the slice of inference.Message from req.Messages, canonicalize the Role value (e.g., strings.ToLower or a dedicated normalizeRole helper) before assigning it to inference.Message.Role so downstream models receive a consistent lower-case role; update the loop that builds messages (where messages := make([]inference.Message, 0, len(req.Messages)) and inference.Message{Role: msg.Role, Content: msg.Content}) to set Role to the normalized value, and apply the same normalization to the other analogous block that constructs messages later (the similar code around the second occurrence mentioned).
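The canonicalisation both occurrences call for can live in one small helper shared by validation and message construction, so the two call sites cannot drift apart. A minimal sketch — `normalizeRole` is a hypothetical name, and plain `strings` stands in for the repo's `core` primitives:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRole canonicalises a chat role: it trims surrounding
// whitespace and lower-cases the result, so "User", " SYSTEM " and
// "user" all map to the same downstream value.
func normalizeRole(role string) string {
	return strings.ToLower(strings.TrimSpace(role))
}

func main() {
	for _, r := range []string{"User", " SYSTEM ", "assistant"} {
		fmt.Println(normalizeRole(r))
	}
}
```

Calling this helper both where roles are validated and where `inference.Message` values are built keeps the accepted and forwarded spellings identical by construction.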
🧹 Nitpick comments (17)
src/php/src/Api/Concerns/HasApiResponses.php (1)
25-36: Choose one place for error metadata.
`details` already carries `$meta`, but Line 36 also flattens the same keys into the top-level payload. That duplicates the contract and still lets `$meta` override reserved fields such as `message` and `error_code`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Concerns/HasApiResponses.php` around lines 25 - 36, The response currently duplicates and allows $meta to override reserved fields by both setting 'details' => $meta and then array_merge($response, $meta) in the return; update the HasApiResponses.php return logic to choose a single place for metadata: remove the array_merge($response, $meta) and return response()->json($response, $status) so all metadata lives under 'details', and ensure the $response construction (the $response array and any method that builds it) prevents $meta from overwriting reserved keys like 'message' and 'error_code'.

docs/architecture.md (1)
400-400: Add the missing comma in this sentence.

Line 400 reads more cleanly as “non-positive value, leaves that dimension unbounded …”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture.md` at line 400, Edit the sentence "Setting only one limit to a non-positive value leaves that dimension unbounded while the other limit controls eviction." and insert a comma after "non-positive value" so it reads "Setting only one limit to a non-positive value, leaves that dimension unbounded while the other limit controls eviction."; locate this exact sentence in the document to apply the change.

src/php/src/Api/Controllers/Api/WorkspaceMemberController.php (1)
35-40: Drop the unused eager loads or serialise them.
`team` and `inviter` are loaded for every member, but `serialize()` only reads `user`. That adds avoidable query and hydration cost on the list path.

Suggested change
```diff
-	->with(['user', 'team', 'inviter'])
+	->with(['user'])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/WorkspaceMemberController.php` around lines 35 - 40, The query in WorkspaceMemberController loads relationships 'team' and 'inviter' into $members but the serialize() path only reads the 'user', causing unnecessary DB/hydration cost; either remove those eager loads from the WorkspaceMember::query()->with(['user', 'team', 'inviter']) call or update the member serialization method (e.g., serialize() on the WorkspaceMember resource/Model) to include 'team' and 'inviter' so the eager-loaded relations are actually used. Locate the query and the serialize() routine and apply one of these two fixes to keep eager-loads and serialization consistent.

json_helpers.go (1)
53-106: Split token dispatch out of `UnmarshalJSON`.

This method is already over the repo’s complexity gate, and it sits on a critical parser path. Extracting the object, array, and scalar branches into small helpers will keep future JSON-token changes safer to review.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@json_helpers.go` around lines 53 - 106, The UnmarshalJSON method on jsonValue is too large; extract the object, array and scalar branches into small helper functions and call them from jsonValue.UnmarshalJSON. Implement helpers such as unmarshalJSONObject(data []byte) (map[string]any, error) that uses unmarshalCoreJSON into map[string]jsonValue then converts to map[string]any, unmarshalJSONArray(data []byte) ([]any, error) that converts []jsonValue to []any, and unmarshalJSONScalar(data []byte) (any, error) that handles string/bool/null/number paths (using unmarshalCoreJSON and jsonNumber where appropriate), then replace the big switch branches in jsonValue.UnmarshalJSON with calls to these helpers and assign v.value from their returns.

export.go (1)
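One way to realise the json_helpers.go split described above is to dispatch on the first token byte and delegate to one helper per kind. This sketch uses plain `encoding/json` rather than the repo's `core` primitives, and the helper names are illustrative, not the package's real API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeObject, decodeArray and decodeScalar each own one branch of
// the formerly monolithic UnmarshalJSON, keeping every function small.
func decodeObject(data []byte) (map[string]any, error) {
	m := map[string]any{}
	err := json.Unmarshal(data, &m)
	return m, err
}

func decodeArray(data []byte) ([]any, error) {
	var s []any
	err := json.Unmarshal(data, &s)
	return s, err
}

func decodeScalar(data []byte) (any, error) {
	var v any
	err := json.Unmarshal(data, &v)
	return v, err
}

// decodeValue is the thin dispatcher that replaces the big switch:
// it skips leading whitespace, then routes on the first token byte.
func decodeValue(data []byte) (any, error) {
	for _, b := range data {
		switch b {
		case ' ', '\t', '\n', '\r':
			continue
		case '{':
			return decodeObject(data)
		case '[':
			return decodeArray(data)
		default:
			return decodeScalar(data)
		}
	}
	return nil, fmt.Errorf("empty JSON input")
}

func main() {
	v, _ := decodeValue([]byte(`{"a":[1,2]}`))
	fmt.Println(v)
}
```

Each helper stays well under the complexity gate, and new token kinds get their own function instead of another branch in one method.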
95-103: Avoid the extra full-copy buffer before `WriteAtomic`.

`builder.Build*()` already materialises the entire spec. Buffering it again here and then converting to `string` doubles peak memory for large exports and regresses the old streaming path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@export.go` around lines 95 - 103, exportSpecToFile currently creates a full in-memory buffer via core.NewBuffer and then passes buf.String() into localFS.WriteAtomic, doubling memory for large specs; instead avoid the extra full-copy by streaming directly into the filesystem: obtain an atomic/temp writer from the filesystem (use localFS to create a temp file or an atomic-writer API rather than core.NewBuffer), pass that io.Writer into the existing write func so the builder writes directly to disk, close/flush it, then call the atomic rename/commit (or use the filesystem's WriteAtomic method that accepts a reader) and preserve the existing error handling around localFS.WriteAtomic/result.Value.

src/php/src/Api/Controllers/Api/ApiKeyController.php (1)
95-109: Consider using more specific exception types.

Catching `\RuntimeException` and `\InvalidArgumentException` is quite broad. If `ApiKeyService::create()` throws these for other reasons, they would be incorrectly mapped to entitlement/validation errors.

Consider defining specific exception classes:

♻️ Suggested approach

```php
// In ApiKeyService or a dedicated exceptions namespace:
class EntitlementExceededException extends \RuntimeException {}
class InvalidServerScopeException extends \InvalidArgumentException {}

// Then in controller:
} catch (EntitlementExceededException) {
    return $this->errorResponse(/* ... */);
} catch (InvalidServerScopeException) {
    return $this->validationErrorResponse(/* ... */);
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/ApiKeyController.php` around lines 95 - 109, ApiKeyController is catching broad exceptions from ApiKeyService::create; define and throw specific exceptions (e.g., EntitlementExceededException extends \RuntimeException and InvalidServerScopeException extends \InvalidArgumentException) inside the ApiKeyService (or a dedicated Exceptions namespace), update ApiKeyService::create to throw these specific types where appropriate, and then replace the current catches in ApiKeyController with catches for EntitlementExceededException and InvalidServerScopeException so the entitlement and validation errors are mapped only when those exact conditions occur.

pkg/provider/proxy.go (2)
239-291: Consider reducing cognitive complexity by extracting helper functions.

SonarCloud flags cognitive complexity of 17 (limit 15). The function handles multiple validation concerns that could be split into focused helpers:
- Scheme/credentials validation
- Port validation
- Host/metadata check
- DNS resolution and IP validation
♻️ Suggested refactor to reduce complexity
```diff
 func validateProviderUpstreamURL(raw string, target *url.URL) error {
 	if target == nil {
 		return blockProviderUpstream(raw, "invalid upstream URL result", nil)
 	}
+	if err := validateProviderUpstreamScheme(raw, target); err != nil {
+		return err
+	}
+	if err := validateProviderUpstreamHost(raw, target); err != nil {
+		return err
+	}
+	return validateProviderUpstreamIPs(raw, target)
+}
-	scheme := core.Lower(target.Scheme)
-	if scheme != "http" && scheme != "https" {
-		return blockProviderUpstream(raw, "only HTTP and HTTPS upstream URLs are permitted", nil)
-	}
-	// ... rest of inline validation
+func validateProviderUpstreamScheme(raw string, target *url.URL) error {
+	scheme := core.Lower(target.Scheme)
+	if scheme != "http" && scheme != "https" {
+		return blockProviderUpstream(raw, "only HTTP and HTTPS upstream URLs are permitted", nil)
+	}
+	if target.User != nil {
+		return blockProviderUpstream(raw, "upstream URLs must not include embedded credentials", nil)
+	}
+	if port := target.Port(); port != "" {
+		n, err := strconv.Atoi(port)
+		if err != nil || n < 1 || n > 65535 {
+			return blockProviderUpstream(raw, "upstream URL port is invalid", err)
+		}
+	}
+	return nil
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/provider/proxy.go` around lines 239 - 291, The validateProviderUpstreamURL function is hitting SonarCloud cognitive complexity limits by doing multiple validation responsibilities; split it into focused helpers: extract scheme/credentials checks into a function (e.g., validateUpstreamSchemeAndUser(target) used from validateProviderUpstreamURL), extract port parsing/validation into validateUpstreamPort(target), extract host/metadata checks into validateUpstreamHost(host) that uses providerMetadataHosts, and extract DNS resolution + iteration into resolveAndValidateUpstreamHost(host, allowCIDRs) which calls providerUpstreamAllowCIDRs and validateProviderUpstreamIP for each IP; keep blockProviderUpstream and providerUpstreamAllowEnv/providerUpstreamAllowCIDRs/providerMetadataHosts/validateProviderUpstreamIP usages but move their calls into these helpers so validateProviderUpstreamURL becomes a thin orchestrator invoking those helpers and returning errors.
277-288: Construction-time DNS resolution is susceptible to TOCTOU attacks.

The DNS lookup at line 278 validates IP addresses during proxy initialisation, but a DNS rebinding attack could change resolution between validation and the first request. This is a known limitation of construction-time SSRF guards.
In typical deployments, provider upstreams resolve to localhost addresses (127.0.0.1), which mitigates this risk. The allowlist mechanism via `CORE_PROVIDER_UPSTREAM_ALLOW` provides additional control for non-localhost upstreams. For stricter isolation, request-time re-validation via a custom `http.Transport` with a `DialContext` hook could be implemented, though this would require significant refactoring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/provider/proxy.go` around lines 277 - 288, The construction-time net.LookupIP + validateProviderUpstreamIP check (host, ips loop) is vulnerable to TOCTOU/DNS rebinding; change to perform per-request re-validation by creating a custom http.Transport with a DialContext that (1) resolves the target host, (2) runs validateProviderUpstreamIP(raw, host, resolvedIP, allowCIDRs) for the chosen IP, and (3) only proceeds to dial if validation passes (fall back to blockProviderUpstream on failure). Keep or retain the existing constructor-time checks as a fast-fail, but wire the new DialContext into the HTTP client used for upstream calls so validation occurs at request time. Ensure you reference the existing symbols net.LookupIP, validateProviderUpstreamIP, allowCIDRs, blockProviderUpstream, raw and host when locating where to add the Transport/DialContext.

src/php/src/Api/Controllers/Api/PaymentMethodController.php (1)
66-71: Consider making the default gateway configurable.

The default gateway is hardcoded to `'stripe'`. If other gateways are supported, consider making this configurable via environment or service configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/PaymentMethodController.php` around lines 66 - 71, The controller hardcodes the default gateway to 'stripe' when calling PaymentMethodController->service->addPaymentMethod using $data['gateway'] ?? 'stripe'; change this to read a configurable default (e.g. from env or a service/config class) and pass that value instead; update PaymentMethodController to fetch the default gateway from configuration (via env('PAYMENT_GATEWAY_DEFAULT') or a config/service method) and use that value as the fallback for $data['gateway'] when calling addPaymentMethod so different gateways can be supported without code changes.

cmd/gateway/main_test.go (1)
16-73: Test names should follow the `_Good`, `_Bad`, or `_Ugly` suffix convention.

The tests are well-structured but don't follow the project's naming convention for Go tests. Consider renaming:

- `TestMain_Help` → `TestMain_Help_Good`
- `TestMain_RegisterProvider_HandlesNilProvider` → `TestMain_RegisterProvider_Bad_NilProvider`
- `TestMain_EnableFiltersSubset` → `TestMain_EnableFiltersSubset_Good`

As per coding guidelines: "Name Go tests using `_Good`, `_Bad`, or `_Ugly` suffixes".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/gateway/main_test.go` around lines 16 - 73, Rename the test functions to follow the project's `_Good`/`_Bad`/`_Ugly` suffix convention: change TestMain_Help to TestMain_Help_Good, TestMain_RegisterProvider_HandlesNilProvider to TestMain_RegisterProvider_Bad_NilProvider, and TestMain_EnableFiltersSubset to TestMain_EnableFiltersSubset_Good; update the function names where they are declared (the test signatures) and any local references to those functions (if any) so the tests compile and run under the new names, keeping the test bodies and assertions unchanged.

src/php/src/Api/Controllers/Api/BiolinkController.php (1)
74-82: Unused `$workspace` parameter in route methods.

The `$workspace` parameter is declared but unused in `show`, `update`, and `destroy` methods (lines 74, 84, 114). The workspace is resolved internally via `findBiolink()`.

If the parameter is required for Laravel route model binding, consider suppressing the warning or using the parameter directly to avoid the redundant `resolveWorkspace()` call inside `findBiolink()`.

♻️ Option: Use route-bound workspace directly

```diff
-    public function show(Request $request, string $workspace, string $id): JsonResponse
+    public function show(Request $request, string $_workspace, string $id): JsonResponse
```

Or refactor `findBiolink` to accept the workspace ID directly from the route parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/php/src/Api/Controllers/Api/BiolinkController.php` around lines 74 - 82, The $workspace route parameter is declared but never used in show/update/destroy while findBiolink() internally calls resolveWorkspace(), causing redundancy and an unused-parameter warning; fix by passing the route workspace into findBiolink (e.g., change calls in show, update, destroy from findBiolink($request, $id) to findBiolink($request, $workspace, $id)) or refactor findBiolink to accept an optional workspace id argument and use that instead of calling resolveWorkspace(); update the findBiolink signature and any internal resolveWorkspace() calls accordingly, or if route binding requires the parameter but you intend not to use it, explicitly document/suppress the unused parameter to silence the warning.

sdk-config/csharp.yaml (1)
5-5: Centralise SDK version to avoid cross-language drift.
`packageVersion: "1.0.0"` is duplicated across multiple SDK config files; consider sourcing it from one shared release variable in the SDK generation workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-config/csharp.yaml` at line 5, The hard-coded packageVersion value ("1.0.0") should be removed from individual SDK config files and replaced by a single shared release variable used at generation time; update the SDK generation workflow to define a canonical variable (e.g., RELEASE_SDK_VERSION) and modify the config templating step to inject that variable instead of the literal packageVersion entry, ensuring references to packageVersion in sdk-config/csharp.yaml (and other SDK configs) are sourced from that central variable.

scripts/generate-sdks.sh (1)
5-9: Consider adding existence checks for required files and tools.

The script assumes `OPENAPI_FILE`, config files under `sdk-config/`, and `openapi-generator-cli` are available. Adding guard checks would provide clearer error messages on misconfiguration.

🛡️ Proposed guard checks

```diff
 OPENAPI_FILE="storage/app/openapi.json"
 OUTPUT_DIR="./sdks"

+# Ensure openapi-generator-cli is available
+if ! command -v openapi-generator-cli &>/dev/null; then
+  echo "Error: openapi-generator-cli not found in PATH" >&2
+  exit 1
+fi
+
 # Export a fresh OpenAPI document before generating SDKs.
 php artisan scramble:export
+
+if [[ ! -f "${OPENAPI_FILE}" ]]; then
+  echo "Error: OpenAPI spec not found at ${OPENAPI_FILE}" >&2
+  exit 1
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/generate-sdks.sh` around lines 5 - 9, Add guard checks in scripts/generate-sdks.sh to validate prerequisites before running generation: verify the OPENAPI_FILE path exists (symbol OPENAPI_FILE), ensure required config files exist under sdk-config/ (refer to sdk-config/* or specific filenames used later), check that the openapi-generator-cli executable is on PATH (or the expected bin name) and is executable, and confirm php artisan scramble:export succeeded before continuing; if any check fails, print a clear error message and exit non‑zero so the script aborts early.

openapi.go (1)
2822-2830: Consider using `strings.Trim` for underscore trimming.

The loop-based approach works, but `core.Trim(value, "_")` (if available) or a single-pass trim would be more efficient for very long operationIds.

This is a minor optimisation since operationIds are typically short strings. The current implementation is correct and readable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@openapi.go` around lines 2822 - 2830, The function trimOperationIDUnderscores currently loops to remove leading/trailing underscores; replace the loop logic in trimOperationIDUnderscores with a single-call trim (e.g., use core.Trim(value, "_") if that helper exists, otherwise import the stdlib strings package and use strings.Trim(value, "_")) to perform a single-pass trim of leading and trailing underscores and return the result; ensure any necessary import (strings) is added and remove the old for-loop code.

options_test.go (1)
142-154: Minor TOCTOU window in ephemeral port reservation.

There's a small time window between closing the UDP listener and the server binding to the port where another process could claim it. This is a standard pattern in test code and failures would be obvious, but worth noting for flaky test investigations.
Consider keeping the listener open and passing it to `ServeH3` if the API supports it, or accept the rare flakiness as acceptable for test code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@options_test.go` around lines 142 - 154, The helper reserveHTTP3UDPAddr has a TOCTOU gap because it closes the UDP listener before the server binds; change it to keep the listener open and return the listener (e.g., return net.PacketConn or *net.UDPConn) instead of a string address so callers can pass the open socket directly to ServeH3 (or the equivalent server bind API) to avoid the race; update callers in tests to accept the new signature and hand the live conn into ServeH3, or if ServeH3 cannot accept an existing listener, document and accept the rare flakiness.

chat_completions_test.go (1)
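The race-free variant suggested for options_test.go keeps the socket open and hands it to the server. A sketch of that shape, assuming the HTTP/3 entry point can accept an existing `net.PacketConn` (the real `ServeH3` signature may differ):

```go
package main

import (
	"fmt"
	"net"
)

// reserveUDP binds an ephemeral UDP port and returns the live conn
// plus its address. The caller passes the conn to the server instead
// of closing it, so no other process can claim the port in between.
func reserveUDP() (net.PacketConn, string, error) {
	conn, err := net.ListenPacket("udp4", "127.0.0.1:0")
	if err != nil {
		return nil, "", err
	}
	return conn, conn.LocalAddr().String(), nil
}

func main() {
	conn, addr, err := reserveUDP()
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("reserved", addr)
	// Hand conn (not the address string) to the HTTP/3 server, e.g. a
	// hypothetical ServeH3WithConn(conn), if such an entry point exists.
}
```

If no conn-accepting entry point exists, the close-then-rebind pattern remains the fallback, with the rare flake accepted as test-only noise.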
70-94: Test function missing `_Bad` suffix.

The test validates rejection behaviour for non-loopback requests, which aligns with the `_Bad` suffix convention.

✏️ Suggested rename

```diff
-func TestChatCompletions_RejectsNonLoopback(t *testing.T) {
+func TestChatCompletions_RejectsNonLoopback_Bad(t *testing.T) {
```

As per coding guidelines: "Name Go tests using `_Good`, `_Bad`, or `_Ugly` suffixes".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@chat_completions_test.go` around lines 70 - 94, Rename the test function TestChatCompletions_RejectsNonLoopback to include the _Bad suffix (e.g., TestChatCompletions_RejectsNonLoopback_Bad) to follow the _Good/_Bad/_Ugly naming convention; update the function identifier and any references to it (imports, test runner filters, or comments) so the test compiles and runs under the new name, leaving the test body (request setup, RemoteAddr override, recorder, and assertions) unchanged.

cmd/api/cmd_sdk.go (1)
31-97: Consider extracting spec generation to reduce cognitive complexity.

SonarCloud flags this function with cognitive complexity 16 (threshold is 15). The logic is clear, but extracting the spec generation block (lines 61–84) into a helper like `generateTempSpec(opts) (string, func(), error)` would reduce nesting and improve testability.

♻️ Suggested extraction

```diff
+func generateTempSpec(opts core.Options) (specPath string, cleanup func(), err error) {
+	cfg := sdkConfigFromOptions(opts)
+	builder, err := sdkSpecBuilder(cfg)
+	if err != nil {
+		return "", nil, err
+	}
+	groups := sdkSpecGroupsIter()
+
+	tmpFile, err := os.CreateTemp("", "openapi-*.json")
+	if err != nil {
+		return "", nil, cli.Wrap(err, "create temp spec file")
+	}
+	tmpPath := tmpFile.Name()
+
+	if err := goapi.ExportSpecIter(tmpFile, "json", builder, groups); err != nil {
+		_ = tmpFile.Close()
+		coreio.Local.Delete(tmpPath)
+		return "", nil, cli.Wrap(err, "generate spec")
+	}
+	if err := tmpFile.Close(); err != nil {
+		coreio.Local.Delete(tmpPath)
+		return "", nil, cli.Wrap(err, "close temp spec file")
+	}
+
+	return tmpPath, func() { coreio.Local.Delete(tmpPath) }, nil
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/api/cmd_sdk.go` around lines 31 - 97, The sdkAction function's spec-generation block (the code that builds cfg via sdkConfigFromOptions, calls sdkSpecBuilder, iterates sdkSpecGroupsIter, creates a temp file, calls goapi.ExportSpecIter, closes the file, and sets resolvedSpecFile) should be extracted into a helper like generateTempSpec(opts core.Options) (specPath string, cleanup func(), err error); move the temp file creation, ExportSpecIter call, file close, and error wrapping into that helper, have it return the temp path plus a cleanup function that calls coreio.Local.Delete and any necessary closes, and update sdkAction to call the helper, defer the returned cleanup when non-nil, and assign gen.SpecPath to the returned specPath to reduce nesting and complexity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 64e02e66-ad5c-4058-b846-43813071e87a
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (251)
.core/TODO.md, .gitignore, .gitleaksignore, CLAUDE.md, RFC.md, api.go, api_describable_test.go, api_renderable_test.go, api_test.go, authentik.go, authentik_integration_test.go, authentik_test.go, authz_test.go, bridge.go, bridge_internal_test.go, bridge_test.go, brotli.go, brotli_test.go, cache.go, cache_config_test.go, cache_control.go, cache_control_test.go, cache_test.go, chat_completions.go, chat_completions_internal_test.go, chat_completions_test.go, client.go, client_test.go, cmd/api/cmd.go, cmd/api/cmd_args.go, cmd/api/cmd_args_test.go, cmd/api/cmd_sdk.go, cmd/api/cmd_sdk_test.go, cmd/api/cmd_spec.go, cmd/api/cmd_spec_test.go, cmd/api/cmd_test.go, cmd/api/spec_builder.go, cmd/api/spec_groups_iter.go, cmd/gateway/README.md, cmd/gateway/main.go, cmd/gateway/main_test.go, codegen.go, codegen_test.go, composer.json, docs/architecture.md, docs/index.md, entitlements.go, entitlements_test.go, export.go, export_test.go, expvar_test.go, go-io/go.mod, go-io/local.go, go-log/error.go, go-log/go.mod, go.mod, graphql.go, graphql_config_test.go, graphql_test.go, group.go, group_test.go, gzip_test.go, httpsign_test.go, i18n.go, i18n_test.go, json_helpers.go, location_test.go, middleware.go, middleware_test.go, modernization_test.go, openapi.go, openapi_test.go, options.go, options_test.go, pkg/provider/cache_control_example_test.go, pkg/provider/cache_control_test.go, pkg/provider/discovery.go, pkg/provider/discovery_test.go, pkg/provider/provider.go, pkg/provider/proxy.go, pkg/provider/proxy_test.go, pkg/provider/registry.go, pkg/provider/registry_test.go, pkg/stream/stream_group.go, pkg/stream/stream_group_example_test.go, pkg/stream/stream_group_test.go, pprof_test.go, ratelimit.go, ratelimit_internal_test.go, ratelimit_test.go, response_meta.go, response_meta_test.go, response_test.go, runtime_config_test.go, scripts/generate-sdks.sh, scripts/publish-sdks.sh, sdk-config/csharp.yaml, sdk-config/dart.yaml, sdk-config/go.yaml, sdk-config/java.yaml, sdk-config/kotlin.yaml, sdk-config/php.yaml, sdk-config/python.yaml, sdk-config/ruby.yaml, sdk-config/rust.yaml, sdk-config/swift.yaml, sdk-config/typescript.yaml, sdk.go, secure_test.go, serve_h3.go, servers.go, servers_test.go, sessions_test.go, slog_test.go, spec_builder_helper.go, spec_builder_helper_test.go, spec_registry.go, spec_registry_test.go, src/php/AUDIT-fail-open-controllers.md, src/php/phpunit.xml, src/php/src/Api/Boot.php, src/php/src/Api/Concerns/HasApiResponses.php, src/php/src/Api/Concerns/ResolvesWorkspace.php, src/php/src/Api/Console/Commands/CleanupExpiredGracePeriods.php, src/php/src/Api/Controllers/Api/ApiKeyController.php, src/php/src/Api/Controllers/Api/AuthController.php, src/php/src/Api/Controllers/Api/BiolinkController.php, src/php/src/Api/Controllers/Api/Concerns/SerialisesWorkspaceResource.php, src/php/src/Api/Controllers/Api/EntitlementApiController.php, src/php/src/Api/Controllers/Api/LinkController.php, src/php/src/Api/Controllers/Api/PaymentMethodController.php, src/php/src/Api/Controllers/Api/QrCodeController.php, src/php/src/Api/Controllers/Api/SeoReportController.php, src/php/src/Api/Controllers/Api/TicketController.php, src/php/src/Api/Controllers/Api/UnifiedPixelController.php, src/php/src/Api/Controllers/Api/WebhookController.php, src/php/src/Api/Controllers/Api/WebhookSecretController.php, src/php/src/Api/Controllers/Api/WebhookTemplateController.php, src/php/src/Api/Controllers/Api/WorkspaceMemberController.php, src/php/src/Api/Controllers/McpApiController.php, src/php/src/Api/Database/Factories/ApiKeyFactory.php, src/php/src/Api/Documentation/Attributes/ApiResponse.php, src/php/src/Api/Documentation/DocumentationController.php, src/php/src/Api/Documentation/DocumentationServiceProvider.php, src/php/src/Api/Documentation/Extensions/ApiKeyAuthExtension.php, src/php/src/Api/Documentation/Extensions/RateLimitExtension.php, src/php/src/Api/Documentation/Extensions/SunsetExtension.php, src/php/src/Api/Documentation/Middleware/ProtectDocumentation.php, src/php/src/Api/Documentation/OpenApiBuilder.php, src/php/src/Api/Documentation/Routes/docs.php, src/php/src/Api/Documentation/config.php, src/php/src/Api/Guards/AccessTokenGuard.php, src/php/src/Api/Jobs/DeliverWebhookJob.php, src/php/src/Api/Listeners/DispatchSubscriptionWebhookEvents.php, src/php/src/Api/Middleware/ApiCacheControl.php, src/php/src/Api/Middleware/AuthenticateApiKey.php, src/php/src/Api/Middleware/RateLimitApi.php, src/php/src/Api/Migrations/2026_04_15_000000_create_api_resource_tables.php, src/php/src/Api/Migrations/2026_04_15_000001_create_support_ticket_tables.php, src/php/src/Api/Models/ApiKey.php, src/php/src/Api/Models/Biolink.php, src/php/src/Api/Models/Concerns/BelongsToWorkspace.php, src/php/src/Api/Models/Link.php, src/php/src/Api/Models/QrCode.php, src/php/src/Api/Models/SupportTicket.php, src/php/src/Api/Models/SupportTicketReply.php, src/php/src/Api/Models/WebhookDelivery.php, src/php/src/Api/Models/WebhookEndpoint.php, src/php/src/Api/Observers/BiolinkWebhookObserver.php, src/php/src/Api/Observers/LinkWebhookObserver.php, src/php/src/Api/Observers/SupportTicketReplyWebhookObserver.php, src/php/src/Api/Observers/SupportTicketWebhookObserver.php, src/php/src/Api/Observers/WorkspaceWebhookObserver.php, src/php/src/Api/RateLimit/RateLimitService.php, src/php/src/Api/Resources/ApiKeyResource.php, src/php/src/Api/Routes/admin.php, src/php/src/Api/Routes/api.php, src/php/src/Api/Services/ApiKeyService.php, src/php/src/Api/Services/SeoReportService.php, src/php/src/Api/Services/WebhookService.php, src/php/src/Api/Services/WebhookTemplateService.php, src/php/src/Api/Tests/Feature/ApiKeyRotationTest.php, src/php/src/Api/Tests/Feature/ApiKeySecurityTest.php, src/php/src/Api/Tests/Feature/ApiKeyTest.php, src/php/src/Api/Tests/Feature/ApiScopeEnforcementTest.php, src/php/src/Api/Tests/Feature/AuthenticateApiKeyTest.php, src/php/src/Api/Tests/Feature/DocumentationControllerTest.php, src/php/src/Api/Tests/Feature/DocumentationStoplightTest.php, src/php/src/Api/Tests/Feature/McpApiControllerTest.php, src/php/src/Api/Tests/Feature/McpResourceTest.php, src/php/src/Api/Tests/Feature/McpServerAccessTest.php, src/php/src/Api/Tests/Feature/McpServerDetailTest.php, src/php/src/Api/Tests/Feature/RateLimitTest.php, src/php/src/Api/Tests/Feature/RateLimitingTest.php, src/php/src/Api/Tests/Feature/SeoReportEndpointTest.php, src/php/src/Api/Tests/Feature/SeoReportServiceTest.php, src/php/src/Api/Tests/Feature/WebhookDeliveryTest.php, src/php/src/Api/Tests/Feature/WebhookEndpointTest.php, src/php/src/Api/Tests/Feature/WebhookTemplateServiceTest.php, src/php/src/Api/config.php, src/php/src/Front/Api/Middleware/ApiSunset.php, src/php/src/Front/Api/Middleware/ApiVersion.php, src/php/src/Front/Api/VersionedRoutes.php, src/php/src/Front/Api/config.php, src/php/src/Website/Api/Controllers/DocsController.php, src/php/src/Website/Api/Routes/web.php, src/php/src/Website/Api/View/Blade/changelog.blade.php, src/php/src/Website/Api/View/Blade/guides/authentication.blade.php, src/php/src/Website/Api/View/Blade/guides/index.blade.php, src/php/src/Website/Api/View/Blade/guides/quickstart.blade.php, src/php/src/Website/Api/View/Blade/guides/rate-limits.blade.php, src/php/src/Website/Api/View/Blade/index.blade.php, src/php/src/Website/Api/View/Blade/sdks.blade.php, src/php/tests/Feature/ApiSunsetTest.php, src/php/tests/Feature/ApiVersionParsingTest.php, src/php/tests/Feature/ApiVersionServiceTest.php, src/php/tests/Feature/AuthenticationGuideTest.php, src/php/tests/Feature/DocsControllerTest.php, src/php/tests/Feature/VersionedRoutesTest.php, sse.go, sse_internal_test.go, sse_test.go, ssrf_guard.go, ssrf_guard_internal_test.go, static_test.go, sunset.go, sunset_internal_test.go, sunset_test.go, swagger.go, swagger_test.go, tests/Pest.php, tests/cli/api/Taskfile.yaml, text_helpers.go, threats.md, timeout_test.go, tracing.go, tracing_test.go, transformer.go, transformer_in.go, transformer_out.go, transformer_test.go, transport.go, transport_client.go, transport_client_test.go, transport_test.go, webhook.go, webhook_internal_test.go, webhook_test.go, websocket.go, websocket_internal_test.go, websocket_test.go
💤 Files with no reviewable changes (4)
- go-io/go.mod
- go-log/go.mod
- go-log/error.go
- go-io/local.go
```php
$data = $request->validate([
    'name' => ['required', 'string', 'max:255'],
    'target_url' => ['required', 'url', 'max:2048'],
    'format' => ['sometimes', 'string', 'in:png,svg'],
    'size' => ['sometimes', 'integer', 'min:64', 'max:2048'],
    'foreground_color' => ['sometimes', 'string', 'max:32'],
    'background_color' => ['sometimes', 'string', 'max:32'],
```
png is advertised but never produced.
store() accepts format=png, but download() always renders SVG and always returns an .svg attachment. As written, a persisted PNG code can never be downloaded in its stored format.
Minimal safe fix
```diff
- 'format' => ['sometimes', 'string', 'in:png,svg'],
+ 'format' => ['sometimes', 'string', 'in:svg'],
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/php/src/Api/Controllers/Api/QrCodeController.php` around lines 51 - 57,
store() accepts format=png but download() always returns SVG; update both
store() and download() to honor the QR format (use $data['format'] when creating
and persist $qr->format, and use $qr->format or the request format when
serving). In store(), render and persist the QR in the requested format (write
PNG bytes and set file extension .png when format==='png', otherwise save SVG).
In download(), check $qr->format (or request->query('format') fallback) and
return the correct content-type and filename extension (image/png + .png for
PNG; image/svg+xml + .svg for SVG), and ensure the response body is PNG binary
for PNG and SVG text for SVG. Use the existing render/generator functions you
have to produce PNG bytes when needed and set appropriate headers instead of
always streaming SVG.
```php
/**
 * Render a deterministic SVG fallback when no QR library is present.
 */
protected function renderSvg(QrCode $code): string
{
    $size = max(64, (int) ($code->size ?? 256));
    $cells = 21;
    $cellSize = max(1, (int) floor($size / $cells));
    $foreground = $code->foreground_color ?: '#000000';
    $background = $code->background_color ?: '#ffffff';
    $hash = hash('sha256', $code->target_url);

    $rects = [];
    $index = 0;
    for ($y = 0; $y < $cells; $y++) {
        for ($x = 0; $x < $cells; $x++) {
            $nibble = hexdec($hash[$index % strlen($hash)]);
            if (($nibble % 2) === 0) {
                $rects[] = sprintf(
                    '<rect x="%d" y="%d" width="%d" height="%d" fill="%s" />',
                    $x * $cellSize,
                    $y * $cellSize,
                    $cellSize,
                    $cellSize,
                    htmlspecialchars($foreground, ENT_QUOTES, 'UTF-8')
                );
            }
            $index++;
        }
    }

    return sprintf(
        '<svg xmlns="http://www.w3.org/2000/svg" width="%1$d" height="%1$d" viewBox="0 0 %1$d %1$d"><rect width="100%%" height="100%%" fill="%2$s" />%3$s</svg>',
        $size,
        htmlspecialchars($background, ENT_QUOTES, 'UTF-8'),
        implode('', $rects)
    );
}
```
This does not generate a real QR code.
The SVG is only a hash-derived pattern; it never encodes target_url into QR modules, so a scanner will not recover the stored URL. Until a proper encoder is wired in, this endpoint is functionally broken.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/php/src/Api/Controllers/Api/QrCodeController.php` around lines 113 - 149,
renderSvg currently builds a hash-based decorative SVG, not a scannable QR;
replace the placeholder logic in renderSvg(QrCode $code) so it actually encodes
$code->target_url into QR modules using a QR encoder library (e.g.,
Endroid\QrCode or BaconQrCode): instantiate the library's QR generator, set size
from $code->size and colors from $code->foreground_color /
$code->background_color, generate an SVG output and return that string. If no QR
library is available at runtime, fail fast by throwing an exception or returning
a 501/meaningful error rather than the hash pattern so callers know the endpoint
is nonfunctional. Ensure you keep method signature renderSvg and preserve
escaping/encoding for color values.
Critical:
- brotli.go: strip Content-Encoding+Vary headers on 4xx/5xx responses
before WriteHeader/WriteHeaderNow, so error payloads aren't mislabelled
as Brotli (handlers can't compress, but the brotli writer was claiming
br encoding regardless of status).
Major (security):
- ResolvesWorkspace.php: fail closed when route('workspace') is set but
invalid/non-numeric, instead of falling through to the user's default
workspace. Prevents workspace-scoped requests being routed onto the
wrong tenant.
- TicketController.store(): refuse to create tickets without either a
workspace OR a user context. Orphaned tickets are unreachable via
findTicket() anyway and the unauthenticated path was a privilege
escalation surface.
- WebhookController.serializeDelivery(): truncate response_body to 256
chars before returning. Full bodies can carry secrets/tokens/PII;
delivery-history endpoint must not leak them in raw form.
- WorkspaceMemberController.store(): drop "owner" from the generic
invite role allowlist. Owner invitations need a separate, guarded
flow.
- pkg/provider/discovery.go: reject symlinks in any ancestor path
segment, not just the final component. Compares cleaned input path
against EvalSymlinks() result.
Co-authored-by: Hephaestus <hephaestus@cladius>
- api.go Channels()/ChannelsIter(): walk e.streamGroups (apistream
StreamGroup slice) in addition to e.groups, so channels declared via
RegisterStreamGroup are visible to channel-discovery callers. Filters
for ProtocolWebSocket on the apistream path; SSE handlers don't count
as channels.
- codegen.go packageNameRe: allow dots so Java/dotted-language SDKs
(com.example.api, org.openapitools.client) pass validation. Keeps the
flag-injection guard intent (no spaces, no leading digit, no shell
meta-chars).
- entitlements.go: cap entitlement response body reads at 1 MiB via
io.LimitReader, defending against malformed/hostile upstream services
consuming unbounded memory on the request path.
- middleware.go requestIDMiddleware: handle randomRead() errors instead
of silently consuming them. On rng failure, fall back to a timestamp-
based ID so tracing isn't blanked by a constant
"00000000000000000000000000000000".
- serve_h3.go ServeH3: guard against nil context before reaching
ctx.Done() (which would panic). Adds ErrNilContext sentinel.
- scripts/publish-sdks.sh: pass --no-git-tag-version to npm version so
the explicit `git tag v${VERSION}` later in the script doesn't fail
with "tag already exists". Pack csharp into ./nupkgs and find from
there to avoid pushing stale .nupkg files from previous runs.
Co-authored-by: Hephaestus <hephaestus@cladius>
CodeRabbit flagged pkg/provider/{discovery,proxy,registry}_test.go as
using "SPDX-Licence-Identifier" (UK spelling) instead of the canonical
"SPDX-License-Identifier" (the actual SPDX standard token). Standard
licence-detection tooling (Reuse, ScanCode, fossology, etc.) matches on
the literal SPDX string and would skip these files as license-unknown.
UK English remains the convention for in-tree prose and identifiers
elsewhere; SPDX is a fixed external-standard token, not a stylistic
choice.
Co-authored-by: Hephaestus <hephaestus@cladius>
… race

- Stop sequences: accept the OpenAI string-or-array form. Adds a chatStopList type with a custom UnmarshalJSON that normalises "stop":"\n\n" and "stop":["\n","stop"] into the same []string. parsedStopTokens now skips non-numeric entries (text stops are applied client-side via firstStopSequenceCut; only token IDs feed inference.WithStopTokens).
- ModelResolver.loadByPath: serialise concurrent loads of the same path via a per-path inFlight channel. Two cache misses for the same model could each call inference.LoadModel(), and one loaded instance was thrown away, wasting both time and memory. Followers now block on the in-flight channel and re-check the cache when the leader closes it.
- validateChatRequest: bounds-check sampling parameters (temperature [0,2], top_p [0,1], top_k >= 0, max_tokens >= 0) before they reach inference. A negative max_tokens previously bypassed downstream length caps while still being forwarded to the backend.

Co-authored-by: Hephaestus <hephaestus@cladius>
CodeRabbit flagged that renderSvg returned a hash-derived placeholder SVG that scanners cannot decode, and that store() advertised a `png` format that download() never produced. Both would ship in a broken state. Until a real QR encoder is wired in (e.g. endroid/qr-code or bacon/bacon-qr-code; picking a library also means picking a licence boundary), make the controller fail loud rather than fail silent:

- download() now returns 501 Not Implemented with a clear message instead of serving a non-decodable hash-pattern SVG.
- store() validation drops `png` from the accepted format set, since download() can't produce PNG either. SVG remains accepted at the record level so callers can still register codes.
- renderSvg() removed entirely (it was only called by download()).

Database records can still be created and listed; the download endpoint clearly signals the missing implementation. The v0.9.0 batch should pick up the encoder integration.

Co-authored-by: Hephaestus <hephaestus@cladius>
CodeRabbit flagged two issues that need addressing without reverting the deliberate RFC-alignment changes:

1) HasApiResponses: limitReachedResponse + validationErrorResponse renamed wire error_code values from feature_limit_reached → entitlement_exceeded and validation_failed → validation_error per commit de002f9 "Align API error responses with RFC". CodeRabbit is right that this breaks clients/SDKs branching on error_code. Fix: emit the old code as `legacy_error_code` in the response meta during the deprecation window. The new code lives in `error_code` (the canonical RFC-aligned name); the old code lives in `meta.legacy_error_code` for client-side compatibility. Both docblocks note v1.0 as the planned removal point.

2) composer.json: added the missing ../php (lthn/php framework) path repository alongside the existing ../php-tenant entry. The local workspace already had ../php-tenant, but composer install would fail to resolve the lthn/php direct dependency without ../php declared. Sibling lthn/php-tenant follows the same pattern.

CodeRabbit's wider point about wildcard versions + path-only repos breaking CI is correct but architectural: it affects all 8+ lthn/php-* repos uniformly. Tracked separately for the v0.9.0 batch (Packagist / composer-registry move).

Co-authored-by: Hephaestus <hephaestus@cladius>
@coderabbitai review

Disposition note for the 20 findings on this PR. Fixed in this PR (all 20 acted on):

Critical:
Major:
Outside-diff:

CI is failing on composer install because path-only sibling repos (

Critical (3/3 resolved)
Major (17/17 resolved)
Outside-diff

CI

Triggering re-review now against the updated commits.

✏️ Learnings added
✅ Actions performed

Review triggered.
…indings SonarCloud flagged 18 "IP addresses should not be hardcoded" findings across: - pkg/provider/proxy.go:388-390,395,397,398,400,403,404 (9 findings) - ssrf_guard.go:52,53,54 (3 findings) - webhook.go:354-359,362,363 (6 findings) All 18 are false positives. The hardcoded IPs/CIDRs ARE the SSRF security boundary — vendor-fixed cloud-metadata service endpoints (169.254.169.254 etc.) and IETF-defined private/reserved/special-use ranges per RFC 5735, RFC 6890, RFC 4193, RFC 4291. Configurability would defeat the defence: an attacker who could rewrite the deny-list through configuration would just remove their target IP. Added explanatory comment blocks to each map/slice noting: 1. Why these IPs MUST be hardcoded (security boundary, not config). 2. Per-CIDR RFC citation so future readers see the source of truth. 3. Explicit "SonarCloud is a false positive on this list" note. No code-behaviour change — pure documentation. Future SonarCloud scans will still flag the lines (the rule looks at literals, not comments), but the intent is now visible to human reviewers and auditors. Co-authored-by: Hephaestus <hephaestus@cladius>
Addendum: 18 SonarCloud findings (hardcoded IPs)

SonarCloud's [github-advanced-security/SonarCloud] reviewer flagged 18 instances of rule
Disposition: all 18 are false positives. Pushing back with documentation, not code change (commit 107afe4). These IPs/CIDRs are not configuration — they ARE the SSRF security boundary itself:
This pattern is canonical SSRF defence: it's the same shape OWASP recommends, the same shape every battle-tested SSRF library (e.g.

The 107afe4 commit adds:
Future scans will still flag the literal IPs (the rule looks at literals, not comments), but the intent is now visible to human reviewers and auditors. If SonarCloud supports a

That now closes the disposition for all 38 findings on this PR (20 CodeRabbit + 18 SonarCloud).
Brings exported-surface docstring coverage on the codex-pushed PR #4 delta (commit 107afe4) from 84.4% to 100.0% — 5 standard error-interface methods (Error / Is / Unwrap) on ProviderUpstreamBlockedError + blockedURLError were missing usage-example docstrings. Pure docs. No behaviour change. gofmt-clean. Co-authored-by: Hephaestus <hephaestus@cladius>
```go
// false positive on this list.
var providerBlockedCIDRs = mustParseProviderCIDRs(
	"0.0.0.0/8",       // RFC 1122 "this network"
	"100.64.0.0/10",   // RFC 6598 carrier-grade NAT
	"127.0.0.0/8",     // RFC 1122 loopback
	"169.254.0.0/16",  // RFC 3927 link-local
	"192.0.0.0/24",    // RFC 6890 IETF protocol assignments
	"192.0.2.0/24",    // RFC 5737 TEST-NET-1
	"198.18.0.0/15",   // RFC 2544 benchmark
	"198.51.100.0/24", // RFC 5737 TEST-NET-2
	"203.0.113.0/24",  // RFC 5737 TEST-NET-3
	"224.0.0.0/4",     // RFC 5771 multicast
	"240.0.0.0/4",     // RFC 1112 reserved
)
```


Routine dev→main PR opened by Hephaestus PR-cadence task.
This PR exists to:
ahead_by: 229 commit(s) (per gh api compare)

If CodeRabbit clears with no blocking comments, this PR is eligible for gh pr merge --merge (a real merge commit, no force-push). Conflicts and review feedback should be addressed on the dev branch before merge.

Co-authored-by: Hephaestus <hephaestus@cladius>
Summary by CodeRabbit
Release Notes
/v1/chat/completions