Conversation
Co-Authored-By: Virgil <virgil@lethean.io>
In-memory Medium implementation: tracks files, dirs, modification times. Used by go-cache and other downstream packages for test isolation. Co-Authored-By: Virgil <virgil@lethean.io>
…re primitives Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath, errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim, core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(), core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives. Framework boundary exceptions preserved where stdlib types are required by external interfaces (Gin, net/http, CGo, Wails, bubbletea). Co-Authored-By: Virgil <virgil@lethean.io>
Implements the two remaining unimplemented code gaps from RFC §14 (Enchantrix Integration) and §15 (Named Actions): - cube.Medium: transparent encryption layer that wraps any io.Medium with XChaCha20-Poly1305. Reads decrypt, writes encrypt, the inner backend never sees plaintext. Composes with itself (nested cubes with different keys). - cube.Pack/Unpack/Open: portable encrypted DataCube archives. Pack walks a source Medium into a tar archive, encrypts via the Sigil framework, and writes the ciphertext. Open returns an in-memory Medium over the decrypted tar for direct reads without unpacking. - Named action registry: core.io.local.read|write|list|delete, core.io.memory.read|write, core.io.copy (io.RegisterActions), core.io.cube.read|write|pack|unpack (cube.RegisterActions), and core.io.s3.read|write (s3.RegisterActions). All backends use the existing Sigil framework and core primitives (no banned imports). Tests cover Good/Bad/Ugly for every function, all 10 packages pass go build / vet / test. Co-Authored-By: Virgil <virgil@lethean.io>
… (AX-10) Adds tests/cli/io/Taskfile.yaml with canonical build/test/vet/default targets plus test-unit and S3-gated test-integration. default deps-chains build/test/vet per Wave 2 convention. test-integration skips cleanly when no S3 endpoint env var is set. Co-authored-by: Codex <noreply@openai.com> Via-codex-lane: supervised by Cerberus on Athena #102 request Closes tasks.lthn.sh/view.php?id=295
Dropped the embedded "core" segment from the module path per RFC,
aligning with graduated Go repos (dappco.re/go/{name}). Updated go.mod
+ 15 *.go self-imports across node/, s3/, sqlite/, store/, workspace/
subpackages.
Pre-existing go.sum gap (golang.org/x/crypto + golang.org/x/sys) is
unrelated and reproduces on dev HEAD.
Closes tasks.lthn.sh/view.php?id=629
Co-authored-by: Codex <noreply@openai.com>
Core lacks RenderTemplate/Template primitive. text/template retained in KeyValueStore.Render with // Note: AX-6 intrinsic — structural for KeyValueStore templating; core exposes no template primitive. Race PASS. Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=294
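The retained text/template usage has roughly this shape. A minimal sketch, assuming a KeyValueStore.Render-style helper that feeds stored key-value pairs into a template; the render function name and signature here are illustrative, not the package's actual API.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render executes a text/template against key-value data, the shape
// a store-backed Render helper would take.
func render(tpl string, values map[string]string) (string, error) {
	t, err := template.New("kv").Parse(tpl)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := t.Execute(&sb, values); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	out, err := render("hello {{.name}}", map[string]string{"name": "world"})
	fmt.Println(out, err)
}
```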
Removed bytes import. Replaced ReadStream's bytes.NewReader with existing Open reader path. Added cubeArchiveBuffer (with AX-6 note — pinned core lacks NewBuffer). Tests pass. Co-authored-by: Codex <noreply@openai.com>
ChaChaPolySigil supports nonce bytes (12-byte ChaCha20-Poly1305 or 24-byte XChaCha), copies key/nonce input, validates AEAD mode for In/Out. Registered "chacha20poly1305" in NewSigil factory. Unblocks cube/cube.go compile. Closes tasks.lthn.sh/view.php?id=628 Co-authored-by: Codex <noreply@openai.com>
…ant restored (CRITICAL) NewChaChaPolySigil(key, nonce any) now only accepts nil or PreObfuscator. Passing []byte (or any other type) returns a typed error: "fixed-nonce []byte path removed; use PreObfuscator or nil". In() always reads a fresh random nonce per call. The stored fixed- nonce field is removed from the struct. Cerberus DREAD CRITICAL #1049 — ChaCha20-Poly1305 catastrophically fails under nonce reuse: leaks XOR(plaintext_a, plaintext_b) and lets an attacker recover the Poly1305 one-time key for unlimited authenticated forgeries. Production callers (cube/cube.go x4, workspace/service.go x1) all pass nil → safe. This closes the API shape that invited future deterministic-ciphertext misuse. Doc comment warns explicitly: nonce uniqueness is the invariant; if deterministic AEAD is needed, that's a future PR backed by AES-GCM-SIV. Tests: - []byte non-empty / empty / typed-nil → all rejected - string nonce → rejected - Same-plaintext encryption produces DIFFERENT nonce prefixes AND DIFFERENT ciphertexts (regression catches any future fixed-nonce reintroduction) Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=1049
NewSigil("chacha20poly1305") used to return an unkeyed ChaChaPolySigil
which could never successfully encrypt — newAEAD(nil) returns "bad key
length" and there was no SetKey hook to install one post-construction.
Half-built factory entry: a footgun for callers who'd reasonably expect
the factory to return a usable sigil.
Removed the case entirely. Callers must go through NewChaChaPolySigil
(key, nonce) — the only path that requires + installs key material.
chacha20poly1305 import dropped (no longer used by sigils.go after #1049
landed the nonce-uniqueness invariant in crypto_sigil.go).
Cerberus #1059 from workspace-wide sniff.
Tests: NewSigil("hex") succeeds (sanity); NewSigil("chacha20poly1305")
errors with "scheme requires key material; use NewChaChaPolySigil".
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1059
…dispatcher (#1018)
Per RFC §5 + §15 (umbrella for #631-#634): non-Go consumers need access
to go-io's Workspace service + Medium service surface.
Lands the standard provider Service Framework + 18-action dispatcher:
* pkg/api/provider.go — Provider implementing api.Provider, /v1 base
path, 5 route registrations, OpenAPI descriptions
* pkg/api/handlers.go — workspace handlers (501 with TODO #631 since
Workspace service not yet wired in actions.go), Medium dispatcher,
18-action /v1/io/{action} dispatcher delegating to wired actions
and returning 501 with TODO #632 for the 11 missing actions
* AX-10 tests: provider Good/Bad/Ugly + httptest Good/Bad per route +
duplicate-registration safety + 5-route describe assertion
Surface scaffolded:
- POST /v1/workspace
- POST /v1/workspace/{id}/switch
- POST /v1/workspace/{id}/command
- POST /v1/medium/{type}/{op}
- POST /v1/io/{action} (18-action dispatcher)
go.mod adds dappco.re/go/api + gin. Provider mounting in core/api Engine
left to follow-up. Child tickets #631-#634 stay open until their
respective Workspace/Medium impls land.
GOWORK=temp go test ./... passed.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1018
Per RFC §12.2 Medium Registry: SFTP + WebDAV backends were specified but missing. Lands: * pkg/medium/sftp/ — full coreio.Medium interface backed by github.com/pkg/sftp + golang.org/x/crypto/ssh * pkg/medium/webdav/ — full coreio.Medium interface backed by golang.org/x/net/webdav client * register.go in each backend: factory + action registration so import triggers wiring. Adds core.io.sftp.read/write + core.io.webdav.read/write action hooks. * AX-10 tests for both backends (Read/Write/List Good/Bad/Ugly) * go.mod adds github.com/pkg/sftp + golang.org/x/net (golang.org/x/crypto already present); Borg/Trixxie/Poindexter paths untouched per memory GOWORK=temp go build + go test pass (root /Users/snider/Code/go.work has unrelated dappco.re/go/api replacement conflict; supervisor handles). Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=634
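The register.go wiring described above ("import triggers wiring") follows Go's init-time registration idiom: each backend registers its factory in init(), so a blank import of the package is enough to make the scheme available. A minimal sketch with hypothetical names:

```go
package main

import "fmt"

// factory builds a medium for a scheme; the real interface is richer.
type factory func() string

var registry = map[string]factory{}

// register is called from each backend's init(), so importing the
// backend package (even blank-imported) wires it into the registry.
func register(scheme string, f factory) { registry[scheme] = f }

func init() {
	register("sftp", func() string { return "sftp medium" })
	register("webdav", func() string { return "webdav medium" })
}

func main() {
	for _, scheme := range []string{"sftp", "webdav"} {
		fmt.Println(scheme, "->", registry[scheme]())
	}
}
```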
…pkg/api handlers (#631)
Per RFC §5: CreateWorkspace, SwitchWorkspace, ReadWorkspaceFile,
WriteWorkspaceFile, HandleWorkspaceCommand. Lands the package and wires
pkg/api Workspace handlers (replacing #1018's 501 stubs).
Lands:
* workspace/workspace.go — Workspace type backed by Medium; scoped
workspace media (named subdirectory per workspace); path + name
validation
* workspace/command.go — WorkspaceCommand DTO + action constants
* workspace/{workspace,command}_test.go — AX-10 Good/Bad/Ugly per method
* workspace/service.go — small rename to disambiguate prior
encrypted-service Workspace declaration vs RFC §5 Workspace
* pkg/api/handlers.go — Workspace endpoints (POST /v1/workspace,
/v1/workspace/{id}/switch, /v1/workspace/{id}/command) now delegate
to workspace.HandleWorkspaceCommand instead of returning 501
* pkg/api/handlers_test.go — happy-path assertions replace TODO stubs
GOWORK=temp go test ./workspace ./pkg/api passes (root go.work has
unrelated dappco.re/go/api replacement conflict; supervisor handles).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=631
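A minimal sketch of the command-dispatch shape the WorkspaceCommand DTO implies. The field names, action strings, and validation here are illustrative, not the package's actual API; the point is the single switch over actions with up-front name validation.

```go
package main

import (
	"errors"
	"fmt"
)

// command loosely mirrors a WorkspaceCommand DTO: an action plus a
// workspace name.
type command struct {
	Action    string
	Workspace string
}

// handle validates the name once, then dispatches on the action.
func handle(cmd command) (string, error) {
	if cmd.Workspace == "" {
		return "", errors.New("workspace identifier is required")
	}
	switch cmd.Action {
	case "create":
		return "created " + cmd.Workspace, nil
	case "switch":
		return "switched to " + cmd.Workspace, nil
	default:
		return "", fmt.Errorf("unknown action %q", cmd.Action)
	}
}

func main() {
	out, err := handle(command{Action: "create", Workspace: "alice"})
	fmt.Println(out, err)
	_, err = handle(command{Action: "create"})
	fmt.Println(err)
}
```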
… github/pwa stub) (#632)
Per RFC §15 + #1018 follow-up: actions.go registered only 7 of 18 named
actions. Wires the remaining 11.
Lands:
* actions.go — 11 new handlers:
- core.io.sftp.{read,write} — delegate to pkg/medium/sftp/ (#634)
- core.io.s3.{read,write} — delegate to existing s3 backend
- core.io.cube.{read,write,pack,unpack} — delegate to existing cube
backend; new pack/unpack handlers
- core.io.github.{clone,read} + core.io.pwa.scrape — return 501 with
TODO #633 (backend implementation tracked there)
* pkg/api/handlers.go — removed the 501 Wired gate so /v1/io/{action}
delegates through Core action registry (no more #1018 stubs)
* actions_test.go + pkg/api/handlers_test.go — AX-10 + httptest covering
registration, backend happy paths, stub-not-implemented, backend
errors, former-missing API delegation
GOWORK=off go test passes in temp copy; direct sandbox blocked by go.work
replacement conflict.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=632
…13 (#633)
GitHub Medium: REST contents-backed reads via google/go-github + oauth2,
recursive List, Clone action, auth via GITHUB_TOKEN env or
~/.config/lthn/github-token. Read-only mutators return ErrReadOnly.
Registers core.io.github.{read,list,clone} actions.
PWA Medium: stub returning ErrNotImplemented uniformly. Doc comment
documents the load-bearing dep chain: forge.lthn.ai/Snider/Borg
(Borg IS the PWA collector — headless-browser scraping is one of Borg's
many roles; Borg also wraps the scraped artefact in a DataNode and
io.Medium was designed FOR DataNodes from day 1) and
forge.lthn.ai/Snider/Enchantrix (encrypts the fetched payload at rest;
Trixxie is THE encryption layer for ALL encryption across the stack).
Wire BOTH at canonical forge.lthn.ai/Snider/* paths when scaffolded —
never migrate to dappco.re/*.
go test ./... + go build ./... pass under isolated GOWORK=off (parent
workspace has unrelated replacement conflicts, #1020).
Closes tasks.lthn.sh/view.php?id=633
Co-authored-by: Codex <noreply@openai.com>
…(#635) Across six files (io.go, s3/s3.go, cube/cube.go, node/node.go, sqlite/sqlite.go, sigil/sigils.go): - path → core.PathDir / core.PathBase / core.CleanPath - goio.ReadAll → core.ReadAll - bytes.NewReader → core.NewReader where applicable - bytes.Buffer + slices removed via small local buffers + sort helpers - Remaining sanctioned stdlib imports (io interface types, io/fs, archive/tar, compress/gzip, time, context) annotated with // AX-6-exception: documenting why no core/* equivalent exists go build ./... passes via isolated temp modfile (parent workspace has unrelated dappco.re/go/api replace conflict, #1020). Repo go.mod/go.sum untouched. Closes tasks.lthn.sh/view.php?id=635 Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 5
⚠️ Outside diff range comments (2)
store/store.go (1)
12-16: ⚠️ Potential issue | 🟠 Major
Keep NotFoundError as a plain sentinel.
NotFoundError should remain an errors.New(...) value; using core.E(...) changes the sentinel shape and makes the doc comment inaccurate. As per coding guidelines, sentinel errors should use standard errors.New() — only inline error returns in functions should use coreerr.E().
🔧 Proposed fix
 import (
 	"database/sql"
+	"errors"
 	"io/fs"
 	"text/template" // Note: AX-6 intrinsic - structural for KeyValueStore.Render templating; core exposes no template primitive.
@@
-var NotFoundError = core.E("store", "key not found", nil)
+var NotFoundError = errors.New("key not found")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@store/store.go` around lines 12-16: the NotFoundError sentinel should be a plain errors.New value rather than core.E(...) so callers can reliably use errors.Is identity checks. Replace var NotFoundError = core.E("store", "key not found", nil) with var NotFoundError = errors.New("store: key not found"), add the standard "errors" package to the imports, remove the use of core.E, and keep the existing doc comment referencing errors.Is and the NotFoundError symbol.
s3/s3.go (1)
472-479: ⚠️ Potential issue | 🟠 Major
Don't treat body-read failures as an empty object.
If GetObject succeeds but reading out.Body fails, existing stays empty and the eventual close overwrites the object with only the appended suffix. Bubble the read error up instead of silently dropping the original content.
🔧 Proposed fix
 if err == nil {
-	if content, readErr := readAllString(out.Body); readErr == nil {
-		existing = []byte(content)
-	}
+	content, readErr := readAllString(out.Body)
+	if readErr != nil {
+		return nil, core.E("s3.Append", core.Concat("failed to read existing object: ", key), readErr)
+	}
+	existing = []byte(content)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s3/s3.go` around lines 472-479: the code currently treats a failure reading out.Body as if the object were empty (leaving existing as an empty slice). When medium.client.GetObject succeeds but readAllString(out.Body) returns an error, propagate that read error instead of overwriting existing, ensure out.Body is closed, and make sure read errors are returned or assigned to err (not ignored), so existing is only populated on a successful read.
🟠 Major comments (17)
mock.go-109-118 (1)
109-118: ⚠️ Potential issue | 🟠 Major
Rename drops file metadata.
m.meta is not moved from oldPath to newPath, so mode/modtime become zeroed for the renamed file.
Proposed fix
 func (m *MockMedium) Rename(oldPath, newPath string) error {
 	m.mu.Lock()
 	defer m.mu.Unlock()
 	f, ok := m.Files[oldPath]
 	if !ok {
 		return fs.ErrNotExist
 	}
 	m.Files[newPath] = f
 	delete(m.Files, oldPath)
+	if metadata, ok := m.meta[oldPath]; ok {
+		m.meta[newPath] = metadata
+		delete(m.meta, oldPath)
+	}
 	return nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mock.go` around lines 109-118: the Rename method on MockMedium copies the file data but not its metadata, so mode/modtime are lost. Inside the existing mutex-protected block, also move the entry in m.meta (set m.meta[newPath] = m.meta[oldPath] and delete m.meta[oldPath]), doing nothing when no metadata exists, so metadata is preserved across renames.
mock.go-90-95 (1)
90-95: ⚠️ Potential issue | 🟠 Major
Delete silently succeeds for missing paths.
Returning nil when nothing was removed hides caller mistakes and diverges from the rest of the Medium implementations.
Proposed fix
 func (m *MockMedium) Delete(path string) error {
 	m.mu.Lock()
 	defer m.mu.Unlock()
-	delete(m.Files, path)
-	return nil
+	if _, ok := m.Files[path]; ok {
+		delete(m.Files, path)
+		delete(m.meta, path)
+		return nil
+	}
+	if _, ok := m.dirs[path]; ok {
+		delete(m.dirs, path)
+		return nil
+	}
+	return fs.ErrNotExist
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mock.go` around lines 90-95: MockMedium.Delete currently returns nil even when the path was not present, which hides caller errors and diverges from the other Medium implementations. Check whether the key exists before deleting and return a not-found error (e.g. os.ErrNotExist or a clear fmt.Errorf) when the path is missing; otherwise perform the delete and return nil.
mock.go-97-106 (1)
97-106: ⚠️ Potential issue | 🟠 Major
DeleteAll leaves stale directory/meta state and never reports missing paths.
It removes file bodies only, but not corresponding metadata or nested directory entries, and always returns nil.
Proposed fix
 func (m *MockMedium) DeleteAll(path string) error {
 	m.mu.Lock()
 	defer m.mu.Unlock()
+	found := false
 	for k := range m.Files {
 		if k == path || len(k) > len(path) && k[:len(path)+1] == path+"/" {
 			delete(m.Files, k)
+			delete(m.meta, k)
+			found = true
 		}
 	}
-	delete(m.dirs, path)
-	return nil
+	for d := range m.dirs {
+		if d == path || len(d) > len(path) && d[:len(path)+1] == path+"/" {
+			delete(m.dirs, d)
+			found = true
+		}
+	}
+	if !found {
+		return fs.ErrNotExist
+	}
+	return nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mock.go` around lines 97-106: DeleteAll only deletes file bodies and always returns nil, leaving stale directory entries and never signalling missing paths. Update MockMedium.DeleteAll to (1) check whether the target path exists in either m.Files or m.dirs and return fs.ErrNotExist if it does not, (2) delete all file keys that equal path or start with path+"/", (3) remove any directory entries in m.dirs that equal or are prefixed by path+"/", and (4) remove the associated metadata for those files; keep the existing m.mu lock/unlock and return nil only on successful deletion.
pkg/medium/webdav/webdav.go-67-70 (1)
67-70: ⚠️ Potential issue | 🟠 Major
Add timeout to HTTP client to prevent indefinite hangs.
The fallback HTTP client has no timeout, which causes operations to hang indefinitely under network failures or unresponsive servers. This affects all operations: Read, Write, Delete, Stat, List, and Rename.
Proposed fix
 client := options.Client
 if client == nil {
-	client = http.DefaultClient
+	client = &http.Client{Timeout: 30 * time.Second}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/medium/webdav/webdav.go` around lines 67-70: the current fallback assigns http.DefaultClient, which has no timeout and can hang. Construct a dedicated *http.Client with a sensible Timeout (e.g. 15-30s) and assign it when options.Client is nil, so all WebDAV operations (Read, Write, Delete, Stat, List, Rename) use a timed client; only replace the nil case and reuse a provided options.Client unchanged.
s3/s3.go-72-85 (1)
72-85: ⚠️ Potential issue | 🟠 Major
Close the S3 body in readAllString and handle read errors in Append.
The readAllString helper (lines 72-85) reads from GetObject responses but never closes the reader. This causes HTTP connection leaks across Read, Open, and Append calls. AWS SDK for Go v2 requires callers to close io.ReadCloser bodies to avoid keeping connections open.
Additionally, in the Append method (lines 477-479), a read failure from the existing object is silently ignored: if readAllString fails, existing remains empty and the method proceeds to overwrite the object with only the appended content, causing data loss.
Fix readAllString to accept and close the reader:
🔧 Proposed fix for readAllString
-func readAllString(reader any) (string, error) {
+func readAllString(reader goio.ReadCloser) (string, error) {
+	defer reader.Close()
+
 	result := core.ReadAll(reader)
 	if !result.OK {
 		if err, ok := result.Value.(error); ok {
 			return "", err
Also fix Append to propagate read errors instead of swallowing them, so failed reads don't silently lose data.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s3/s3.go` around lines 72-85: readAllString currently takes any and uses core.ReadAll without closing the underlying body, leaking HTTP connections. Change readAllString to accept an io.ReadCloser (or an io.Reader plus optional closer), call core.ReadAll to get the contents, and always close the ReadCloser (defer rc.Close()) before returning, propagating any read errors as returned errors (use fs.ErrInvalid only for type assertions). In the Append method, stop swallowing read errors from readAllString: check the returned error from reading the GetObject body and return it (or wrap it with context) instead of treating existing as empty, so a failed read does not overwrite object data.
s3/actions.go-42-67 (1)
42-67: ⚠️ Potential issue | 🟠 Major
Guard against a nil medium after the type assertion.
A typed nil *Medium will pass the assertion and then panic on Read/Write. Treat nil as invalid in both handlers.
🔧 Suggested fix
 func readAction(_ context.Context, opts core.Options) core.Result {
 	medium, ok := opts.Get("medium").Value.(*Medium)
-	if !ok {
+	if !ok || medium == nil {
 		return core.Result{}.New(core.E("s3.readAction", "medium is required", fs.ErrInvalid))
 	}
 	content, err := medium.Read(opts.String("path"))
 	if err != nil {
@@
 func writeAction(_ context.Context, opts core.Options) core.Result {
 	medium, ok := opts.Get("medium").Value.(*Medium)
-	if !ok {
+	if !ok || medium == nil {
 		return core.Result{}.New(core.E("s3.writeAction", "medium is required", fs.ErrInvalid))
 	}
 	if err := medium.Write(opts.String("path"), opts.String("content")); err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s3/actions.go` around lines 42-67: the readAction and writeAction handlers only check the type assertion for opts.Get("medium").Value.(*Medium) and do not guard against a typed nil Medium, which panics when calling Read/Write. After the type assertion in both handlers, add an explicit nil check (if !ok || medium == nil) and return a core.Result error via core.E("s3.readAction", "medium is required", fs.ErrInvalid) and core.E("s3.writeAction", "medium is required", fs.ErrInvalid) respectively, so a nil *Medium is treated as invalid rather than causing a panic.
cube/actions.go-39-42 (1)
39-42: ⚠️ Potential issue | 🟠 Major
Don't overwrite the existing cube read/write contract.
These actions reuse core.io.cube.read and core.io.cube.write, but the new handlers only accept inner + key. The existing registry exercised in actions_test.go, lines 400-429, already accepts a prebuilt medium. If both registrars are used on the same *core.Core, the later call silently narrows the contract and breaks current callers. Please either reuse the shared handler implementation or support both option shapes here.
Also applies to: 50-92
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cube/actions.go` around lines 39-42: the new registrations for ActionRead/ActionWrite (and similarly pack/unpack) replace the existing core.io.cube.read/write contract by accepting only {inner, key}. Update the handlers (readAction, writeAction, packAction, unpackAction) to accept both the original options shape (including a prebuilt medium) and the new {inner, key} shape, or delegate to the shared implementation used by the existing registry in actions_test.go, so the contract is not narrowed; locate the registrations c.Action(ActionRead, readAction) / c.Action(ActionWrite, writeAction), handle both option forms, and preserve backward compatibility.
pkg/api/provider.go-27-31 (1)
27-31: ⚠️ Potential issue | 🟠 Major
Describe() is documenting an API that no longer exists.
The comments and route descriptions still say "seven currently wired" actions and 501 Not Implemented workspace endpoints, but RegisterRoutes mounts live handlers, and pkg/api/handlers_test.go, lines 24-174, already expects 200/400/404 behaviour. The request schema is stale as well: for example, the create-workspace tests post {"workspace":"alice"}, while the description still requires identifier and password. If Describe() feeds generated docs or clients, they'll be wrong out of the box.
Also applies to: 80-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/provider.go` around lines 27-31: update Describe() so its text and route/schema descriptions match the behaviour implemented by RegisterRoutes and exercised by pkg/api/handlers_test.go. Remove the stale "seven currently wired" wording and any references to 501 endpoints, change response codes to reflect 200/400/404 where appropriate, and update request body schemas (e.g. accept "workspace" as the tests use, instead of "identifier" and "password"); make all route descriptions in the affected block (around lines 80-177) consistent with the live handlers and tests.
actions_test.go-150-154 (1)
150-154: ⚠️ Potential issue | 🟠 Major
Please don't bake an unsandboxed "/" default into the local-action tests.
This assertion effectively blesses listing an arbitrary host path with no explicit root, which is the opposite of the filesystem boundary this package is supposed to enforce. Please either pass an explicit temp-rooted sandbox here or change the expectation so a missing root fails instead. As per coding guidelines, **/*_test.go: use io.NewMemoryMedium() or io.NewSandboxed(t.TempDir()) in unit tests. Never hit real S3/SQLite unless performing integration testing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@actions_test.go` around lines 150-154: the test c.Action(coreio.ActionLocalList).Run(...) currently allows an unsandboxed root (defaulting to "/"); change the test to use a sandboxed medium instead of relying on the implicit root. Update the options setup (core.NewOptions / core.Option{Key: "path", Value: tempDir}) to provide a sandboxed medium (e.g. io.NewSandboxed(t.TempDir()) or io.NewMemoryMedium()) or pass an explicit sandbox-rooted path, and adjust the assertion so that listing without an explicit root fails if you choose to enforce that, per the testing guidelines.
workspace/service.go-270-286 (1)
270-286: ⚠️ Potential issue | 🟠 Major
Use the package's single workspace-name resolver here.
This fallback logic no longer matches WorkspaceCommand.workspaceName() in workspace/command.go, lines 32-40, which prefers Workspace → WorkspaceID → Identifier. It also lets the create path fall through to CreateWorkspace(""), which deterministically hashes the empty identifier and creates or collides on the same workspace every time. Please reuse command.workspaceName() and reject blank names before calling CreateWorkspace or SwitchWorkspace.
Suggested fix
 switch command.Action {
 case WorkspaceCreateAction, legacyWorkspaceCreateAction:
-	passphrase := command.Password
-	identifier := command.Identifier
-	if identifier == "" {
-		identifier = command.Workspace
-	}
-	workspaceID, err := service.CreateWorkspace(identifier, passphrase)
+	identifier := command.workspaceName()
+	if identifier == "" {
+		return core.Result{}.New(core.E("workspace.HandleWorkspaceCommand", "workspace identifier is required", fs.ErrInvalid))
+	}
+	workspaceID, err := service.CreateWorkspace(identifier, command.Password)
 	if err != nil {
 		return core.Result{}.New(err)
 	}
 	return core.Result{Value: workspaceID, OK: true}
 case WorkspaceSwitchAction, legacyWorkspaceSwitchAction:
-	workspaceID := command.WorkspaceID
-	if workspaceID == "" {
-		workspaceID = command.Workspace
-	}
+	workspaceID := command.workspaceName()
+	if workspaceID == "" {
+		return core.Result{}.New(core.E("workspace.HandleWorkspaceCommand", "workspace id is required", fs.ErrInvalid))
+	}
 	if err := service.SwitchWorkspace(workspaceID); err != nil {
 		return core.Result{}.New(err)
 	}
 	return core.Result{OK: true}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspace/service.go` around lines 270-286: the create/switch branches use ad-hoc fallback ordering and can call CreateWorkspace(""). Resolve the workspace name via the command.workspaceName() helper, validate that the returned name is non-empty and return an error result for blank names, then pass the validated name to service.CreateWorkspace(identifier, passphrase) for WorkspaceCreateAction/legacyWorkspaceCreateAction and to service.SwitchWorkspace(workspaceID) for WorkspaceSwitchAction/legacyWorkspaceSwitchAction.
actions.go-82-85 (1)
82-85: ⚠️ Potential issue | 🟠 Major
Avoid double-registration of core.io.cube.* handlers.
These action names are also registered in cube/actions.go; registering both makes the active handler order-dependent and can silently change behaviour.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@actions.go` around lines 82-85: this file double-registers the core.io.cube handlers. Remove the redundant registrations by deleting or disabling the c.Action calls for ActionCubeRead/ActionCubeWrite/ActionCubePack/ActionCubeUnpack (the lines registering cubeReadAction, cubeWriteAction, cubePackAction, cubeUnpackAction) so the single authoritative registration in cube/actions.go remains; if a runtime guard is needed instead, wrap these calls in a conditional that only registers when no existing handler for those ActionCube* names exists.
workspace/workspace.go-24-29 (1)
24-29: ⚠️ Potential issue | 🟠 Major
Avoid implicit fallback to global local storage.
Defaulting a nil medium to coreio.Local can unintentionally route workspace data to the host filesystem. Require explicit medium injection (or an explicitly sandboxed default).
As per coding guidelines, use io.NewSandboxed(root) to define filesystem boundaries; the directory becomes an immutable root where all paths are relative. Never attempt to escape the sandbox root.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspace/workspace.go` around lines 24-29: the code silently falls back to coreio.Local when medium is nil, which risks routing data to the host filesystem. Remove the implicit fallback and require an explicit medium: if medium == nil, return an error (using core.E with context such as "workspace.NewWorkspace" and fs.ErrInvalid), or accept a sandboxed default by requiring the caller to pass io.NewSandboxed(root); update the initialization around the medium variable to enforce non-nil and document that callers must pass io.NewSandboxed(...) when a filesystem medium is intended.
pkg/medium/github/github.go-93-108 (1)
93-108: ⚠️ Potential issue | 🟠 Major — Token file reads bypass the `io.Medium` abstraction.

This path uses direct `os`/`filepath` file access instead of the project's Medium boundary model. As per coding guidelines: all data access in the CoreGO ecosystem MUST go through the io.Medium interface. Never use raw os, filepath, or ioutil calls directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/medium/github/github.go` around lines 93 - 108: tokenFromEnvironment currently uses os.Getenv, os.UserHomeDir, filepath.Join and os.ReadFile, which bypass the io.Medium abstraction. Change tokenFromEnvironment to obtain environment and filesystem data through the project's io.Medium (either by adding an io.Medium parameter or using the existing Medium instance) and replace the os.Getenv/os.UserHomeDir/filepath.Join/os.ReadFile calls with the corresponding io.Medium methods (e.g., Medium.Getenv, Medium.UserHomeDir, Medium.JoinPath or PathJoin, and Medium.ReadFile) so all env and file access goes through the Medium interface.

actions.go-505-515 (1)
505-515: ⚠️ Potential issue | 🟠 Major — Reject traversal paths from tar headers before writing.

Current extraction accepts names like `../x` after only trimming `/`, which can break destination path guarantees on permissive media.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@actions.go` around lines 505 - 515: reject tar header names that attempt directory traversal before calling destination.WriteMode. Use path.Clean (from the path package, since tar headers use forward slashes) on header.Name (after trimming the leading "/"), then reject any cleaned name that is empty, equals "." or starts with ".." or contains "/.." (this covers names like "../x" or "a/../../b"), and also ensure absolute paths are not allowed. Keep the existing checks for trailing "/" (core.HasSuffix(name, "/")) and default mode handling, and only call destination.WriteMode(name, string(content), mode) for names that pass the traversal checks.

cube/cube.go-363-366 (1)
363-366: ⚠️ Potential issue | 🟠 Major — Do not suppress source listing errors in archive walks.

Line 365 returns success on `List` failure, which can silently produce incomplete cube archives.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cube/cube.go` around lines 363 - 366: the code suppresses errors from source.List by returning nil on failure. Change the handling so listing errors are propagated instead of treated as "nothing to archive": locate the entries, err := source.List(path) call and replace the early return with returning the error (or wrapping it with context via fmt.Errorf or errors.Wrap) so the caller receives the failure. Ensure the surrounding function (the one that performs the archive walk) returns an error and update callers if needed to handle the propagated error.

cube/cube.go-426-435 (1)
426-435: ⚠️ Potential issue | 🟠 Major — Sanitise tar entry paths before writing them to destination.

`extractTarToMedium` does not reject traversal segments (e.g. `../`), so malicious archives can violate path expectations on permissive media.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cube/cube.go` around lines 426 - 435: sanitise the tar entry path before writing. After computing name := core.TrimPrefix(header.Name, "/") and before the HasSuffix check or destination.WriteMode call in extractTarToMedium, reject any absolute paths, any paths containing traversal segments (e.g. ".." components) or cleaned paths that escape the intended root (use path.Clean and verify it does not start with ".." or contain "/../"), and return a core.E("cube.extract", core.Concat("invalid tar entry path: ", name), nil) (or similar) when rejected. Only proceed to call destination.WriteMode(name, ...) for validated, non-empty names.

actions.go-446-449 (1)
446-449: ⚠️ Potential issue | 🟠 Major — Propagate listing failures during archive walk.

Line 448 returns `nil` on `List` error, which can produce partial archives while reporting success.

Suggested fix

```diff
 func walkAndArchive(source Medium, archivePath string, tarWriter *tar.Writer) error {
 	entries, err := source.List(archivePath)
 	if err != nil {
-		return nil
+		return core.E("io.cube.archive", core.Concat("failed to list: ", archivePath), err)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@actions.go` around lines 446 - 449, The current archive walk swallows errors from source.List(archivePath) by returning nil, causing partial archives to be treated as success; change the error handling so that when entries, err := source.List(archivePath) returns a non-nil err you propagate that error (e.g., return err or wrap and return it) instead of returning nil. Locate the block that calls source.List with variables entries and archivePath and replace the nil return with an appropriate error return from the surrounding function so failures during listing fail the overall operation.
🟡 Minor comments (3)
mock.go-165-165 (1)
165-165: ⚠️ Potential issue | 🟡 Minor — `fs.FileInfo.Name()` should be the base name, not the full path.

Using full paths here breaks expected `fs.FileInfo` semantics for callers.

Proposed fix

```diff
-	return NewFileInfo(path, int64(len(content)), mt.mode, mt.modTime, false), nil
+	name := path
+	if idx := strings.LastIndex(name, "/"); idx >= 0 {
+		name = name[idx+1:]
+	}
+	return NewFileInfo(name, int64(len(content)), mt.mode, mt.modTime, false), nil
@@
-	return &MockFile{Reader: bytes.NewReader([]byte(content)), info: NewFileInfo(path, int64(len(content)), mt.mode, mt.modTime, false)}, nil
+	name := path
+	if idx := strings.LastIndex(name, "/"); idx >= 0 {
+		name = name[idx+1:]
+	}
+	return &MockFile{Reader: bytes.NewReader([]byte(content)), info: NewFileInfo(name, int64(len(content)), mt.mode, mt.modTime, false)}, nil
```

Also applies to: 181-181

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mock.go` at line 165: the fs.FileInfo returned by NewFileInfo is created with the full path (e.g., NewFileInfo(path, ...)), so fs.FileInfo.Name() returns the full path instead of just the base name. Update the NewFileInfo calls to pass the base name (use filepath.Base(path) or pathpkg.Base(path)) so Name() returns only the file's base name: change the occurrences where NewFileInfo(path, int64(len(content)), mt.mode, mt.modTime, false) is used (and the similar call at the other location) to NewFileInfo(filepath.Base(path), int64(len(content)), mt.mode, mt.modTime, false).

sigil/crypto_sigil_test.go-191-193 (1)
191-193:⚠️ Potential issue | 🟡 MinorUse
requirebefore dereferencingerr.Both negative cases call
err.Error()afterassert.ErrorIs. IfNewChaChaPolySigilever returnsnilunexpectedly, the test panics instead of failing cleanly.🧪 Proposed fix
_, err := NewChaChaPolySigil(key, nonce) - assert.ErrorIs(t, err, InvalidNonceError) + require.ErrorIs(t, err, InvalidNonceError) assert.Contains(t, err.Error(), "fixed-nonce []byte path removed; use PreObfuscator or nil") @@ _, err := NewChaChaPolySigil(key, "fixed nonce") - assert.ErrorIs(t, err, InvalidNonceError) + require.ErrorIs(t, err, InvalidNonceError) assert.Contains(t, err.Error(), "nonce must be PreObfuscator or nil")Also applies to: 202-204
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sigil/crypto_sigil_test.go` around lines 191 - 193: the test dereferences err with err.Error() after using assert.ErrorIs, which can panic if err is nil. Update the checks around NewChaChaPolySigil so you first assert the error is non-nil using require (e.g., require.ErrorIs(t, err, InvalidNonceError) or require.NotNil(t, err)) before calling err.Error(), and apply the same change to the analogous assertions around lines 202 - 204; reference NewChaChaPolySigil, InvalidNonceError and the err variable when making the replacement.

sigil/sigil_test.go-293-297 (1)
293-297:⚠️ Potential issue | 🟡 MinorFail fast before reading
err.Error().
assert.Errordoes not stop the test. If this path unexpectedly returnsnil, Line 296 panics and hides the real regression. Use arequireassertion before checking the message.🧪 Proposed fix
func TestSigil_NewSigil_ChaChaPoly1305RequiresKey_Bad(t *testing.T) { _, err := NewSigil("chacha20poly1305") - assert.Error(t, err) + require.Error(t, err) assert.Contains(t, err.Error(), "scheme requires key material; use NewChaChaPolySigil") }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sigil/sigil_test.go` around lines 293 - 297, In TestSigil_NewSigil_ChaChaPoly1305RequiresKey_Bad, change the non-fatal assertion to a fatal one: replace the assert.Error usage that checks the presence of an error from NewSigil("chacha20poly1305") with require.Error so the test fails immediately if err is nil before calling err.Error(); then keep the subsequent check that the error message contains "scheme requires key material; use NewChaChaPolySigil".
🧹 Nitpick comments (3)
tests/cli/io/Taskfile.yaml (1)
24-32: Move `-tags integration` before the package pattern for idiomatic Go style.

Both flag orderings work correctly in Go, but the build flag should precede the package pattern `./...` to follow the standard convention documented in `go test` usage. Placing build flags after packages is valid but non-idiomatic.

Suggested fix

```diff
-go test -count=1 ./... -run Integration -tags integration
+go test -count=1 -tags integration ./... -run Integration
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/io/Taskfile.yaml` around lines 24 - 32: update the go test command in the test-integration target so the build tag flag appears before the package pattern, i.e., place -tags integration directly before ./... so the ordering follows idiomatic Go flag conventions.

node/node.go (1)
352-362: Consider de-duplicating `sortDirEntriesByName`.

This helper is now duplicated in multiple packages with the same logic. A shared internal helper would reduce drift risk.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 352 - 362: the duplicate insertion-sort helper sortDirEntriesByName should be consolidated into a single shared internal utility. Extract the function (the insertion-sort logic that takes []fs.DirEntry and sorts by Name()) into a common internal package (e.g., internal/fsutils or pkg/util) and export a well-named function (e.g., SortDirEntriesByName); then update all files that currently define their own sortDirEntriesByName to import and call the shared SortDirEntriesByName and remove the duplicate implementations so there is one canonical implementation to maintain.

cube/cube_test.go (1)
249-261: Use the sandboxed medium helper for consistency with test patterns.

This test uses `local.New(tempDir)` directly, the only such instance in the test suite. The codebase already standardises on `coreio.NewSandboxed(t.TempDir())` elsewhere (as seen in `medium_test.go`), and the coding guidelines explicitly mandate this pattern for unit tests. Swapping to `coreio.NewSandboxed(tempDir)` aligns this test with established conventions.

♻️ Suggested refactor

```diff
-	sandbox, err := local.New(tempDir)
+	sandbox, err := coreio.NewSandboxed(tempDir)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cube/cube_test.go` around lines 249 - 261, TestCube_Pack_Good uses local.New(tempDir) directly instead of the project-standard sandbox helper; replace the local.New call with coreio.NewSandboxed(tempDir) (keep the tempDir usage and variable name sandbox) so the test creates a sandboxed medium consistent with other tests, update error handling if needed to match the NewSandboxed return signature, and ensure subsequent calls (sandbox.Exists) continue to work with the returned sandbox type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@actions.go`:
- Around line 398-402: The code currently constructs a host-root filesystem via
local.New("/") (creating localMedium) which bypasses sandboxing; replace
local.New("/") with io.NewSandboxed(sandboxRoot) and use that sandboxed medium
for all file operations (e.g., the localMedium used for WriteMode on outputPath
and the analogous read/write in the 410-417 block). Ensure sandboxRoot is the
intended immutable root (for example the action/workspace directory passed into
the routine), make all paths relative to that root and reject/normalize attempts
to escape (no ".." escapes), and keep existing error handling (return
core.E(...)) when creating the sandboxed medium fails.
- Around line 313-317: Remove the implicit default of "/" for the local action
root and require callers to provide a root; replace the call to local.New(root)
with io.NewSandboxed(root) (or the appropriate sandbox constructor) so the
filesystem is always sandboxed and paths are relative to that immutable root;
ensure you validate that opts.String("root") is non-empty and return an error
(or fail early) when it is missing instead of falling back to "/".
In `@cube/cube.go`:
- Around line 269-273: The Pack/Unpack/Open logic currently creates a filesystem
at host root via local.New("/") which allows escaping the sandbox; update these
to create a sandboxed filesystem using io.NewSandboxed(root) (where root is the
configured sandbox base) and use that returned medium for all file operations
(e.g. replace local.New("/") calls in Pack, Unpack, and Open with
io.NewSandboxed(root)), ensure paths like outputPath (and any input paths in the
other blocks) are treated as relative to the sandbox and do not attempt to join
or resolve outside the sandbox, and keep error handling the same but reference
the sandboxed medium variable (e.g. localMedium) for WriteMode/Read/Open calls.
In `@pkg/api/handlers.go`:
- Around line 229-233: The handler currently accepts client-supplied req.Root
and passes it to coreio.NewSandboxed, which allows callers to pick arbitrary
filesystem roots; replace this by sourcing the sandbox root from trusted server
configuration or an allow-list (e.g., a configured ServerConfig.SandboxRoot or
AllowedRoots map) instead of using req.Root, remove or ignore the
strings.TrimSpace(req.Root) path-check and the call
coreio.NewSandboxed(req.Root), validate that the configured/allowed root exists
and is safe, call coreio.NewSandboxed with that trusted root, and return a clear
error if no configured root is available or the requested allowed identifier is
not permitted.
In `@pkg/api/provider.go`:
- Around line 9-13: The build fails because the imported module alias goapi
(dappco.re/go/api v0.8.0-alpha.1) referenced in pkg/api/provider.go has no
checksum in go.sum; run module tooling from the repository root to populate
go.sum (e.g. run go get dappco.re/go/api@v0.8.0-alpha.1 or simply go mod tidy)
so the checksum entry for dappco.re/go/api is added, then commit the updated
go.sum so imports like the goapi alias in pkg/api/provider.go resolve during CI
and local builds.
---
Outside diff comments:
In `@s3/s3.go`:
- Around line 472-479: The code currently treats a failure reading out.Body as
if the object were empty (leaving existing as an empty slice); instead, when
medium.client.GetObject succeeds but readAllString(out.Body) returns an error,
propagate that read error (do not overwrite existing), e.g. set/return the
readErr so the caller/Write flow fails rather than silently using empty
existing; locate the GetObject call and the readAllString usage around
medium.client.GetObject and out.Body, ensure out.Body is closed and that read
errors are returned or assigned to err (not ignored), so existing is only
populated on a successful read.
In `@store/store.go`:
- Around line 12-16: The NotFoundError sentinel should be a plain errors.New
value rather than core.E(...) so callers can reliably use errors.Is identity
checks; replace the current var NotFoundError = core.E("store", "key not found",
nil) with var NotFoundError = errors.New("store: key not found"), update imports
to include the standard "errors" package and remove the use of core.E, and keep
the existing doc comment referencing errors.Is and the NotFoundError symbol.
---
Major comments:
In `@actions_test.go`:
- Around line 150-154: The test `c.Action(coreio.ActionLocalList).Run(...)` is
currently allowing an unsandboxed root (defaults to "/"); change the test to use
a sandboxed medium instead of relying on the implicit root. Update the call that
builds options (the `core.NewOptions` / `core.Option{Key: "path", Value:
tempDir}` setup) to provide a sandboxed medium (e.g.,
`io.NewSandboxed(t.TempDir())` or `io.NewMemoryMedium()`) or pass an explicit
sandbox-rooted path, and adjust the assertion so that listing without an
explicit `root` fails if you choose to enforce that; ensure you reference and
modify the test around the `ActionLocalList` invocation to supply the sandboxed
medium per testing guidelines.
In `@actions.go`:
- Around line 82-85: This file double-registers the core.io.cube handlers:
remove the redundant registrations in actions.go by deleting or disabling the
c.Action calls for
ActionCubeRead/ActionCubeWrite/ActionCubePack/ActionCubeUnpack (the lines that
register cubeReadAction, cubeWriteAction, cubePackAction, cubeUnpackAction) so
that the single authoritative registration in cube/actions.go remains; if you
need a runtime guard instead, wrap these calls in a clear conditional or lookup
that only registers when no existing handler for those ActionCube* names exists.
- Around line 505-515: Reject tar header names that attempt directory traversal
before calling destination.WriteMode: use path.Clean (from the path package,
since tar headers use forward slashes) on header.Name (after trimming leading
"/"), then reject any cleaned name that is empty, equals "." or starts with ".."
or contains "/.." (this covers names like "../x" or "a/../../b"), and also
ensure absolute paths are not allowed; keep the existing checks for trailing "/"
(core.HasSuffix(name, "/")) and default mode handling, and only call
destination.WriteMode(name, string(content), mode) for names that pass the
traversal checks.
- Around line 446-449: The current archive walk swallows errors from
source.List(archivePath) by returning nil, causing partial archives to be
treated as success; change the error handling so that when entries, err :=
source.List(archivePath) returns a non-nil err you propagate that error (e.g.,
return err or wrap and return it) instead of returning nil. Locate the block
that calls source.List with variables entries and archivePath and replace the
nil return with an appropriate error return from the surrounding function so
failures during listing fail the overall operation.
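The traversal check described above can be sketched as a small helper (name `safeTarName` is illustrative, not from the code under review; it uses `path.Clean` rather than the project's core wrappers):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// safeTarName normalises a tar header name and reports whether it is safe
// to extract. Tar names use forward slashes, so path.Clean (not
// filepath.Clean) is the right normaliser.
func safeTarName(name string) (string, bool) {
	name = strings.TrimPrefix(name, "/") // disallow absolute paths
	cleaned := path.Clean(name)
	if cleaned == "" || cleaned == "." || cleaned == ".." ||
		strings.HasPrefix(cleaned, "../") {
		return "", false // empty, current-dir, or escapes the root
	}
	return cleaned, true
}

func main() {
	for _, n := range []string{"ok/file.txt", "../x", "a/../../b", "/etc/passwd"} {
		clean, ok := safeTarName(n)
		fmt.Printf("%q -> %q %v\n", n, clean, ok)
	}
}
```

Note that `path.Clean` collapses interior `..` segments first, so `a/../../b` becomes `../b` and is caught by the same prefix check as a literal `../x`.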
In `@cube/actions.go`:
- Around line 39-42: The new registrations for ActionRead/ActionWrite (and
similarly for pack/unpack) replace the existing core.io.cube.read/write contract
by only accepting {inner, key}; update the handlers (readAction, writeAction,
packAction, unpackAction) to accept both the original options shape (including a
prebuilt medium) and the new {inner,key} shape or delegate to the shared
existing implementation used by the registry in actions_test.go so the contract
is not narrowed; locate the registrations c.Action(ActionRead, readAction) /
c.Action(ActionWrite, writeAction) and adjust the corresponding handler
functions to detect and handle both option forms (or call the prior common
handler) and preserve backward compatibility.
In `@cube/cube.go`:
- Around line 363-366: The code is suppressing errors from source.List by
returning nil on failure; change the handling so listing errors are propagated
instead of treated as "nothing to archive." Locate the entries, err :=
source.List(path) call and replace the early return with returning the error (or
wrapping it with context via fmt.Errorf or errors.Wrap) so the caller receives
the failure; ensure the surrounding function signature (the function that
performs the archive walk) returns an error and update callers if needed to
handle the propagated error.
- Around line 426-435: Sanitise the tar entry path before writing: after
computing name := core.TrimPrefix(header.Name, "/") and before the HasSuffix
check or destination.WriteMode call in extractTarToMedium, reject any absolute
paths, any paths containing traversal segments (e.g. ".." components) or cleaned
paths that escape the intended root (use path.Clean and verify it does not start
with ".." or contain "/../"), and return a core.E("cube.extract",
core.Concat("invalid tar entry path: ", name), nil) (or similar) when rejected;
only proceed to call destination.WriteMode(name, ...) for validated, non-empty,
non-directory-safe names.
In `@mock.go`:
- Around line 109-118: The Rename method on MockMedium is copying the file data
but not moving its metadata, causing mode/modtime to be lost; update
MockMedium.Rename to also move the entry in m.meta (e.g., set m.meta[newPath] =
m.meta[oldPath] and delete m.meta[oldPath]) inside the existing mutex-protected
block, and handle the case where no metadata exists (do nothing) so metadata is
preserved after renames.
- Around line 90-95: MockMedium.Delete currently always returns nil even if the
path wasn't present, which hides caller errors and diverges from other Medium
implementations; update MockMedium.Delete (the method on type MockMedium that
uses m.mu, m.Files) to check if the key exists before deleting and return a
not-found error (e.g., os.ErrNotExist or a clear fmt.Errorf) when the path is
missing, otherwise perform the delete and return nil.
- Around line 97-106: The DeleteAll implementation only deletes file bodies and
always returns nil, leaving stale directory entries and not signaling missing
paths; update MockMedium.DeleteAll to (1) check whether the target path exists
in either m.Files or m.dirs and return os.ErrNotExist if it does not, (2) delete
all file keys that equal path or start with path+"/", (3) remove any directory
entries in m.dirs that equal or are prefixed by path+"/", and (4) also remove
any associated metadata maps (e.g., m.meta or similar metadata fields) for those
files/dirs; keep the existing m.mu lock/unlock and return nil only on successful
deletion.
In `@pkg/api/provider.go`:
- Around line 27-31: Update Describe() so its text and route/schema descriptions
match the actual behaviour implemented by RegisterRoutes and exercised by
pkg/api/handlers_test.go: remove the stale "seven currently wired" wording and
any references to 501 endpoints, change response codes to reflect 200/400/404
where appropriate, and update request body schemas (e.g., accept "workspace" as
used in tests instead of "identifier" and "password"); ensure all route
descriptions between the Describe() start and the affected block (around the
previous 80-177 region) are consistent with the live handlers and tests.
In `@pkg/medium/github/github.go`:
- Around line 93-108: tokenFromEnvironment currently uses os.Getenv,
os.UserHomeDir, filepath.Join and os.ReadFile which bypass the io.Medium
abstraction; change tokenFromEnvironment to obtain environment and filesystem
data through the project's io.Medium (either by adding an io.Medium parameter or
using the existing Medium instance) and replace
os.Getenv/os.UserHomeDir/filepath.Join/os.ReadFile calls with the corresponding
io.Medium methods (e.g., Medium.Getenv, Medium.UserHomeDir, Medium.JoinPath or
PathJoin, and Medium.ReadFile) so all env and file access goes through the
Medium interface.
In `@pkg/medium/webdav/webdav.go`:
- Around line 67-70: The current fallback assignment uses http.DefaultClient
(client := options.Client; if client == nil { client = http.DefaultClient })
which has no timeout and can hang; change the fallback to construct a dedicated
*http.Client with a sensible Timeout (e.g. 15–30s) and assign that to client
when options.Client is nil so all webdav operations (Read, Write, Delete, Stat,
List, Rename) use a timed client; ensure you only replace the nil-case
(options.Client == nil) and reuse the provided options.Client unchanged.
In `@s3/actions.go`:
- Around line 42-67: The readAction and writeAction handlers currently only
check the type assertion for opts.Get("medium").Value.(*Medium) but don't guard
against a typed nil Medium, which will panic when calling Read/Write; after the
type assertion in both readAction and writeAction, add an explicit nil check
(e.g., if medium == nil) and return a core.Result error using
core.E("s3.readAction", "medium is required", fs.ErrInvalid) for readAction and
core.E("s3.writeAction", "medium is required", fs.ErrInvalid) for writeAction so
that a nil *Medium is treated as invalid rather than causing a panic.
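The typed-nil hazard behind this finding is worth spelling out: a type assertion on an interface holding a nil `*Medium` succeeds, so `ok` alone is not enough. A minimal sketch (the `Medium` struct and helper name are placeholders for the s3 types):

```go
package main

import "fmt"

// Medium stands in for the s3 *Medium referenced in the review.
type Medium struct{ bucket string }

// mediumFromOption guards both failure modes: wrong dynamic type AND a
// typed nil pointer that would panic when its methods touch fields.
func mediumFromOption(v any) (*Medium, error) {
	m, ok := v.(*Medium)
	if !ok || m == nil {
		return nil, fmt.Errorf("s3 action: medium is required")
	}
	return m, nil
}

func main() {
	var typedNil *Medium
	_, err := mediumFromOption(any(typedNil)) // assertion succeeds, value is nil
	fmt.Println(err != nil)                   // true: guarded instead of panicking later
	m, err := mediumFromOption(&Medium{bucket: "b"})
	fmt.Println(m != nil, err == nil) // true true
}
```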
In `@s3/s3.go`:
- Around line 72-85: readAllString currently takes any and uses core.ReadAll
without closing the underlying body, leaking HTTP connections; change
readAllString to accept an io.ReadCloser (or io.Reader plus optional closer),
call core.ReadAll to get bytes/strings, and ensure you always close the
ReadCloser (defer rc.Close()) before returning, propagating any read errors as
returned errors (use fs.ErrInvalid only for type assertions). In the Append
method, stop swallowing read errors from readAllString: call readAllString on
the GetObject body, check the returned error and return that error (or wrap it
with context) instead of treating existing as empty, so a failed read does not
overwrite object data.
In `@workspace/service.go`:
- Around line 270-286: The create/switch action branches currently use ad-hoc
fallback ordering and can call CreateWorkspace(""); replace those fallbacks by
calling the command.workspaceName() helper to resolve the workspace name,
validate the returned name is non-empty and return an error/result for blank
names, then pass that validated name into service.CreateWorkspace(identifier,
passphrase) for WorkspaceCreateAction/legacyWorkspaceCreateAction and into
service.SwitchWorkspace(workspaceID) for
WorkspaceSwitchAction/legacyWorkspaceSwitchAction; reference the functions
command.workspaceName(), CreateWorkspace, and SwitchWorkspace when making the
change.
In `@workspace/workspace.go`:
- Around line 24-29: The code currently silently falls back to coreio.Local when
medium is nil (in NewWorkspace/clipboard where medium variable is set), which
risks routing data to the host FS; remove that implicit fallback and require an
explicit medium: if medium == nil return an error (use core.E with context like
"workspace.NewWorkspace" and fs.ErrInvalid) or accept a sandboxed default by
requiring the caller to pass io.NewSandboxed(root) — update the initialization
around the medium variable to enforce non-nil and document that callers must
pass io.NewSandboxed(...) when a filesystem medium is intended.
---
Minor comments:
In `@mock.go`:
- Line 165: The fs.FileInfo returned by NewFileInfo is being created with the
full path (e.g., NewFileInfo(path,...)) so fs.FileInfo.Name() will return the
full path instead of just the base name; update the NewFileInfo calls to pass
the base name (use filepath.Base(path) or pathpkg.Base(path)) so Name() returns
only the file's base name—change the occurrences where NewFileInfo(path,
int64(len(content)), mt.mode, mt.modTime, false) is used (and the similar call
at the other location) to NewFileInfo(filepath.Base(path), int64(len(content)),
mt.mode, mt.modTime, false).
In `@sigil/crypto_sigil_test.go`:
- Around line 191-193: The test dereferences err with err.Error() after using
assert.ErrorIs which can panic if err is nil; update the checks around
NewChaChaPolySigil so you first assert the error is non-nil using require (e.g.,
require.ErrorIs(t, err, InvalidNonceError) or require.NotNil(t, err)) before
calling err.Error(), and apply the same change to the analogous assertions
around lines 202-204; reference NewChaChaPolySigil, InvalidNonceError and the
err variable when making the replacement.
In `@sigil/sigil_test.go`:
- Around line 293-297: In TestSigil_NewSigil_ChaChaPoly1305RequiresKey_Bad,
change the non-fatal assertion to a fatal one: replace the assert.Error usage
that checks the presence of an error from NewSigil("chacha20poly1305") with
require.Error so the test fails immediately if err is nil before calling
err.Error(); then keep the subsequent check that the error message contains
"scheme requires key material; use NewChaChaPolySigil".
---
Nitpick comments:
In `@cube/cube_test.go`:
- Around line 249-261: TestCube_Pack_Good uses local.New(tempDir) directly
instead of the project-standard sandbox helper; replace the local.New call with
coreio.NewSandboxed(tempDir) (keep the tempDir usage and variable name sandbox)
so the test creates a sandboxed medium consistent with other tests, update error
handling if needed to match the NewSandboxed return signature, and ensure
subsequent calls (sandbox.Exists) continue to work with the returned sandbox
type.
In `@node/node.go`:
- Around line 352-362: Duplicate insertion-sort helper sortDirEntriesByName
should be consolidated into a single shared internal utility: extract the
function (the insertion-sort logic that takes []fs.DirEntry and sorts by Name())
into a common internal package (e.g., internal/fsutils or pkg/util) and export a
well-named function (e.g., SortDirEntriesByName); then update all files that
currently define their own sortDirEntriesByName to import and call the shared
SortDirEntriesByName and remove the duplicate implementations so there is one
canonical implementation to maintain.
In `@tests/cli/io/Taskfile.yaml`:
- Around line 24-32: Update the go test command in the test-integration Taskfile
target so the build tag flag appears before the package pattern: modify the
command using go test (in the test-integration block) to place -tags integration
directly before ./... (i.e., change the ordering so "-tags integration" precedes
the package pattern) to follow idiomatic Go flag ordering.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 57a62775-c9da-4bc7-8151-a00805f6614b
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (48)
actions.go, actions_test.go, cube/actions.go, cube/actions_test.go, cube/cube.go, cube/cube_test.go, datanode/medium.go, datanode/medium_test.go, go.mod, io.go, local/medium.go, mock.go, node/node.go, node/node_test.go, pkg/api/handlers.go, pkg/api/handlers_test.go, pkg/api/provider.go, pkg/api/provider_test.go, pkg/medium/github/github.go, pkg/medium/github/github_test.go, pkg/medium/github/register.go, pkg/medium/pwa/pwa.go, pkg/medium/pwa/pwa_test.go, pkg/medium/pwa/register.go, pkg/medium/sftp/register.go, pkg/medium/sftp/sftp.go, pkg/medium/sftp/sftp_test.go, pkg/medium/webdav/register.go, pkg/medium/webdav/webdav.go, pkg/medium/webdav/webdav_test.go, s3/actions.go, s3/actions_test.go, s3/s3.go, s3/s3_test.go, sigil/crypto_sigil.go, sigil/crypto_sigil_test.go, sigil/sigil_test.go, sigil/sigils.go, sqlite/sqlite.go, store/medium.go, store/store.go, tests/cli/io/Taskfile.yaml, workspace/command.go, workspace/command_test.go, workspace/service.go, workspace/service_test.go, workspace/workspace.go, workspace/workspace_test.go
28 files modified, +936/-767. 80+ findings dispositioned.
Security/correctness (CodeRabbit):
- actions.go: local root no longer defaults to /; cube archive I/O
scoped properly (was using host root); tar traversal blocked
- cube/cube.go: Pack/Unpack/Open use scoped root, not host;
archive walk no longer swallows list errors; tar traversal blocked
- pkg/api/handlers.go: client-controlled local sandbox root closed
- pkg/api/provider.go: dappco.re/go/api checksum fixed; stale
Describe() docs updated
- store/store.go: sentinel uses core.E (not generic error)
- s3/s3.go: append no longer swallows body read failures;
readAllString no longer leaks response body
- s3/actions.go: typed nil *Medium panic guarded
- mock.go: Rename preserves metadata; Delete reports missing paths;
DeleteAll cleans state; FileInfo.Name() returns base name not path
- webdav.go: default HTTP client now has timeout
- workspace/workspace.go: nil medium no longer falls back to global local
- github.go: token file goes through io.Medium
- node/node.go: duplicate sort helper removed
- workspace/service.go: ad-hoc workspace name fallback replaced
Tests + AX:
- actions_test.go: unsandboxed local-list expectation fixed
- sigil/{crypto_sigil,sigil}_test.go: nil error deref guards (no testify)
- cube/cube_test.go: indirected through io.Medium (not direct local.New)
- Taskfile.yaml: idiomatic go test flag order
SonarCloud (50 findings, all FIXED):
- 9 actions.go duplicate literals + cognitive complexity
- 8 cube/cube.go duplicate literals + cognitive complexity
- 1 cube/actions.go repeated key error literal
- 1 mock.go cognitive complexity
- 4 pkg/api/handlers.go TODO + duplicate branch + complexity
- 5 pkg/medium/github.go duplicate literals
- 8 pkg/medium/sftp.go duplicate literals
- 9 pkg/medium/webdav.go duplicate literals
- 1 sigil/sigils.go duplicate gzip literal
- 4 workspace/workspace.go duplicate literals
GHAS: 0 code-scanning + dependabot findings — RESOLVED-COMMENT.
Verification: gofmt clean, GOWORK=off go vet + go test -count=1 ./...
pass with explicit GOPATH/GOMODCACHE/GOCACHE.
Closes findings on #3
Co-authored-by: Codex <noreply@openai.com>
Routine dev→main PR opened by the Hephaestus PR-cadence task. This PR exists to:
- ahead_by: 21 commit(s) (per gh api compare)

If CodeRabbit clears with no blocking comments, this PR is eligible for gh pr merge --merge (real merge commit, no force-push). Conflicts and review feedback should be addressed on the dev branch before merge.

Co-authored-by: Hephaestus <hephaestus@cladius>