Conversation
Co-Authored-By: Virgil <virgil@lethean.io>
Replace all fmt.Errorf/errors.New calls in auth/auth.go, crypt/pgp/pgp.go, crypt/rsa/rsa.go, crypt/chachapoly/chachapoly.go, and trust/trust.go with coreerr.E(op, msg, err) from go-log. No stale pkg/framework imports found. Co-Authored-By: Virgil <virgil@lethean.io>
Go 1.26 rsa.GenerateKey recovers from reader errors internally. Test now verifies no panic instead of expecting an error. Co-Authored-By: Virgil <virgil@lethean.io>
…ions Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
@coderabbitai review
✅ Actions performed
Review triggered.
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough
Refactors error construction across many packages to use the centralized core error wrapper (coreerr.E).
Changes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📝 Coding Plan
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
auth/auth.go (1)
647-664: ⚠️ Potential issue | 🟠 Major
Fail closed when the Argon2id hash exists but cannot be read or parsed.
This branch currently falls through to the legacy `.lthn` verifier on any `.hash` read or format failure. If a stale `.lthn` file is still present, an old password can authorise in `verifyPassword` instead of being rejected.
Proposed fix
```diff
 // Try Argon2id hash first (.hash file)
 if a.medium.IsFile(userPath(userID, ".hash")) {
 	storedHash, err := a.medium.Read(userPath(userID, ".hash"))
-	if err == nil && strings.HasPrefix(storedHash, "$argon2id$") {
-		valid, verr := crypt.VerifyPassword(password, storedHash)
-		if verr != nil {
-			return coreerr.E(op, "failed to verify password", nil)
-		}
-		if !valid {
-			return coreerr.E(op, "invalid password", nil)
-		}
-		return nil
+	if err != nil {
+		return coreerr.E(op, "failed to read password hash", err)
 	}
+	if !strings.HasPrefix(storedHash, "$argon2id$") {
+		return coreerr.E(op, "invalid password hash format", nil)
+	}
+	valid, verr := crypt.VerifyPassword(password, storedHash)
+	if verr != nil {
+		return coreerr.E(op, "failed to verify password", verr)
+	}
+	if !valid {
+		return coreerr.E(op, "invalid password", nil)
+	}
+	return nil
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@auth/auth.go` around lines 647 - 664, The current logic allows falling back to legacy `.lthn` verification when a `.hash` file exists but cannot be read or isn't in the expected argon2id format; change verifyPassword handling so that if a `.hash` file exists you fail closed on read or format errors instead of falling back. In the block that checks a.medium.IsFile(userPath(userID, ".hash")), ensure that a read error or missing "$argon2id$" prefix returns a clear error via coreerr.E (using op) rather than continuing to the `.lthn` check; keep using crypt.VerifyPassword for successful reads and only proceed to read `.lthn` when the `.hash` file does not exist.
🧹 Nitpick comments (2)
cmd/crypt/cmd_encrypt.go (1)
56-60: Use coreio streaming APIs instead of full-buffer string conversions.
Lines 56/60 and 90/94 load entire files into memory via `Read()` (returns `string`), then convert to `[]byte`. Lines 73 and 111 convert the result back to `string` for `Write()`. This causes unnecessary allocations for each read and write, particularly problematic with large files.
The `coreio.Local` library exposes streaming methods, `ReadStream()` and `WriteStream()` (or `Open()` and `Create()`), which return `io.ReadCloser` and `io.WriteCloser` respectively. These are designed for binary payloads and can be used with `io.Copy` or streamed cipher operations, avoiding full-buffer conversions entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/crypt/cmd_encrypt.go` around lines 56 - 60, The code currently uses coreio.Local.Read (returning string) and string-to-[]byte conversions (variables raw, data) and later writes with string conversions, causing full-buffer allocations; replace those calls with the streaming APIs (coreio.Local.ReadStream / Open / Create / WriteStream) to obtain io.ReadCloser/io.WriteCloser for the same `path` and stream data with io.Copy or streaming cipher operations instead of converting to string/[]byte. Locate the read sites (uses of coreio.Local.Read, variables named raw/data) and the write sites that convert back to string, open a reader and writer via the coreio streaming methods, defer Close(), and pipe data through the encrypt/decrypt stream with io.Copy to avoid loading the entire file into memory. Ensure binary-safe streaming (no string conversions) and proper error handling/closing for the reader and writer.
crypt/chachapoly/chachapoly.go (1)
37-37: Consider wrapping other error returns for consistency.
The validation error is now wrapped with `coreerr.E`, but other error returns in this function (lines 17, 22, 32, 44) still return raw errors. For consistent error context across the codebase, consider wrapping these as well.
That said, the current change is correct and aligns with the PR's standardisation effort.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crypt/chachapoly/chachapoly.go` at line 37, Wrap all raw error returns in the chachapoly.Decrypt function using coreerr.E for consistency: locate the Decrypt function and replace each direct error return (the checks for nil key, invalid nonce size, aead.New or aead.Open failures, and any other early returns referenced) with coreerr.E("chachapoly.Decrypt", "<descriptive message>", err) preserving the original error as the third argument; keep the same descriptive messages as the existing ciphertext-too-short message style so all errors from Decrypt use a consistent context and include the original error cause.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@trust/config.go`:
- Around line 38-43: The JSON decoder currently decodes one value but ignores
any trailing data; after calling dec.Decode(&cfg) in LoadPolicies (using
json.NewDecoder and dec.DisallowUnknownFields), attempt a second Decode into an
empty value and ensure it returns io.EOF — if it returns anything else
(including nil), return an error from LoadPolicies instead of proceeding to
convertPolicies(cfg); this guarantees rejection of any trailing JSON/text after
the first object.
---
Outside diff comments:
In `@auth/auth.go`:
- Around line 647-664: The current logic allows falling back to legacy `.lthn`
verification when a `.hash` file exists but cannot be read or isn't in the
expected argon2id format; change verifyPassword handling so that if a `.hash`
file exists you fail closed on read or format errors instead of falling back. In
the block that checks a.medium.IsFile(userPath(userID, ".hash")), ensure that a
read error or missing "$argon2id$" prefix returns a clear error via coreerr.E
(using op) rather than continuing to the `.lthn` check; keep using
crypt.VerifyPassword for successful reads and only proceed to read `.lthn` when
the `.hash` file does not exist.
---
Nitpick comments:
In `@cmd/crypt/cmd_encrypt.go`:
- Around line 56-60: The code currently uses coreio.Local.Read (returning
string) and string-to-[]byte conversions (variables raw, data) and later writes
with string conversions, causing full-buffer allocations; replace those calls
with the streaming APIs (coreio.Local.ReadStream / Open / Create / WriteStream)
to obtain io.ReadCloser/io.WriteCloser for the same `path` and stream data with
io.Copy or streaming cipher operations instead of converting to string/[]byte.
Locate the read sites (uses of coreio.Local.Read, variables named raw/data) and
the write sites that convert back to string, open a reader and writer via the
coreio streaming methods, defer Close(), and pipe data through the
encrypt/decrypt stream with io.Copy to avoid loading the entire file into
memory. Ensure binary-safe streaming (no string conversions) and proper error
handling/closing for the reader and writer.
In `@crypt/chachapoly/chachapoly.go`:
- Line 37: Wrap all raw error returns in the chachapoly.Decrypt function using
coreerr.E for consistency: locate the Decrypt function and replace each direct
error return (the checks for nil key, invalid nonce size, aead.New or aead.Open
failures, and any other early returns referenced) with
coreerr.E("chachapoly.Decrypt", "<descriptive message>", err) preserving the
original error as the third argument; keep the same descriptive messages as the
existing ciphertext-too-short message style so all errors from Decrypt use a
consistent context and include the original error cause.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 07101d56-53d7-455f-b527-05a84d5e2093
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (14)
auth/auth.go
auth/session_store.go
cmd/crypt/cmd_encrypt.go
cmd/testcmd/cmd_runner.go
crypt/chachapoly/chachapoly.go
crypt/pgp/pgp.go
crypt/rsa/rsa.go
crypt/rsa/rsa_test.go
go.mod
trust/approval.go
trust/audit.go
trust/config.go
trust/policy.go
trust/trust.go
```diff
 dec := json.NewDecoder(r)
 dec.DisallowUnknownFields()
 if err := dec.Decode(&cfg); err != nil {
-	return nil, fmt.Errorf("trust.LoadPolicies: %w", err)
+	return nil, coreerr.E("trust.LoadPolicies", "failed to decode JSON", err)
 }
 return convertPolicies(cfg)
```
🧩 Analysis chain
🌐 Web query:
In Go's encoding/json package, does Decoder.Decode accept trailing JSON values or bytes unless a second decode confirms io.EOF?
💡 Result:
Yes. (*json.Decoder).Decode reads one “next JSON-encoded value” from the stream and returns after decoding that value; it does not require the underlying io.Reader to be at io.EOF, so additional JSON values (or other non-whitespace bytes) can remain unread unless you explicitly check for them (commonly by calling Decode a second time and requiring io.EOF). The Decoder also buffers and may read past the end of the first value internally, but it still conceptually decodes one value per Decode call. [1], [2]
Sources:
[1] Go encoding/json docs (Decoder.Decode, NewDecoder) (pkg.go.dev)
[2] “Let’s Go Further” (example showing extra values/garbage are ignored unless a second Decode confirms io.EOF) (files.rileysnyder.dev)
Citations:
- 1: https://pkg.go.dev/encoding/json
- 2: https://files.rileysnyder.dev/media/books/Alex%20Edwards%20-%20Lets%20Go%20Further.pdf
Reject trailing JSON after the first config object.
Decoder.Decode only consumes one JSON value and does not require the underlying reader to be at io.EOF. A file starting with a valid policy document followed by extra JSON or text will be silently accepted, which is a security risk for a trust-policy loader.
Proposed fix

```diff
 dec := json.NewDecoder(r)
 dec.DisallowUnknownFields()
 if err := dec.Decode(&cfg); err != nil {
 	return nil, coreerr.E("trust.LoadPolicies", "failed to decode JSON", err)
 }
+var extra json.RawMessage
+if err := dec.Decode(&extra); err != io.EOF {
+	if err != nil {
+		return nil, coreerr.E("trust.LoadPolicies", "failed to decode trailing JSON", err)
+	}
+	return nil, coreerr.E("trust.LoadPolicies", "unexpected trailing JSON", nil)
+}
 return convertPolicies(cfg)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@trust/config.go` around lines 38 - 43, The JSON decoder currently decodes one
value but ignores any trailing data; after calling dec.Decode(&cfg) in
LoadPolicies (using json.NewDecoder and dec.DisallowUnknownFields), attempt a
second Decode into an empty value and ensure it returns io.EOF — if it returns
anything else (including nil), return an error from LoadPolicies instead of
proceeding to convertPolicies(cfg); this guarantees rejection of any trailing
JSON/text after the first object.
Forge → GitHub Sync
Commits: 5
Files changed: 15
Automated sync from Forge (forge.lthn.ai) to GitHub mirror.
Co-Authored-By: Virgil <virgil@lethean.io>
Summary by CodeRabbit
Chores
Refactor