feat(database): filename parser, tag counts, title strategies, and scanner improvements #718
Conversation
…icalization

Three related changes to the tag system:

1. Storage-only numeric padding: purely-numeric tag values are zero-padded to width 4 in SQLite (e.g. disc:1 → disc:0001) so ORDER BY sorts correctly. PadTagValue is applied at every DB write site; UnpadTagValue strips the padding at every read site. The public API, NFC tokens, and ZapScript all remain in natural form.

2. Net-new upstream tags from PigSaint/GameDataBase: keyboard, touchscreen, positional:4 (input); barcode namespace (addon); vibration:rumble and accelerometer (embedded); vicdual/g80/h1/model1-3/naomi and new manufacturers nichibutsu/taiyo/tecfri:ambush/tourvision (arcadeboard); gameboy:infrared and gameboy:gba (compatibility); archimedes/atari:falcon/sega:32x/nintendo:disksystem/nintendo:gameandwatch/wonderswan (port); vr and keyword:ubikey (search); comicclassics (reboxed); pcemini/ninjajajamaru/zeldacollection and 3dfukkoku:01/02 (rerelease); rev:f, set:f1/f2, alt:4/5/6 (range fills); ca (lang); ddrgb/fullchanger (addon controller); mobileadaptergb (link); glasses:mvd, led:powerantenna/bugsensor, pocketsakura, spectrumcommunicator (addon misc); seganet (reboxed).

3. Deprecated alias canonicalization: addon:barcodeboy rewrites to addon:barcode:barcodeboy; addon:controller:jcart to embedded:slot:jcart; addon:controller:rumble to embedded:vibration:rumble. Old NFC tokens and ZapScript files using the former names resolve transparently at query time via CanonicalizeTagAlias in resolveFilter.
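The padding rule in point 1 can be sketched as below. PadTagValue and UnpadTagValue are the names from the change, but the bodies here are an illustration of the described behavior, not the project's actual implementation:

```go
package main

import "strings"

// isASCIIDigits reports whether s is non-empty and contains only '0'-'9'.
func isASCIIDigits(s string) bool {
	if s == "" {
		return false
	}
	for i := 0; i < len(s); i++ {
		if s[i] < '0' || s[i] > '9' {
			return false
		}
	}
	return true
}

// PadTagValue zero-pads a purely-numeric tag value to width 4 for storage,
// so lexicographic ORDER BY matches numeric order ("1" -> "0001").
func PadTagValue(v string) string {
	if isASCIIDigits(v) && len(v) < 4 {
		return strings.Repeat("0", 4-len(v)) + v
	}
	return v
}

// UnpadTagValue strips the storage padding back to natural form on read.
func UnpadTagValue(v string) string {
	if isASCIIDigits(v) {
		trimmed := strings.TrimLeft(v, "0")
		if trimmed == "" {
			return "0"
		}
		return trimmed
	}
	return v
}
```

Non-numeric values like rev:f pass through both functions untouched, which is what keeps the public API and NFC tokens in natural form.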
parseCommaSeparatedTags was splitting "disc-1" on the dash into ["disc", "1"], returning only media:disc and dropping the number. The full allTagMappings["disc-1"] entry (which maps to both media:disc and disc:1) was never reached. Add a complementary-tag short-circuit in disambiguateTag: if the full normalized tag has an unambiguous multi-type mapping in allTagMappings, use it directly before the dash/comma splitter runs. The TOSEC "Disc X of Y" path (extractSpecialPatterns) was already correct; this fixes the No-Intro "(Disc N)" form used by PSX and other CD-era ROM sets.
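The short-circuit can be illustrated with a minimal sketch; the two-entry map below is a hypothetical stand-in for the real allTagMappings table, and the function body is an assumption based on the description above:

```go
package main

import "strings"

// allTagMappings is a stand-in for the real table; "disc-1" legitimately
// maps to two canonical tags at once (hypothetical subset of the real data).
var allTagMappings = map[string][]string{
	"disc-1": {"media:disc", "disc:1"},
	"disc":   {"media:disc"},
}

// disambiguateTag sketches the fix: if the full normalized token has a
// mapping in allTagMappings, use it before the dash splitter runs.
func disambiguateTag(raw string) []string {
	norm := strings.ToLower(raw)
	if mapped, ok := allTagMappings[norm]; ok {
		return mapped // short-circuit: full-token mapping wins
	}
	// Fallback: naive dash split (the old behavior, which dropped the "1").
	var out []string
	for _, part := range strings.Split(norm, "-") {
		if mapped, ok := allTagMappings[part]; ok {
			out = append(out, mapped...)
		}
	}
	return out
}
```

With the short-circuit, "Disc-1" yields both media:disc and disc:1; without it, the dash split only ever finds the "disc" entry.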
TrimRight was stripping the trailing period from "T.V." before the
initialism check ran, leaving only one letter-period pair and bypassing
the {2,} threshold. The roman numeral pass then saw a standalone "V"
and converted it to "5", producing supersmasht5 instead of supersmashtv.
CollapseDottedInitialisms now runs before TrimRight so the full pattern
(e.g. "T.V.") is intact when matched. Adds CollapseDottedInitialisms
helper and regex to slug_helpers.go with unit tests in stages_test.go,
and end-to-end regression cases in slugify_test.go.
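A sketch of the helper and its ordering constraint. The regex below is an assumption reconstructed from the description (two or more letter-period pairs, matching the {2,} threshold), not the actual pattern in slug_helpers.go:

```go
package main

import (
	"regexp"
	"strings"
)

// dottedInitialismRegex matches two or more letter-period pairs, e.g. "T.V."
// (assumed pattern; the real one lives in slug_helpers.go).
var dottedInitialismRegex = regexp.MustCompile(`\b(?:[A-Za-z]\.){2,}`)

// CollapseDottedInitialisms rewrites "T.V." -> "TV". Running this before any
// TrimRight of trailing periods means the full pattern is still intact, so a
// later roman-numeral pass never sees a stranded "V".
func CollapseDottedInitialisms(s string) string {
	return dottedInitialismRegex.ReplaceAllStringFunc(s, func(m string) string {
		return strings.ReplaceAll(m, ".", "")
	})
}
```

Note that a single letter-period pair like "Dr." does not meet the {2,} threshold and is left alone, which is the point of requiring at least two pairs.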
- Add ordinalCamelCaseRegex to split "2ndMix" → "2nd Mix" before suffix stripping
- NormalizeOrdinals now handles both "2nd MIX" (spaced) and "2ndMix" (camelCase), producing identical slugs
- Add 6 end-to-end test cases covering beatmania and DDR IGDB vs LaunchBox variants
- Add a non-regression test confirming "500thousand" is not affected
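A sketch of the camelCase-ordinal split described above. The pattern is an assumption based on the stated behavior (it must split "2ndMix" but leave "500thousand" alone); the real regex sits alongside NormalizeOrdinals:

```go
package main

import "regexp"

// ordinalCamelCaseRegex matches an ordinal (digits + st/nd/rd/th) fused
// directly to a capitalized word, e.g. "2ndMix". "500thousand" does not
// match because the character after "500th" is lowercase.
var ordinalCamelCaseRegex = regexp.MustCompile(`\b(\d+(?:st|nd|rd|th))([A-Z])`)

// splitOrdinalCamelCase inserts the missing space: "2ndMix" -> "2nd Mix".
func splitOrdinalCamelCase(s string) string {
	return ordinalCamelCaseRegex.ReplaceAllString(s, "$1 $2")
}
```

After this split, the spaced form "2nd MIX" and the camelCase form "2ndMix" normalize through the same downstream path and produce identical slugs.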
…ed series

When both the query and the DB entry have secondary titles and their secondary slugs match, but the main titles differ (e.g. "Touhou 06: X" vs "Touhou Koumakyou: X"), the existing strategies would fall through to fuzzy matching and score 0.75-0.80. The new TrySharedSecondaryTitle strategy (Strategy 4) catches these at 0.92 confidence. A first-token guard on the main title prevents cross-franchise collisions: "Touhou 09: Phantasmagoria of Flower View" will not match "Galaxy Quest: Phantasmagoria of Flower View" because "touhou" != "galaxy".

TrySecondaryTitleExact Case 2 (secondary slug search) is restricted to asymmetric queries where the input has no secondary title. Symmetric cases (both sides have secondary titles) now fall through to TrySharedSecondaryTitle, which applies the guard. Strategies 4-6 in the old pipeline become 5-7.
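The matching rule and its guard can be sketched as below. The strategy name and the 0.92 score come from the description; the function signature and helper are simplified assumptions, not the real strategy interface:

```go
package main

import "strings"

// firstToken returns the first whitespace-separated token of a title,
// lowercased for comparison.
func firstToken(s string) string {
	fields := strings.Fields(strings.ToLower(s))
	if len(fields) == 0 {
		return ""
	}
	return fields[0]
}

// sharedSecondaryMatch sketches TrySharedSecondaryTitle: both sides must
// have a secondary title, the secondary slugs must match, and the main
// titles must share their first token (the cross-franchise guard).
func sharedSecondaryMatch(qMain, qSecondary, dbMain, dbSecondary string) (float64, bool) {
	if qSecondary == "" || dbSecondary == "" {
		return 0, false // asymmetric: handled by an earlier strategy
	}
	if !strings.EqualFold(qSecondary, dbSecondary) {
		return 0, false
	}
	if firstToken(qMain) != firstToken(dbMain) {
		return 0, false // cross-franchise collision guard
	}
	return 0.92, true
}
```

So "Touhou 06" vs "Touhou Koumakyou" passes the guard (shared first token "touhou"), while "Galaxy Quest" vs "Touhou 09" is rejected even with identical secondary slugs.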
…parser

Add TOSEC positional publisher, GoodTools single-letter region codes (U/E/J/A), edition/release phrase recognition, disc Side A-D, and company-name promotion to credit:. Extend ZapScript disambiguation to include rev/developer/publisher/credit/edition/release. Add credit filter union semantics for the NOT/OR operators. Unify dynamic tag insertion for rev, developer, publisher, and credit. Extend the bracketed year range to 1950-2099 for early-computing-era TOSEC files.
Add a Count field to TagInfo tracking how many media entries use each tag, aggregated from both MediaTags and MediaTitleTags via UNION ALL + SUM. The media.tags response is now capped at 100 tags per type, sorted by popularity, so long-tail categories like credit: don't crowd out the others.

Fix a cache ordering bug in NewNamesIndex: UpdateLastGenerated was called after PopulateSystemTagsCache, causing invalidateCaches to immediately wipe the cache that was just built. Cache population now runs after UpdateLastGenerated.

Add a page_prefetch optimization step that warms the OS buffer cache for the search path tables after indexing. Reorder background optimization to run analyze first, giving the query planner accurate statistics before early searches arrive.
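The per-type cap can be sketched as follows. The TagInfo field names and the tie-breaking order (count descending, then tag ascending) are assumptions based on the description and the review's sequence diagram, not the project's exact implementation:

```go
package main

import "sort"

// TagInfo mirrors the shape described above (fields assumed).
type TagInfo struct {
	Type  string
	Tag   string
	Count int
}

// capTagsByCategory keeps at most limit tags per type, most-used first,
// breaking count ties alphabetically, and preserves first-seen type order.
func capTagsByCategory(tags []TagInfo, limit int) []TagInfo {
	byType := make(map[string][]TagInfo)
	var order []string
	for _, t := range tags {
		if _, seen := byType[t.Type]; !seen {
			order = append(order, t.Type)
		}
		byType[t.Type] = append(byType[t.Type], t)
	}
	var out []TagInfo
	for _, typ := range order {
		group := byType[typ]
		sort.Slice(group, func(i, j int) bool {
			if group[i].Count != group[j].Count {
				return group[i].Count > group[j].Count // popularity first
			}
			return group[i].Tag < group[j].Tag // stable alphabetical tie-break
		})
		if len(group) > limit {
			group = group[:limit]
		}
		out = append(out, group...)
	}
	return out
}
```

With a limit of 100 per type, a category like credit: with thousands of rows contributes only its 100 most-used tags to the media.tags response.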
- Reader instances from SupportedReaders that are not kept are now
explicitly closed in connectReaders and AutoDetector.DetectReaders,
preventing resource leaks on detection cycles.
- tty2oled defers the operationWorker goroutine start to Open so that
detection probes that never call Open do not leak goroutines.
- PN532 detection now distinguishes ErrNoDevicesFound (Trace) from
unexpected errors (Warn) instead of logging both the same way.
- Media indexing emits PhaseCreatingIndexes and PhaseBuildingCaches
status phases, splitting "Writing database" into labelled sub-steps
("Creating indexes", "Building search caches") with throttle exemptions.
- LauncherMatcher precomputes per-launcher normalized paths, root pairs,
and extensions at construction time to avoid repeated allocs in the
hot fastwalk callback path.
- ListZip replaced with a fast central-directory-only parser that avoids
archive/zip per-entry allocations; uses a pooled tail buffer.
- defaultFsTimeout increased from 10s to 30s for slow SD cards on ARM.
- BatchInserter statement cache changed from a single cachedStmt to a
map[int]*sql.Stmt keyed by row count, enabling reuse across batch sizes.
- Sentry telemetry writer gated at error level via thresholdWriter to
avoid JSON parsing overhead on every log call.
- ParseFilenameToCanonicalTagsForMedia accepts a MediaType and skips
patterns irrelevant to the given type.
- MatchSystemFile and pathIsLauncher log level demoted from Debug to Trace.
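The BatchInserter change in the list above can be sketched with simplified stand-in types (the real cache holds *sql.Stmt values; the struct and counter here are illustrative only):

```go
package main

// stmt is a stand-in for *sql.Stmt so the caching shape is visible
// without a live database.
type stmt struct{ rows int }

// batchInserter sketches the map[int]*sql.Stmt cache keyed by row count:
// a partial final batch gets its own statement instead of evicting the
// statement prepared for full-size batches.
type batchInserter struct {
	stmtCache map[int]*stmt
	prepared  int // counts how many times a "prepare" was needed
}

func (b *batchInserter) stmtFor(rowCount int) *stmt {
	if b.stmtCache == nil {
		b.stmtCache = make(map[int]*stmt)
	}
	if s, ok := b.stmtCache[rowCount]; ok {
		return s // cache hit: reuse across batches of the same size
	}
	b.prepared++
	s := &stmt{rows: rowCount}
	b.stmtCache[rowCount] = s
	return s
}
```

With a single cachedStmt, alternating between a full batch of 100 rows and a trailing batch of 37 rows would re-prepare on every flip; the keyed map prepares each size once.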
ℹ️ Review info: configuration used: defaults, review profile: CHILL, plan: Pro

📒 Files selected for processing (8)
📝 Walkthrough

This PR expands tag handling throughout the system by introducing per-tag usage counts, implementing storage padding for numeric values, adding new tag types and mappings, refactoring database queries to aggregate counts, capping long-tail tag results in API responses, adding a new title-resolution strategy, optimizing launcher matching, improving ZIP parsing, and extending slug normalization with dotted-initialism and ordinal handling.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client as Client/API
participant MediaAPI as Media API Handler
participant Cache as TagCache/SQL
participant TagCap as capTagsByCategory
participant Response as TagsResponse
Client->>MediaAPI: HandleMediaTags(system, filters)
MediaAPI->>Cache: GetSystemTagsCached(system)
Cache->>Cache: Query + Aggregate Counts
Cache-->>MediaAPI: []TagInfo{...}
MediaAPI->>TagCap: capTagsByCategory(tags, tagsPerCategoryLimit)
TagCap->>TagCap: Group by Type
TagCap->>TagCap: Sort by Count DESC, Tag ASC
TagCap->>TagCap: Truncate per-type to limit
TagCap-->>MediaAPI: []TagInfo (capped)
MediaAPI->>Response: Build response
Response-->>Client: TagsResponse{Tags:[...]}
sequenceDiagram
participant Scanner as MediaScanner
participant Indexing as Indexing Pipeline
participant DB as MediaDB
participant Cache as Cache/Index
Scanner->>Indexing: NewNamesIndex()
Indexing->>Indexing: SetNormalizeTagCache()
Indexing->>DB: Create secondary indexes
Indexing->>DB: UpdateLastGenerated()
Indexing->>DB: PopulateSystemTagsCache()
DB->>DB: Query + Aggregate Counts
DB-->>Cache: SystemTagsCache (with Count)
Indexing->>DB: prefetchSearchPages()
DB->>DB: SELECT COUNT(*) per table
Indexing->>DB: RebuildTagCache()
DB->>DB: Aggregate + Dedupe
DB-->>Cache: In-memory Tag Index
Indexing->>Indexing: ClearNormalizeTagCache()
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~70 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (4 passed)
⚠️ Review ran into problems: timed out fetching pipeline failures after 30000ms.
Actionable comments posted: 9
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pkg/zapscript/launch_title_test.go (1)
22-39: ⚠️ Potential issue | 🟡 Minor

Use filepath.Join for the new test path.

Line 368 adds a POSIX-style path literal in test data. Please construct it with filepath.Join to keep the test portable.

Proposed adjustment:

```diff
 import (
 	"context"
+	"path/filepath"
 	"testing"

 	"github.com/ZaparooProject/go-zapscript"
@@
 	initialResults: []database.SearchResultWithCursor{},
 	fallbackResults: []database.SearchResultWithCursor{
-		{SystemID: "snes", Name: "Quest Named: Ancient Temple", Path: "/test/quest-named.sfc"},
+		{SystemID: "snes", Name: "Quest Named: Ancient Temple", Path: filepath.Join("test", "quest-named.sfc")},
 	},
```

As per coding guidelines, **/*.go: Use filepath.Join for path construction everywhere, including test files — never hardcode POSIX-style paths like /roms/snes/game.sfc as string literals.

Also applies to: 357-369
pkg/zapscript/titles/strategies_test.go (1)
22-33: ⚠️ Potential issue | 🟡 Minor

Avoid POSIX path literals in the new strategy tests.

The new mock results use hardcoded slash paths. Please construct these with filepath.Join in the test fixtures.

Example adjustment:

```diff
 import (
 	"context"
 	"errors"
+	"path/filepath"
 	"testing"
@@
 	SystemID: "PC",
 	Name:     "Touhou Koumakyou: The Embodiment of Scarlet Devil",
-	Path:     "/games/pc/touhou06/th06.exe",
+	Path:     filepath.Join("games", "pc", "touhou06", "th06.exe"),
 },
```

As per coding guidelines, **/*.go: Use filepath.Join for path construction everywhere, including test files — never hardcode POSIX-style paths like /roms/snes/game.sfc as string literals.

Also applies to: 200-377
pkg/zapscript/title_strategies_integration_test.go (1)
22-42: ⚠️ Potential issue | 🟡 Minor

Construct the new fixture paths with filepath.Join.

The new Touhou/Galaxy Quest fixture paths and their expectations are hardcoded as POSIX paths. Please build both the inserted paths and the expectedPath values the same way so the test remains portable.

Example adjustment:

```diff
 import (
 	"context"
 	"database/sql"
 	"os"
+	"path/filepath"
 	"testing"
@@
 	addGame(insertedPC.DBID, "Touhou Koumakyou: The Embodiment of Scarlet Devil",
-		"/games/pc/th06/th06.exe")
+		filepath.Join(string(os.PathSeparator), "games", "pc", "th06", "th06.exe"))
@@
 	name:         "shared_secondary_touhou06",
 	input:        "PC/Touhou 06: The Embodiment of Scarlet Devil",
-	expectedPath: "/games/pc/th06/th06.exe",
+	expectedPath: filepath.Join(string(os.PathSeparator), "games", "pc", "th06", "th06.exe"),
```

As per coding guidelines, **/*.go: Use filepath.Join for path construction everywhere, including test files — never hardcode POSIX-style paths like /roms/snes/game.sfc as string literals.

Also applies to: 400-414, 839-878
pkg/database/mediadb/sql_tags.go (1)
126-197: ⚠️ Potential issue | 🟠 Major

Keep numeric tag storage and lookup backward-compatible.

Line 147 only searches the padded form, so existing unpadded numeric rows can be missed after upgrade. Lines 166 and 196 also let generic inserts bypass the new padded-storage invariant. Pad at the write boundary and either query both padded/natural forms or add a migration that backfills existing rows.

Proposed fix:

```diff
 stmt, err := db.PrepareContext(ctx, `
 	select DBID, TypeDBID, Tag from Tags
 	where DBID = ?
-	or Tag = ?
+	or Tag IN (?, ?)
 	LIMIT 1;
 `)
@@
 err = stmt.QueryRowContext(ctx,
 	tagType.DBID,
 	tags.PadTagValue(tagType.Tag),
+	tags.UnpadTagValue(tagType.Tag),
 ).Scan(
@@
-	res, err := stmt.ExecContext(ctx, dbID, row.TypeDBID, row.Tag)
+	res, err := stmt.ExecContext(ctx, dbID, row.TypeDBID, tags.PadTagValue(row.Tag))
@@
-	res, err := stmt.ExecContext(ctx, dbID, row.TypeDBID, row.Tag)
+	res, err := stmt.ExecContext(ctx, dbID, row.TypeDBID, tags.PadTagValue(row.Tag))
```
🟡 Minor comments (13)
pkg/database/mediadb/batch_inserter.go-307-319 (1)
307-319: ⚠️ Potential issue | 🟡 Minor

Return cached statement close failures from Close().

Right now stmt.Close() failures are only logged, so Close() can return nil even when cleanup failed. Preserve the first close error when Flush() succeeds.

Proposed fix:

```diff
 func (b *BatchInserter) Close() error {
 	err := b.Flush()
 	for rowCount, stmt := range b.stmtCache {
 		if closeErr := stmt.Close(); closeErr != nil {
 			log.Warn().Err(closeErr).
 				Str("table", b.tableName).
 				Int("rows", rowCount).
 				Msg("failed to close cached batch statement")
+			if err == nil {
+				err = fmt.Errorf(
+					"failed to close cached batch statement for table %s with %d rows: %w",
+					b.tableName,
+					rowCount,
+					closeErr,
+				)
+			}
 		}
 	}
 	b.stmtCache = nil
 	return err
 }
```
internal/telemetry/telemetry.go-70-75 (1)
70-75: ⚠️ Potential issue | 🟡 Minor

Add explicit checks for NoLevel and Disabled in threshold filtering, and write tests for the new thresholdWriter type.

The numeric comparison level < w.threshold does not filter special zerolog levels. With threshold: zerolog.ErrorLevel (value 3), both NoLevel (6) and Disabled (7) pass through to the Sentry writer because they compare as greater than the threshold, even though they are not actual error-level events. Explicitly discard these before delegating to the inner writer.

Additionally, as thresholdWriter is newly added, add test coverage for boundary conditions including Debug, Info, Warn, Error, Fatal, Panic, and NoLevel delegation behavior, using testify/mock patterns from pkg/testing/.

Proposed fix:

```diff
 func (w *thresholdWriter) WriteLevel(level zerolog.Level, p []byte) (int, error) {
-	if level < w.threshold {
+	if level == zerolog.NoLevel || level == zerolog.Disabled || level < w.threshold {
 		return len(p), nil
 	}
 	return w.inner.WriteLevel(level, p) //nolint:wrapcheck // transparent passthrough — wrapping changes error identity
 }
```
pkg/readers/pn532/pn532.go-594-601 (1)
594-601: ⚠️ Potential issue | 🟡 Minor

Make detection.DetectAll injectable or defer testing with justification.

This new error-handling branch for ErrNoDevicesFound and unexpected errors is untested. Per coding guidelines, all new Go code must have test coverage. The existing TODO comment documents that full Detect() integration testing is blocked because detection.DetectAll and helpers.GetSerialDeviceList aren't injectable. Either:
- Refactor to inject these dependencies at an interface boundary (enabling mocks via testify/mock), or
- Add a code comment explaining why this specific path is intentionally deferred, and track it with an issue.
pkg/database/filters/parser_test.go-507-510 (1)
507-510: ⚠️ Potential issue | 🟡 Minor

Update the test case name to match the new expectation.

The expected value now collapses repeated spaces to a single hyphen, but the test name still says "multiple hyphens".

Proposed rename:

```diff
-	name: "Multiple spaces become multiple hyphens",
+	name: "Multiple spaces collapse to one hyphen",
```
pkg/database/tags/tag_mappings.go-82-82 (1)
82-82: ⚠️ Potential issue | 🟡 Minor

Minor: scandinavia → TagRegionEU is geographically coarse.

Scandinavia typically refers to DK/NO/SE (sometimes also FI/IS). Grouping it under the broader EU region means downstream filters by TagRegionEU will pick up Scandinavian-only releases. Acceptable as a pragmatic fallback given there's no Nordic-specific tag and the comment acknowledges the multi-country nature — just worth being aware of if a TagRegionNordic/TagRegionScandinavia ever gets introduced.
pkg/database/mediadb/mediadb.go-2130-2134 (1)
2130-2134: ⚠️ Potential issue | 🟡 Minor

Avoid failing optimization on a best-effort prefetch miss.

page_prefetch is only warming cache pages, but the shared executor marks optimization failed if this step returns an error. Give it retries like analyze, or wrap it as non-fatal so a transient read/lock error does not leave optimization in failed.

🛠️ Possible non-fatal wrapper:

```diff
 optimizationStep{
 	name: "page_prefetch",
 	fn: func() error {
-		return db.prefetchSearchPages(db.ctx)
+		if err := db.prefetchSearchPages(db.ctx); err != nil {
+			log.Warn().Err(err).Msg("search page prefetch failed; continuing optimization")
+		}
+		return nil
 	},
 	maxRetries: 0,
 	retryDelay: rd,
 },
```
pkg/database/mediadb/slug_cache.go-285-287 (1)
285-287: ⚠️ Potential issue | 🟡 Minor

Move the gosec suppression onto the interpolated SQL statement.

The nolint:gosec on Line 285 applies to typeList, but the SQL interpolation happens on Line 287, so CI may still report the gosec finding there.

Proposed fix:

```diff
-//nolint:gosec // Safe: ZapScriptTagTypes contains internal compile-time constants, not user input
 typeList := "'" + strings.Join(database.ZapScriptTagTypes, "', '") + "'"
+//nolint:gosec // Safe: ZapScriptTagTypes contains internal compile-time constants, not user input
 rows, err := db.conn().QueryContext(ctx, fmt.Sprintf(`
```
pkg/database/keys_test.go-190-277 (1)
190-277: ⚠️ Potential issue | 🟡 Minor

Add cases for rev and empty-tag skipping.

The new table covers most expanded zap-script tags, but not the newly supported rev type or the "ignore empty Tag" behavior. As per coding guidelines, **/*.go: Write tests for all new code.

Suggested table entries:

```diff
 {
 	name:     "release tag uses generic format",
 	systemID: "SNES",
 	gameName: "Game",
 	tags:     []TagInfo{{Tag: "homebrew", Type: "release"}},
 	want:     "@SNES/Game (release:homebrew)",
 },
+{
+	name:     "revision tag uses generic format",
+	systemID: "SNES",
+	gameName: "Game",
+	tags:     []TagInfo{{Tag: "1-0", Type: "rev"}},
+	want:     "@SNES/Game (rev:1-0)",
+},
+{
+	name:     "empty tag skipped",
+	systemID: "SNES",
+	gameName: "Game",
+	tags:     []TagInfo{{Tag: "", Type: "publisher"}, {Tag: "1994", Type: "year"}},
+	want:     "@SNES/Game (year:1994)",
+},
 }
```
pkg/database/tags/string_parsers.go-149-162 (1)
149-162: ⚠️ Potential issue | 🟡 Minor

Continue scanning after invalid (disc/(disk prefixes.

Line 160 returns empty on the first "(disc" prefix if the next character is not whitespace, so "(Disco) Game (Disk 1 of 2)" misses the later valid disk token.

Possible direction:

```diff
-	// Try "(disc" first, then "(disk".
-	prefixLen := 5
-	idx := strings.Index(lower, "(disc")
-	if idx == -1 {
-		idx = strings.Index(lower, "(disk")
-		if idx == -1 {
-			return parseMatch{}
-		}
-	}
-	pos := idx + prefixLen // after "(disc" or "(disk"
-	// The character immediately after the prefix must be whitespace (not "o" in "Disco", "ette" in "Diskette").
-	if pos >= len(s) || !isWhitespace(s[pos]) {
-		return parseMatch{}
-	}
+	prefixLen := 5
+	idx := -1
+	pos := 0
+	for searchFrom := 0; searchFrom < len(lower); {
+		discIdx := strings.Index(lower[searchFrom:], "(disc")
+		diskIdx := strings.Index(lower[searchFrom:], "(disk")
+		if discIdx >= 0 {
+			discIdx += searchFrom
+		}
+		if diskIdx >= 0 {
+			diskIdx += searchFrom
+		}
+		switch {
+		case discIdx == -1:
+			idx = diskIdx
+		case diskIdx == -1:
+			idx = discIdx
+		default:
+			idx = min(discIdx, diskIdx)
+		}
+		if idx == -1 {
+			return parseMatch{}
+		}
+		pos = idx + prefixLen
+		if pos < len(s) && isWhitespace(s[pos]) {
+			break
+		}
+		searchFrom = idx + 1
+	}
```
pkg/database/mediadb/sql_prefetch.go-44-50 (1)
44-50: ⚠️ Potential issue | 🟡 Minor

Return when the context is canceled.

Line 47 treats cancellation/deadline errors like ordinary per-table misses, so shutdown can keep issuing queries and report success. Short-circuit on ctx.Err() before continuing.

Proposed fix:

```diff
 for _, table := range prefetchTables {
+	if ctxErr := ctx.Err(); ctxErr != nil {
+		return ctxErr
+	}
 	var count int64
 	//nolint:gosec // table names are hardcoded literals, not user input
 	if err := db.sql.QueryRowContext(ctx, "SELECT COUNT(*) FROM "+table).Scan(&count); err != nil {
+		if ctxErr := ctx.Err(); ctxErr != nil {
+			return ctxErr
+		}
 		log.Warn().Err(err).Str("table", table).Msg("page prefetch failed, skipping")
 		continue
 	}
```
pkg/database/tags/storage.go-82-91 (1)
82-91:⚠️ Potential issue | 🟡 MinorUse ASCII digit detection for stored tag values.
unicode.IsDigitaccepts non-ASCII digits like Arabic-Indic numerals, but the padding logic uses ASCII"0"and byte lengths. This creates an inconsistency: if a tag segment contains non-ASCII decimal digits,isAllDigitswould return true and the padding logic would proceed with mixed encodings. The helper should restrict to ASCII digits'0'..'9'to match the actual storage implementation.Proposed fix
```diff
 import (
 	"strings"
-	"unicode"
 )
@@
 func isAllDigits(s string) bool {
 	if s == "" {
 		return false
 	}
-	for _, r := range s {
-		if !unicode.IsDigit(r) {
+	for i := 0; i < len(s); i++ {
+		if s[i] < '0' || s[i] > '9' {
 			return false
 		}
 	}
 	return true
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/tags/storage.go` around lines 82-91: the isAllDigits helper currently uses unicode.IsDigit, which accepts non-ASCII numerals and can mismatch the ASCII "0" padding/byte-length logic; update isAllDigits to only accept ASCII digits by iterating the string and returning false if any byte is outside '0'..'9' (keeping the empty-string check), so functions that rely on ASCII byte-width/padding (the tag segment padding logic) see true only for ASCII-only digit segments.

pkg/database/tags/filename_parser_test.go (1)
724-729: ⚠️ Potential issue | 🟡 Minor

Make the “no publisher or credit” regression explicit.

This case currently passes even if parsing adds `credit:*` or `publisher:*`, because the integration helper only checks required tags and allows extras. Add negative assertions for those prefixes so the stated regression is enforced.

Example test tightening
```diff
 {
 	name:     "No-Intro format produces no publisher or credit",
 	filename: "Super Mario World (USA).sfc",
 	wantTags: []string{"region:us", "lang:en"},
+	// Also assert no credit:/publisher: if this table is extended with negative expectations.
 },
```

As per coding guidelines, `**/*.go`: Write tests for all new code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/tags/filename_parser_test.go` around lines 724-729: tighten the "No-Intro format produces no publisher or credit" test by asserting that parsed tags do not include any tags with the prefixes "credit:" or "publisher:"; locate the test case with name "No-Intro format produces no publisher or credit" (uses filename "Super Mario World (USA).sfc" and wantTags) and, after invoking the parser and asserting the required tags, add negative checks that fail if any returned tag starts with "credit:" or "publisher:" so the regression (adding credit/publisher) is explicitly detected.

pkg/database/mediadb/tagfilter_test.go (1)
292-311: ⚠️ Potential issue | 🟡 Minor

Assert the full argument count before indexing.

Line 295 only guarantees indexes through `args[7]`, but this test reads through `args[11]`. Use an exact length assertion so failures report cleanly instead of panicking.

Proposed test fix
```diff
 _, args := BuildTagFilterSQL(filters)
 // AND: addon:barcodeboy → addon / barcode:barcodeboy (doubled for UNION)
-require.Greater(t, len(args), 7)
+require.Len(t, args, 12)
 assert.Equal(t, "addon", args[0])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/tagfilter_test.go` around lines 292 - 311, The test reads args[0]..args[11] but only checks len(args) > 7; update the assertion on the result of BuildTagFilterSQL to assert the exact expected count (12) before indexing so failures show a clear message instead of panicking—replace the require.Greater(t, len(args), 7) with an exact length assertion (e.g., require.Equal/require.Len for 12) for the args slice returned by BuildTagFilterSQL in tagfilter_test.go.
🧹 Nitpick comments (2)
pkg/helpers/utils.go (2)
242-378: Add tests for the new `ListZip` error/ZIP64 branches.

The refactor introduces several new code paths that `TestListZip` in `pkg/helpers/utils_test.go` doesn't exercise:

- `file too small to be a valid zip` (size < 22)
- `invalid zip central directory: offset=… size=… fileSize=…` (corrupted/inconsistent EOCD numbers)
- The ZIP64 locator + ZIP64 EOCD parsing branch (lines 322–336)
- The >1 KB tail fallback path (line 281 onward), which is exercised only indirectly by the tiny 34-byte `test.txt` today — not by a real zip whose EOCD lies beyond the 1 KB window

As per coding guidelines: "Write tests for all new code — use testify/mock, sqlmock, afero, and patterns from pkg/testing/". A handful of fixture zips (or in-memory `afero` byte buffers) covering these branches would prevent silent regressions in the manual parser.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/helpers/utils.go` around lines 242 - 378, Add unit tests for ListZip in pkg/helpers/utils_test.go to exercise the newly added error and ZIP64 branches: create fixtures (in-memory via afero or byte buffers per pkg/testing patterns) that simulate (1) a file smaller than zipEOCDMinSize to trigger "file too small to be a valid zip", (2) a corrupted EOCD that yields an inconsistent cdOffset/cdSize to trigger the "invalid zip central directory" error, (3) a ZIP64 archive with a ZIP64 locator and EOCD to exercise the ZIP64 locator + ZIP64 EOCD parsing branch, and (4) a zip whose EOCD is located beyond the 1 KB tail so the >1 KB fallback path is used; implement tests in TestListZip asserting returned names or expected errors using testify assertions and any required mocks/pools to mirror zipTailPool usage.
269-313: Avoid the redundant second read when the file fits entirely in the small tail.

If `size <= smallTail` (1024), `firstBuf` already contains the whole file and `findEOCD` has scanned all of it. Falling through to the 64 KB fallback then re-reads the same bytes and grabs a pool buffer just to call `findEOCD` again on identical data, before erroring out. Worth short-circuiting to the EOCD-not-found error when `firstLen == size`:

♻️ Proposed short-circuit
```diff
 eocdOff = findEOCD(firstBuf)
 if eocdOff >= 0 {
 	tail = firstBuf
 } else {
+	if firstLen == size {
+		// Entire file already scanned; no point re-reading via the pool.
+		return nil, errors.New("zip EOCD signature not found")
+	}
 	// Fall back to the full max-comment search (22 + 65535 bytes).
```

Purely a minor efficiency tidy-up for the small-file path; no correctness impact.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/helpers/utils.go` around lines 269 - 313, The small-file path currently re-reads the entire file when firstBuf already contains the full file; after calling findEOCD(firstBuf) in the first-read branch, if eocdOff < 0 and firstLen == size (i.e. the whole file was read), short-circuit and return the same "zip EOCD signature not found" error instead of falling through to the 64KB fallback that reads again and borrows from zipTailPool; modify the logic around firstLen/firstBuf and findEOCD so that when firstLen == size and no EOCD is found you return the error immediately (preserving the existing error text/semantics) and do not allocate or use the pool or perform the second ReadAt.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@go.mod`:
- Line 10: Replace the non-existent dependency version by updating the go.mod
entry for module github.com/ZaparooProject/go-zapscript from v0.7.2 to the
latest released tag v0.7.0 (or explicitly document/justify in the PR why the
unreleased main branch is required), ensuring the module line reads the
published version; run go mod tidy afterwards to verify and commit the updated
go.mod and go.sum.
In `@pkg/api/methods/media.go`:
- Around line 81-84: The truncation currently applied to every tag group (the
block that checks len(group) > limit and slices group) should only run for
capped tag categories; add an allowlist (e.g., cappedTypes or
config.CappedTagTypes) of tag names to limit (credit, long-tail tags) and skip
truncation for taxonomy/complete categories like year. In the loop that
builds/appends groups (where variables group, limit, and result are used in
media.go), replace the unconditional truncation with a membership check against
the allowlist (or config) so only allowed categories are truncated; leave other
categories untouched and append their full group to result. Ensure the allowlist
is injectable/configurable and referenced by the truncation branch.
In
`@pkg/database/filters/testdata/rapid/TestPropertyParseTagFiltersNormalization/TestPropertyParseTagFiltersNormalization-20260420120516-2779021.fail`:
- Around line 1-3: The failure shows ParseTagFilters("a:--") is normalizing the
value "--" down to "-"; inspect the ParseTagFilters implementation and any
normalization helpers it calls (e.g., tokenization or value normalization
functions used by TestPropertyParseTagFiltersNormalization) and stop collapsing
duplicate hyphens or trimming significant punctuation: preserve literal
characters when parsing tag values, ensure the parser treats consecutive '-' as
distinct characters (not as a single token or a range marker), adjust the
normalization routine to return the literal "--" for the key "a" (and update
related helpers so they don't strip/merge repeated punctuation), add/adjust a
unit test asserting ParseTagFilters("a:--") -> "--", and only remove the .fail
file once the parser returns the expected value.
In `@pkg/database/mediadb/migrations/20260421120000_tag_counts.sql`:
- Line 5: The migration adds a NOT NULL Count column to SystemTagsCache but
leaves existing rows at 0; update the migration to populate Count with real tag
usage (e.g., compute per-tag usage from the authoritative tag-usage
table/relationship and UPDATE SystemTagsCache SET Count = <computed_count> for
each tag) or, alternatively, mark the cache as stale so the application rebuilds
it on startup; locate the ALTER TABLE for SystemTagsCache and add an UPDATE that
derives counts from the existing tag-usage source (or set a rebuild flag) so
in-memory warm-up sees correct popularity values immediately.
In `@pkg/database/tags/filename_parser.go`:
- Around line 1189-1208: The code currently promotes the first unknown bracket
group to TagTypePublisher (TOSEC positional rule) before running
classifyUnmappedParen, which lets things classifyUnmappedParen would drop become
false publishers; change the order so you call classifyUnmappedParen(tag,
ctx.CurrentTag) immediately after mapFilenameTagToCanonical returns empty, and
if classified return its result (including nil to drop); only if
classifyUnmappedParen did not classify this group and ctx.CurrentIndex == 0 &&
ctx.YearExtractedFromFile should you create the publisher tag (using
slugifyCompanyName(ctx.CurrentTag) and TagTypePublisher/TagSourceBracketed) so
that only a classified company credit can be promoted to publisher.
In `@pkg/helpers/paths.go`:
- Around line 263-280: The code builds normalized prefix strings by
concatenating with "/" which breaks filepath semantics; update constructions to
use filepath.Join before normalizing (e.g., build normMediaPrefix with
filepath.Join(NormalizePathForComparison(DataDir(pl)), config.MediaDir) and set
lp.normMediaPath using filepath.Join(normMediaPrefix,
strings.ToLower(l.SystemID)) where launcherPrecomp.normMediaPath is assigned),
and when appending rootPairs replace normRoot+"/"+normFolder with
filepath.Join(normRoot, normFolder) (references: NormalizePathForComparison,
DataDir, config.MediaDir, launcherPrecomp.normMediaPath, rootPairs, folderCache,
normRoots, l.Folders, pathHasPrefixNormalized). Also apply the same fixes in the
other block noted (the section around the second occurrence) and add regression
tests that cover root="/" and launcher folder="." to ensure
pathHasPrefixNormalized still matches.
- Around line 310-315: The PathIsLauncher flow is currently passing a fully
normalized path (NormalizePathForComparison) into the launcher Test callbacks
via the optimized MatchSystemFile path, but the public PathIsLauncher call only
lowercases the original path (preserving separator/URI shape), causing
inconsistent behavior for Test implementations; fix by computing and passing a
separate "raw lowercased" value (e.g., rawLower := strings.ToLower(path)) into
m.pathIsLauncher / MatchSystemFile and ensure l.Test is invoked with that
rawLower value (instead of the fully normalized result from
NormalizePathForComparison); update calls in m.pathIsLauncher, MatchSystemFile,
and the PathIsLauncher entry so Test callbacks consistently receive the raw
lowercased path while still using NormalizePathForComparison for internal
matching.
In `@pkg/readers/tty2oled/tty2oled.go`:
- Around line 362-366: The operation worker is being started with a possibly
already-canceled operationWorkerCtx so operationWorker() can exit immediately;
before r.wg.Add(1) and go r.operationWorker() in NewReader/Open, recreate the
context and cancel pair (e.g., set r.operationWorkerCtx, r.operationWorkerCancel
= context.WithCancel(parentCtx)) so the worker always gets a fresh, non-canceled
context, and ensure any previous cancel is invoked only when closing; apply the
same recreation pattern around the other start site(s) referenced (lines around
371-380) so each new Start/Open resets operationWorkerCtx before launching
operationWorker().
In `@pkg/zapscript/titles/strategies.go`:
- Around line 256-260: The DB error from gamesdb.SearchMediaBySecondarySlug is
being swallowed and converted to “no match”; change the error handling in the
block that calls gamesdb.SearchMediaBySecondarySlug(ctx, systemID,
matchInfo.SecondaryTitleSlug, tagFilters) so that instead of returning nil, "",
nil on err you return the error (e.g., nil, "", err) so the caller (ResolveTitle
/ strategyErr) can propagate the failure; you may keep the existing
log.Warn().Err(err).Msgf(...) but ensure the function returns the error rather
than a nil error.
---
Outside diff comments:
In `@pkg/database/mediadb/sql_tags.go`:
- Around line 126-197: sqlFindTag only searches the padded Tag and
sqlInsertTag/sqlInsertTagWithPreparedStmt write the raw tag, causing numeric
tags to be missed; fix by padding at the write boundary and searching both
forms: in sqlInsertTag and sqlInsertTagWithPreparedStmt call
tags.PadTagValue(row.Tag) before ExecContext so DB always stores the padded
form, and in sqlFindTag change the lookup to check both the incoming tag and its
padded variant (e.g., use tagType.Tag and tags.PadTagValue(tagType.Tag) in the
WHERE clause) while continuing to return tags.UnpadTagValue(row.Tag) to callers.
In `@pkg/zapscript/launch_title_test.go`:
- Around line 22-39: The test uses a hardcoded POSIX path literal (e.g.
"/roms/snes/game.sfc"); update the test to construct that path with
filepath.Join for portability: add "path/filepath" to the imports and replace
the literal(s) (look for variables like romPath/testPath in launch_title_test.go
around the failing lines) with filepath.Join("","roms","snes","game.sfc") or
filepath.Join("roms","snes","game.sfc") as appropriate so the test works on
Windows and POSIX.
In `@pkg/zapscript/title_strategies_integration_test.go`:
- Around line 22-42: Replace all hardcoded POSIX-style fixture paths in
title_strategies_integration_test.go (including the newly added Touhou/Galaxy
Quest fixture insertions and any expectedPath variables) with
filepath.Join-built paths; import "path/filepath" and construct paths by joining
each path segment (e.g., filepath.Join("roms","snes","game.sfc")) for both the
inserted fixture path values and the expectedPath comparisons so the test is
platform-portable.
In `@pkg/zapscript/titles/strategies_test.go`:
- Around line 22-33: Tests in pkg/zapscript/titles/strategies_test.go use
hardcoded POSIX-style path literals in the mock result fixtures; change those
string literals to platform-safe constructions using filepath.Join (import
"path/filepath") when building paths for mock results and fixtures (replace
occurrences in the file, including the region mentioned around lines 200-377).
Locate the test variables and helper calls that create mock result paths (e.g.,
the mock result arrays/structs and fixture builders in this test file) and
replace strings like "/roms/snes/game.sfc" with filepath.Join("roms", "snes",
"game.sfc") so tests run correctly on non-POSIX platforms.
---
Minor comments:
In `@internal/telemetry/telemetry.go`:
- Around line 70-75: The WriteLevel method on thresholdWriter currently only
does numeric comparison (level < w.threshold) which lets special zerolog levels
(zerolog.NoLevel, zerolog.Disabled) pass; update thresholdWriter.WriteLevel to
explicitly drop levels equal to zerolog.NoLevel and zerolog.Disabled before
delegating to inner.WriteLevel, keeping the existing nolint comment for the
passthrough; then add unit tests for thresholdWriter covering boundary behaviors
for Debug, Info, Warn, Error, Fatal, Panic and NoLevel (using testify/mock
patterns from pkg/testing) to assert correct delegation or suppression at each
threshold.
In `@pkg/database/filters/parser_test.go`:
- Around line 507-510: The test case name "Multiple spaces become multiple
hyphens" is outdated; update the test entry's name string in parser_test.go (the
test case that sets input []string{"name:super mario world"} and expected
Value "super-mario-world") to reflect that repeated spaces collapse to a single
hyphen (e.g., "Multiple spaces collapse to a single hyphen" or "Multiple spaces
become single hyphen") so the description matches the new expectation.
In `@pkg/database/keys_test.go`:
- Around line 190-277: TestBuildTitleZapScript lacks cases for the new
TagInfo.Type "rev" and for skipping tags whose Tag field is empty; add table
entries in the tests slice inside TestBuildTitleZapScript that assert a rev tag
renders like the others (e.g., tags: []TagInfo{{Tag:"1", Type:"rev"}} with want
"@SNES/Game (rev:1)") and a case where an empty Tag (e.g., Tag:"",
Type:"publisher" or any type) is present but should be ignored (expect same
output as no-tag case), so update the tests to include both a "rev tag" case and
an "empty tag skipped" case referencing TestBuildTitleZapScript and the TagInfo
struct.
In `@pkg/database/mediadb/batch_inserter.go`:
- Around line 307-319: The Close method currently only logs stmt.Close()
failures so Close() may return nil even when cleanup fails; update
BatchInserter.Close to preserve and return the first non-nil error from closing
cached statements after calling b.Flush() (e.g., keep a variable closeErrFirst
or reuse err if Flush succeeded), while still logging each individual close
error with log.Warn().Err(...).Str("table", b.tableName).Int("rows",
rowCount).Msg(...). Ensure b.stmtCache is still cleared and Close returns the
preserved error (or the original Flush error if non-nil).
In `@pkg/database/mediadb/mediadb.go`:
- Around line 2130-2134: The page_prefetch step currently has maxRetries: 0 and
returns any error from db.prefetchSearchPages, which makes the shared executor
mark optimization as failed for a best-effort cache warm; change the
optimizationStep for name "page_prefetch" so it is non-fatal or retried: either
set maxRetries and retryDelay similar to the "analyze" step (increase maxRetries
to the same value and reuse rd) or wrap the fn so it calls
db.prefetchSearchPages(db.ctx) but logs errors and returns nil (treat failures
as non-fatal). Update the entry for optimizationStep{name: "page_prefetch", fn:
..., maxRetries: 0, retryDelay: rd} to use the chosen approach referencing
optimizationStep, "page_prefetch", and prefetchSearchPages.
In `@pkg/database/mediadb/slug_cache.go`:
- Around line 285-287: The gosec nolint comment is on the typeList declaration
but the actual SQL interpolation occurs in the fmt.Sprintf passed to
db.conn().QueryContext (using typeList), so move the //nolint:gosec comment to
the line with the interpolated SQL call (the fmt.Sprintf(...) used in
QueryContext) to suppress the finding where the concatenation is used; keep the
explanatory comment about safety (ZapScriptTagTypes being compile-time
constants) and leave typeList and database.ZapScriptTagTypes unchanged.
In `@pkg/database/mediadb/sql_prefetch.go`:
- Around line 44-50: In the prefetch loop over prefetchTables inside
sql_prefetch.go the code treats context cancellation or deadline exceeded as a
normal per-table error; update the loop that runs the
db.sql.QueryRowContext(ctx, "SELECT COUNT(*) FROM "+table).Scan(&count) to check
ctx.Err() after an error and if non-nil return (or short-circuit) instead of
continuing; refer to the loop over prefetchTables, the db.sql.QueryRowContext
call and the existing log.Warn().Err(err).Str("table", table).Msg("page prefetch
failed, skipping") so you only log-and-continue for non-context errors and
immediately return/stop when ctx.Err() indicates cancellation.
In `@pkg/database/mediadb/tagfilter_test.go`:
- Around line 292-311: The test reads args[0]..args[11] but only checks
len(args) > 7; update the assertion on the result of BuildTagFilterSQL to assert
the exact expected count (12) before indexing so failures show a clear message
instead of panicking—replace the require.Greater(t, len(args), 7) with an exact
length assertion (e.g., require.Equal/require.Len for 12) for the args slice
returned by BuildTagFilterSQL in tagfilter_test.go.
In `@pkg/database/tags/filename_parser_test.go`:
- Around line 724-729: Tighten the "No-Intro format produces no publisher or
credit" test in filename_parser_test.go by asserting that parsed tags do not
include any tags with the prefixes "credit:" or "publisher:"; locate the test
case with name "No-Intro format produces no publisher or credit" (uses filename
"Super Mario World (USA).sfc" and wantTags) and after invoking the
parser/asserting required tags, add negative checks that fail if any returned
tag startsWith "credit:" or "publisher:" so the regression (adding
credit/publisher) is explicitly detected.
In `@pkg/database/tags/storage.go`:
- Around line 82-91: The isAllDigits helper currently uses unicode.IsDigit which
accepts non-ASCII numerals and can mismatch the ASCII "0" padding/byte-length
logic; update isAllDigits to only accept ASCII digits by iterating the string
and returning false if any rune is outside '0'..'9' (keeping the empty-string
check), so functions that rely on ASCII byte-width/padding (the tag segment
padding logic) see only true for ASCII-only digit segments.
In `@pkg/database/tags/string_parsers.go`:
- Around line 149-162: The current logic stops scanning on the first
"(disc"/"(disk" hit when the following char isn't whitespace (using prefixLen,
idx, pos) which causes later valid tokens to be missed; update the parsing in
string_parsers.go to iterate/search for subsequent occurrences instead of
returning parseMatch{} on a whitespace check failure: loop using strings.Index
starting from idx+1 (or use strings.Index with a slice of lower) to find the
next "(disc" or "(disk" match, recompute pos each time and only return a
non-empty parseMatch when isWhitespace(s[pos]) holds, otherwise continue
scanning until no more matches and then return parseMatch{}.
In `@pkg/database/tags/tag_mappings.go`:
- Line 82: The mapping for the key "scandinavia" currently uses TagTypeRegion
with value TagRegionEU which is geographically coarse; either add a new region
enum (e.g., TagRegionNordic or TagRegionScandinavia) in the tag definitions and
update the mapping in tag_mappings.go to use that new symbol, or if you intend
to keep the EU fallback, add a clear TODO comment next to the "scandinavia"
entry referencing TagTypeRegion/TagRegionEU so future reviewers know to remap to
TagRegionNordic/TagRegionScandinavia if such a tag is introduced.
In `@pkg/readers/pn532/pn532.go`:
- Around line 594-601: The new error branch around detection.DetectAll and
ErrNoDevicesFound lacks tests because detection.DetectAll and
helpers.GetSerialDeviceList aren't injectable; refactor to inject these
dependencies (e.g., define an interface with methods DetectAll(ctx, *Options)
([]Device, error) and GetSerialDeviceList() ([]string, error), add a field of
that interface to the PN532 reader struct or pass it into Detect(), replace
direct calls to detection.DetectAll and helpers.GetSerialDeviceList with the
interface methods, and add unit tests using testify/mock to simulate
ErrNoDevicesFound and unexpected errors), or if you cannot refactor now, add a
brief TODO comment above the Detect() implementation referencing
detection.DetectAll and helpers.GetSerialDeviceList and create a tracked issue
ID in the comment explaining test deferral so the untested branch is explicitly
documented.
---
Nitpick comments:
In `@pkg/helpers/utils.go`:
- Around line 242-378: Add unit tests for ListZip in pkg/helpers/utils_test.go
to exercise the newly added error and ZIP64 branches: create fixtures (in-memory
via afero or byte buffers per pkg/testing patterns) that simulate (1) a file
smaller than zipEOCDMinSize to trigger "file too small to be a valid zip", (2) a
corrupted EOCD that yields an inconsistent cdOffset/cdSize to trigger the
"invalid zip central directory" error, (3) a ZIP64 archive with a ZIP64 locator
and EOCD to exercise the ZIP64 locator + ZIP64 EOCD parsing branch, and (4) a
zip whose EOCD is located beyond the 1 KB tail so the >1 KB fallback path is
used; implement tests in TestListZip asserting returned names or expected errors
using testify assertions and any required mocks/pools to mirror zipTailPool
usage.
- Around line 269-313: The small-file path currently re-reads the entire file
when firstBuf already contains the full file; after calling findEOCD(firstBuf)
in the first-read branch, if eocdOff < 0 and firstLen == size (i.e. the whole
file was read), short-circuit and return the same "zip EOCD signature not found"
error instead of falling through to the 64KB fallback that reads again and
borrows from zipTailPool; modify the logic around firstLen/firstBuf and findEOCD
so that when firstLen == size and no EOCD is found you return the error
immediately (preserving the existing error text/semantics) and do not allocate
or use the pool or perform the second ReadAt.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5215ec68-04a3-40ae-9bdc-cf4a1ee920ff
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (63)
go.mod
internal/telemetry/telemetry.go
pkg/api/methods/media.go
pkg/api/methods/media_search_test.go
pkg/api/methods/media_tags_test.go
pkg/database/database.go
pkg/database/filters/parser_property_test.go
pkg/database/filters/parser_test.go
pkg/database/filters/testdata/rapid/TestPropertyParseTagFiltersNormalization/TestPropertyParseTagFiltersNormalization-20260420120516-2779021.fail
pkg/database/keys_test.go
pkg/database/mediadb/batch_inserter.go
pkg/database/mediadb/concurrent_operations_test.go
pkg/database/mediadb/mediadb.go
pkg/database/mediadb/mediadb_integration_test.go
pkg/database/mediadb/migrations/20260421120000_tag_counts.sql
pkg/database/mediadb/optimization_test.go
pkg/database/mediadb/slug_cache.go
pkg/database/mediadb/sql_cache.go
pkg/database/mediadb/sql_prefetch.go
pkg/database/mediadb/sql_search.go
pkg/database/mediadb/sql_tags.go
pkg/database/mediadb/sql_test.go
pkg/database/mediadb/tag_cache.go
pkg/database/mediadb/tag_cache_test.go
pkg/database/mediadb/tagfilter.go
pkg/database/mediadb/tagfilter_test.go
pkg/database/mediascanner/fsutil.go
pkg/database/mediascanner/indexing_pipeline.go
pkg/database/mediascanner/mediascanner.go
pkg/database/slugs/media_parsing_game.go
pkg/database/slugs/normalization_test.go
pkg/database/slugs/slug_helpers.go
pkg/database/slugs/slugify_test.go
pkg/database/slugs/stages_test.go
pkg/database/tags/aliases.go
pkg/database/tags/aliases_test.go
pkg/database/tags/filename_parser.go
pkg/database/tags/filename_parser_test.go
pkg/database/tags/storage.go
pkg/database/tags/storage_test.go
pkg/database/tags/string_parsers.go
pkg/database/tags/string_parsers_test.go
pkg/database/tags/tag_mappings.go
pkg/database/tags/tag_mappings_test.go
pkg/database/tags/tag_values.go
pkg/database/tags/tags.go
pkg/helpers/logging.go
pkg/helpers/paths.go
pkg/helpers/utils.go
pkg/helpers/utils_test.go
pkg/readers/pn532/pn532.go
pkg/readers/tty2oled/tty2oled.go
pkg/readers/tty2oled/tty2oled_test.go
pkg/service/autodetect.go
pkg/service/autodetect_test.go
pkg/service/readers.go
pkg/service/readers_test.go
pkg/zapscript/launch_title_test.go
pkg/zapscript/title_strategies_integration_test.go
pkg/zapscript/titles/constants.go
pkg/zapscript/titles/resolve.go
pkg/zapscript/titles/strategies.go
pkg/zapscript/titles/strategies_test.go
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pkg/database/mediadb/sql_tags.go (1)
128-150: ⚠️ Potential issue | 🟠 Major

Scope tag lookup by `TypeDBID` to avoid cross-type collisions.

Lines 133 through 149 can return the wrong tag row when multiple tag types share the same padded value, e.g. `players:2`, `disc:2`, and `rev:2` all become `0002`. This can attach media to the wrong `TagDBID`.

🐛 Proposed fix
```diff
 select DBID, TypeDBID, Tag from Tags
 where DBID = ?
-   or Tag = ?
-   or Tag = ?
+   or (TypeDBID = ? and (Tag = ? or Tag = ?))
 LIMIT 1;
@@
 err = stmt.QueryRowContext(ctx,
 	tagType.DBID,
+	tagType.TypeDBID,
 	tagType.Tag,
 	tags.PadTagValue(tagType.Tag),
 ).Scan(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/sql_tags.go` around lines 128-150: the query uses DBID and ORed Tag matches, which allows cross-type collisions; change the SQL in PrepareContext to scope by TypeDBID and match Tag values within that type (e.g. "WHERE TypeDBID = ? AND (Tag = ? OR Tag = ?) LIMIT 1"), then update the parameters passed to stmt.QueryRowContext to use the tag type identifier (replace tagType.DBID with the correct TypeDBID field) followed by tagType.Tag and tags.PadTagValue(tagType.Tag) so the lookup returns the tag row for the correct tag type (adjust the field name if your type id is named differently).

pkg/database/mediadb/slug_cache.go (2)
230-260: ⚠️ Potential issue | 🟠 Major

Unpad tag values on these read paths too.

With `sqlInsertTag*` now storing padded values, these direct `Tags.Tag` scans can leak padded values such as `players:0002` into cached search results and ZapScript tags.

🐛 Proposed fix
```diff
 import (
@@
 	"github.com/ZaparooProject/go-zapscript"
 	"github.com/ZaparooProject/zaparoo-core/v2/pkg/database"
+	dbtags "github.com/ZaparooProject/zaparoo-core/v2/pkg/database/tags"
 	"github.com/rs/zerolog/log"
 )
@@
 if scanErr := rows.Scan(&tag.Tag, &tag.Type); scanErr != nil {
 	return result, fmt.Errorf("failed to scan tag: %w", scanErr)
 }
+tag.Tag = dbtags.UnpadTagValue(tag.Tag)
 result.Tags = append(result.Tags, tag)
@@
 if scanErr := rows.Scan(&tag.Type, &tag.Tag); scanErr != nil {
 	return nil, fmt.Errorf("failed to scan tag: %w", scanErr)
 }
+tag.Tag = dbtags.UnpadTagValue(tag.Tag)
 tags = append(tags, tag)
```

Also applies to: 346-352
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/slug_cache.go` around lines 230 - 260, The SELECT scans currently read padded tag strings into database.TagInfo.Tag (in the rows.Next/rows.Scan loop) and append them to result.Tags, which leaks padded values; after scanning into tag.Tag call the existing unpad helper (or add a small UnpadTag utility) to remove the padding (e.g., convert "players:0002" → "players:2") before appending to result.Tags, and apply the same unpadding change to the other read path mentioned (the block at lines 346-352) so both query results and ZapScript/tag-cache consumers receive unpadded tag values; reference rows.Scan, database.TagInfo.Tag, result.Tags append, and sqlInsertTag* as the relevant symbols to change.
285-336: ⚠️ Potential issue | 🔴 Critical

Fix mutable slice interpolation and missing tag unpadding in GetZapScriptTagsBySystemAndPath.

Two issues here:

1. Mutable slice in SQL: `database.ZapScriptTagTypes` is an exported `var []string`, not a compile-time constant. The gosec justification is incorrect. Use parameterized placeholders instead.
2. Missing tag unpadding: Lines 349-352 return `tag.Tag` directly without unpadding, while all other read paths (`tag_cache.go`, `sql_tags.go`, `sql_search.go`, `sql_cache.go`) consistently unpad before returning `TagInfo`. This exposes padded values (e.g., `"0002"`) instead of the expected display values (e.g., `"2"`).

🛡️ Proposed fix
```diff
+import "path/to/dbtags" // or use the correct package
+
+if len(database.ZapScriptTagTypes) == 0 {
+	return nil, nil
+}
+typeList := prepareVariadic("?", ",", len(database.ZapScriptTagTypes))
+queryArgs := make([]any, 0, 2+(len(database.ZapScriptTagTypes)*4))
+queryArgs = append(queryArgs, systemID, path)
+for range 4 {
+	for _, tagType := range database.ZapScriptTagTypes {
+		queryArgs = append(queryArgs, tagType)
+	}
+}
-typeList := "'" + strings.Join(database.ZapScriptTagTypes, "', '") + "'"
-//nolint:gosec // ZapScriptTagTypes is a compile-time constant list, not user input
 rows, err := db.conn().QueryContext(ctx, fmt.Sprintf(`
 	...
-`, typeList, typeList, typeList, typeList), systemID, path)
+`, typeList, typeList, typeList, typeList), queryArgs...)
 var tags []database.TagInfo
 for rows.Next() {
 	var tag database.TagInfo
 	if scanErr := rows.Scan(&tag.Type, &tag.Tag); scanErr != nil {
 		return nil, fmt.Errorf("failed to scan tag: %w", scanErr)
 	}
+	tag.Tag = dbtags.UnpadTagValue(tag.Tag)
 	tags = append(tags, tag)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/slug_cache.go` around lines 285 - 336, GetZapScriptTagsBySystemAndPath currently interpolates the mutable variable database.ZapScriptTagTypes into SQL (remove the incorrect //nolint:gosec) and returns padded tag values without unpadding; fix by building a parameterized IN-list of placeholders for the tag types, append those type values to the QueryContext args (after systemID and path) and call the query with placeholders instead of fmt.Sprintf, and ensure you unpad tag.Tag before constructing the returned TagInfo (use the existing unpad helper used elsewhere, e.g., UnpadTag or the project equivalent) so returned tags match other read paths.
pkg/database/tags/filename_parser.go (1)
608-632: ⚠️ Potential issue | 🟠 Major: Carry `mediaType` through all media-specific parsers.
`ParseFilenameToCanonicalTagsForMedia(..., slugs.MediaTypeGame)` still extracts `S01E02` season/episode tags, and non-game media can still map `[T+Eng]` as a ROM translation because bracket parsing has no media-type context.

Suggested fix direction
```diff
 type ParseContext struct {
 	Filename   string
 	CurrentTag string
+	MediaType  slugs.MediaType
 	ParenTags     []string
 	BracketTags   []string
 	ProcessedTags []CanonicalTag
@@
-	if seasonStr, episodeStr, found := findSeasonEpisode(remaining); found {
+	if mediaType == "" || mediaType == slugs.MediaTypeTVShow {
+		if seasonStr, episodeStr, found := findSeasonEpisode(remaining); found {
 			season := strings.TrimLeft(seasonStr, "0")
 			if season == "" {
 				season = "0"
 			}
 			episode := strings.TrimLeft(episodeStr, "0")
 			if episode == "" {
 				episode = "0"
 			}
 			tags = append(tags,
 				CanonicalTag{Type: TagTypeSeason, Value: TagValue(season), Source: TagSourceInferred},
 				CanonicalTag{Type: TagTypeEpisode, Value: TagValue(episode), Source: TagSourceInferred},
 			)
 			remaining = removeSeasonEpisode(remaining)
+		}
 	}
@@
 	if ctx.CurrentBracketType == BracketTypeSquare {
-		return mapBracketTag(ctx.CurrentTag)
+		return mapBracketTag(ctx.CurrentTag, ctx.MediaType)
 	}
@@
-func mapBracketTag(tag string) []CanonicalTag {
+func mapBracketTag(tag string, mediaType slugs.MediaType) []CanonicalTag {
 	// Check for translation patterns first (T+Eng, T-Chi, etc.)
 	// This must come before normalization to preserve the T+/T- prefix
-	if transTags, matched := parseTranslationPattern(tag, TagSourceBracketed, true); matched {
-		return transTags
+	if mediaType == "" || mediaType == slugs.MediaTypeGame {
+		if transTags, matched := parseTranslationPattern(tag, TagSourceBracketed, true); matched {
+			return transTags
+		}
 	}
@@
 	ctx := &ParseContext{
 		Filename:      filename,
+		MediaType:     mediaType,
 		ParenTags:     parenTags,
 		BracketTags:   bracketTags,
 		ProcessedTags: allTags,
```

Also applies to: 1009-1013, 1319-1324
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/tags/filename_parser.go` around lines 608 - 632, The mediaType argument is not being propagated into media-specific parsing steps so ParseFilenameToCanonicalTagsForMedia (and its callers) still runs season/episode extraction (findSeasonEpisode/removeSeasonEpisode) and generic bracket parsing that treats tags like "[T+Eng]" as ROM translations for games; update the parsing flow to pass the mediaType into every media-specific parser and into the functions that extract/remove season/episode and parse bracketed tags, then short-circuit or change behavior when mediaType == slugs.MediaTypeGame (skip SxxExx extraction and do not map bracket tokens to ROM translation for games) so game filenames no longer get TV/ROM tags.
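The gating pattern the comment asks for can be reduced to a small standalone sketch: thread the media type into each extraction step and skip steps that don't apply. The `MediaType` constants and `extractSeasonEpisode` helper below are illustrative stand-ins for `slugs.MediaType` and the real `findSeasonEpisode` plumbing, not the project code.

```go
package main

import (
	"fmt"
	"regexp"
)

// MediaType stands in for slugs.MediaType; the values are assumptions.
type MediaType string

const (
	MediaTypeGame   MediaType = "game"
	MediaTypeTVShow MediaType = "tvshow"
)

var seasonEpisodeRe = regexp.MustCompile(`(?i)\bS(\d{1,2})E(\d{1,2})\b`)

// extractSeasonEpisode only fires for TV (or unknown) media, so a game
// filename containing "S01E02" is never tagged season/episode.
func extractSeasonEpisode(name string, mt MediaType) (season, episode string, ok bool) {
	if mt != "" && mt != MediaTypeTVShow {
		return "", "", false
	}
	m := seasonEpisodeRe.FindStringSubmatch(name)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	s, e, ok := extractSeasonEpisode("Show S01E02", MediaTypeTVShow)
	fmt.Println(s, e, ok) // 01 02 true
	_, _, ok = extractSeasonEpisode("Mega Man S01E02 Hack", MediaTypeGame)
	fmt.Println(ok) // false
}
```

The same shape applies in reverse for bracketed `[T+Eng]` translation tags: run that parse only when the media type is game or unknown.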
♻️ Duplicate comments (2)
pkg/helpers/paths.go (1)
263-280: ⚠️ Potential issue | 🟠 Major: Use `filepath.Join` before caching normalized prefixes.
This still builds cached prefixes with manual `"/"` concatenation. Roots like `/` or launcher folders like `.` produce prefixes such as `//nes` or `/roms/.`, which can fail `pathHasPrefixNormalized` even though `filepath.Join` would match correctly.

Suggested fix
```diff
-	normDataDir := NormalizePathForComparison(DataDir(pl))
-	normMediaPrefix := normDataDir + "/" + config.MediaDir + "/"
+	normDataDir := NormalizePathForComparison(DataDir(pl))
+	normMediaPrefix := NormalizePathForComparison(filepath.Join(normDataDir, config.MediaDir))
@@
 	if l.SystemID != "" {
-		lp.normMediaPath = normMediaPrefix + strings.ToLower(l.SystemID)
+		lp.normMediaPath = NormalizePathForComparison(filepath.Join(normMediaPrefix, l.SystemID))
 	}
@@
 	if !filepath.IsAbs(folder) {
 		normFolder := folderCache[folder]
-		lp.rootPairs = append(lp.rootPairs, normRoot+"/"+normFolder)
+		lp.rootPairs = append(lp.rootPairs, NormalizePathForComparison(filepath.Join(normRoot, normFolder)))
 	}
@@
 	} else if l.SystemID != "" {
-		normZaparooMedia := m.normDataDir + "/" + config.MediaDir + "/" + strings.ToLower(l.SystemID)
+		normZaparooMedia := NormalizePathForComparison(filepath.Join(m.normDataDir, config.MediaDir, l.SystemID))
 		if pathHasPrefixNormalized(normPath, normZaparooMedia) {
 			inDataDir = true
 		}
@@
 	normFolder := m.normFolderCache[folder]
-	normFull := normRoot + "/" + normFolder
+	normFull := NormalizePathForComparison(filepath.Join(normRoot, normFolder))
 	if pathHasPrefixNormalized(normPath, normFull) {
 		inRoot = true
 		break
 	}
```

Please add regression coverage for `/` roots and `.` launcher folders. As per coding guidelines, **/*.go: "Use `filepath.Join` for path construction everywhere, including test files — never hardcode POSIX-style paths like `/roms/snes/game.sfc` as string literals".
Also applies to: 355-388
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/helpers/paths.go` around lines 263 - 280, The prefix building uses manual "/" concatenation which yields bad prefixes (e.g., "//nes" or "/roms/.") and breaks pathHasPrefixNormalized; update the code that sets launcherPrecomp.normMediaPath and that appends to launcherPrecomp.rootPairs to use filepath.Join (e.g., join NormalizePathForComparison(DataDir(pl)), config.MediaDir, strings.ToLower(l.SystemID) and join normRoot with normFolder) before caching, and apply the same change in the other occurrence handling rootPairs; add regression tests exercising normRoots=["/"] and launcher Folders=["."] (and any edge cases) to ensure pathHasPrefixNormalized matches correctly.
pkg/zapscript/titles/strategies.go (1)
187-190: ⚠️ Potential issue | 🟠 Major: Propagate partial secondary-title lookup failures.
Line 190 turns a database failure into "no match", so title resolution can continue into later fallback strategies and potentially select the wrong media. `TrySecondaryTitleExact` already returns an `error`, and the caller wraps `strategyErr`.

Proposed fix
```diff
 partialResults, partialErr := gamesdb.SearchMediaBySecondarySlug(ctx, systemID, searchSlug, tagFilters)
 if partialErr != nil {
 	log.Warn().Err(partialErr).Msgf("partial secondary title search failed for '%s'", searchSlug)
-	return nil, "", nil
+	return nil, "", fmt.Errorf("failed to search partial secondary slug '%s': %w", searchSlug, partialErr)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/zapscript/titles/strategies.go` around lines 187 - 190, The code currently swallows DB errors from gamesdb.SearchMediaBySecondarySlug by logging and returning nil, "" and nil, which treats failures as “no match” and can cause incorrect fallbacks; instead propagate the error up from TrySecondaryTitleExact: when partialErr != nil return nil, "", partialErr (or wrap it with context) so the caller that aggregates strategyErr can distinguish DB failures from genuine no-match results; update the error return path in TrySecondaryTitleExact where partialResults, partialErr are handled to return the error rather than nil.
🧹 Nitpick comments (1)
pkg/database/mediadb/batch_inserter.go (1)
151-188: Consider bounding `stmtCache` growth for irregular flush patterns.
In the hot path (`Add` auto-flushing at `batchSize`), the cache stays at one entry, which is fine. However, callers invoking `Flush()` externally with varying `currentCount` values (e.g., time/size-triggered partial flushes) will accumulate one prepared `*sql.Stmt` per distinct row count, up to `batchSize` entries, each holding server-side statement resources on the transaction. Not exploitable, just worth guarding.
If you expect only two steady-state sizes (full batch + final remainder), an easy improvement is to cache only the full-`batchSize` statement and prepare-and-close ad hoc for anything smaller, mirroring what `executeChunk` already does.

♻️ Sketch
```diff
-	// Reuse cached statement for this row count when available
-	if stmt, ok := b.stmtCache[b.currentCount]; ok {
+	// Reuse cached statement only for the common full-batch size
+	if stmt, ok := b.stmtCache[b.currentCount]; ok {
 		_, err := stmt.ExecContext(b.ctx, b.buffer[:b.currentCount*b.columnCount]...)
 		...
 	}
 	...
-	// Cache for reuse across future flushes with the same row count
-	b.stmtCache[b.currentCount] = stmt
+	// Only cache the full batch-size statement; smaller partials are one-off
+	if b.currentCount == b.batchSize {
+		b.stmtCache[b.currentCount] = stmt
+	} else {
+		defer func() {
+			if cerr := stmt.Close(); cerr != nil {
+				log.Warn().Err(cerr).Msg("failed to close one-off batch statement")
+			}
+		}()
+	}
```
//nolint:sqlclosecheckon line 165 or narrowing its scope.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/batch_inserter.go` around lines 151 - 188, The stmtCache can grow unbounded for varying flush sizes; change Flush (the block using b.currentCount, b.stmtCache and tx.PrepareContext in batch_inserter.go) to only cache the prepared statement when b.currentCount == b.batchSize (the full batch used by Add), and for any smaller/ad-hoc currentCount prepare the statement, ExecContext it, then Close() the stmt immediately (mirroring executeChunk/flushChunked behavior); update usage of generateMultiRowInsertSQL, tx.PrepareContext and remove or narrow the //nolint:sqlclosecheck since non-cached stmts must be closed, and keep Close() responsible for closing only cached statements.
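The caching policy in the sketch above can be checked without a real database. The toy `inserter` below models only the decision logic (cache the full-batch statement, mark everything else one-off so the caller closes it); `stmt` is a hypothetical stand-in for `*sql.Stmt`, not the project's `batch_inserter` code.

```go
package main

import "fmt"

// stmt stands in for *sql.Stmt; rows records the statement's row count.
type stmt struct{ rows int }

type inserter struct {
	batchSize int
	cache     map[int]*stmt
	prepared  int // counts simulated PrepareContext calls
}

// stmtFor caches only the full-batch statement; any other row count is
// prepared ad hoc, and oneOff=true tells the caller to close it after use.
func (b *inserter) stmtFor(rows int) (s *stmt, oneOff bool) {
	if s, ok := b.cache[rows]; ok {
		return s, false
	}
	b.prepared++
	s = &stmt{rows: rows}
	if rows == b.batchSize {
		b.cache[rows] = s
		return s, false
	}
	return s, true
}

func main() {
	b := &inserter{batchSize: 100, cache: map[int]*stmt{}}
	// Two full batches, two irregular partial flushes, one more full batch.
	for _, n := range []int{100, 100, 7, 3, 100} {
		_, oneOff := b.stmtFor(n)
		fmt.Println(n, oneOff)
	}
	fmt.Println(len(b.cache), b.prepared) // 1 3: one cached stmt, two one-offs
}
```

With this policy the cache holds at most one entry regardless of how many distinct partial-flush sizes occur, which is the bounding property the nitpick asks for.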
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/telemetry/telemetry_test.go`:
- Around line 70-82: The Windows test expectations are over-redacting the path
by removing the "Users" path component; update the expected strings for the test
cases named "windows path", "windows path lowercase drive", and "windows path
different drive" so they preserve "\Users\" and only redact the username (e.g.
"C:\\Users\\<user>\\AppData\\Local\\zaparoo\\config.toml",
"c:\\Users\\<user>\\Documents\\zaparoo", "D:\\Users\\<user>\\zaparoo\\logs");
this aligns the Windows expectations with the Linux/macOS behavior exercised by
the sanitization logic tested in telemetry_test.go.
In `@pkg/database/mediadb/mediadb.go`:
- Around line 2131-2137: The page_prefetch wrapper around prefetchSearchPages is
currently swallowing all errors, turning context cancellation into a warning;
change the wrapper in the "page_prefetch" fn so that if prefetchSearchPages
returns a context cancellation (errors.Is(err, context.Canceled) or
errors.Is(err, context.DeadlineExceeded) or err == db.ctx.Err()) the wrapper
returns that error (propagating cancellation), but for other non-nil errors keep
the existing log-and-return-nil behavior so browse_cache still runs; reference
the "page_prefetch" wrapper and the prefetchSearchPages/sql_prefetch.go
cancellation behavior when making the change.
In `@pkg/database/tags/filename_parser.go`:
- Around line 75-88: The parser currently only applies editionQualifiers for
suffix/slugs, letting standalone tokens like "(GOTY)" or "(Game of the Year)"
fall through to looksLikeCompanyName and become credit tags; update the token
classification in the filename parsing path (the function that
iterates/normalizes tokens—look for calls to looksLikeCompanyName and usage of
editionQualifiers) to check if a token exactly matches a key in
editionQualifiers and emit the corresponding TagEdition* before calling
looksLikeCompanyName, and add unit tests (new *_test.go) asserting standalone
tokens such as "goty" and "game-of-the-year" produce edition tags (e.g.,
TagEditionGoty) rather than credit tags.
In `@pkg/database/tags/tag_mappings.go`:
- Around line 704-718: The language-to-tag mappings in tag_mappings.go currently
attach region tags for language-only aliases (e.g., the entries for "french",
"german", "spanish", "portuguese", etc.) which incorrectly infers a country;
update the alias map so those keys only contain the TagTypeLang entry (remove
the {Type: TagTypeRegion, ...} elements) and leave explicit country keys (e.g.,
"France", "Spain") unchanged; locate the map entries shown in the diff (the
alias -> []Tag mappings) and strip out region tag items for all pure-language
keys so only TagTypeLang values remain.
---
Outside diff comments:
In `@pkg/database/mediadb/slug_cache.go`:
- Around line 230-260: The SELECT scans currently read padded tag strings into
database.TagInfo.Tag (in the rows.Next/rows.Scan loop) and append them to
result.Tags, which leaks padded values; after scanning into tag.Tag call the
existing unpad helper (or add a small UnpadTag utility) to remove the padding
(e.g., convert "players:0002" → "players:2") before appending to result.Tags,
and apply the same unpadding change to the other read path mentioned (the block
at lines 346-352) so both query results and ZapScript/tag-cache consumers
receive unpadded tag values; reference rows.Scan, database.TagInfo.Tag,
result.Tags append, and sqlInsertTag* as the relevant symbols to change.
- Around line 285-336: GetZapScriptTagsBySystemAndPath currently interpolates
the mutable variable database.ZapScriptTagTypes into SQL (remove the incorrect
//nolint:gosec) and returns padded tag values without unpadding; fix by building
a parameterized IN-list of placeholders for the tag types, append those type
values to the QueryContext args (after systemID and path) and call the query
with placeholders instead of fmt.Sprintf, and ensure you unpad tag.Tag before
constructing the returned TagInfo (use the existing unpad helper used elsewhere,
e.g., UnpadTag or the project equivalent) so returned tags match other read
paths.
In `@pkg/database/mediadb/sql_tags.go`:
- Around line 128-150: The query in sql_tags.go uses DBID and ORs to match Tag
which allows cross-type collisions; change the SQL in PrepareContext to scope by
TypeDBID and match Tag values within that type (e.g. "WHERE TypeDBID = ? AND
(Tag = ? OR Tag = ?) LIMIT 1"), then update the parameters passed to
stmt.QueryRowContext to use the tag type identifier (replace tagType.DBID with
the correct TypeDBID field) followed by tagType.Tag and
tags.PadTagValue(tagType.Tag) so the lookup returns the tag row for the correct
tag type (adjust field name if your type id is named differently).
In `@pkg/database/tags/filename_parser.go`:
- Around line 608-632: The mediaType argument is not being propagated into
media-specific parsing steps so ParseFilenameToCanonicalTagsForMedia (and its
callers) still runs season/episode extraction
(findSeasonEpisode/removeSeasonEpisode) and generic bracket parsing that treats
tags like "[T+Eng]" as ROM translations for games; update the parsing flow to
pass the mediaType into every media-specific parser and into the functions that
extract/remove season/episode and parse bracketed tags, then short-circuit or
change behavior when mediaType == slugs.MediaTypeGame (skip SxxExx extraction
and do not map bracket tokens to ROM translation for games) so game filenames no
longer get TV/ROM tags.
---
Duplicate comments:
In `@pkg/helpers/paths.go`:
- Around line 263-280: The prefix building uses manual "/" concatenation which
yields bad prefixes (e.g., "//nes" or "/roms/.") and breaks
pathHasPrefixNormalized; update the code that sets launcherPrecomp.normMediaPath
and that appends to launcherPrecomp.rootPairs to use filepath.Join (e.g., join
NormalizePathForComparison(DataDir(pl)), config.MediaDir,
strings.ToLower(l.SystemID) and join normRoot with normFolder) before caching,
and apply the same change in the other occurrence handling rootPairs; add
regression tests exercising normRoots=["/"] and launcher Folders=["."] (and any
edge cases) to ensure pathHasPrefixNormalized matches correctly.
In `@pkg/zapscript/titles/strategies.go`:
- Around line 187-190: The code currently swallows DB errors from
gamesdb.SearchMediaBySecondarySlug by logging and returning nil, "" and nil,
which treats failures as “no match” and can cause incorrect fallbacks; instead
propagate the error up from TrySecondaryTitleExact: when partialErr != nil
return nil, "", partialErr (or wrap it with context) so the caller that
aggregates strategyErr can distinguish DB failures from genuine no-match
results; update the error return path in TrySecondaryTitleExact where
partialResults, partialErr are handled to return the error rather than nil.
---
Nitpick comments:
In `@pkg/database/mediadb/batch_inserter.go`:
- Around line 151-188: The stmtCache can grow unbounded for varying flush sizes;
change Flush (the block using b.currentCount, b.stmtCache and tx.PrepareContext
in batch_inserter.go) to only cache the prepared statement when b.currentCount
== b.batchSize (the full batch used by Add), and for any smaller/ad-hoc
currentCount prepare the statement, ExecContext it, then Close() the stmt
immediately (mirroring executeChunk/flushChunked behavior); update usage of
generateMultiRowInsertSQL, tx.PrepareContext and remove or narrow the
//nolint:sqlclosecheck since non-cached stmts must be closed, and keep Close()
responsible for closing only cached statements.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8072bb06-a14b-404e-8b21-7bb8be97d257
📒 Files selected for processing (29)
- docs/api/methods.md
- internal/telemetry/telemetry.go
- internal/telemetry/telemetry_test.go
- pkg/api/methods/media.go
- pkg/api/methods/media_search_test.go
- pkg/api/methods/media_tags_test.go
- pkg/database/filters/parser_test.go
- pkg/database/keys_test.go
- pkg/database/mediadb/batch_inserter.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/migrations/20260421120000_tag_counts.sql
- pkg/database/mediadb/slug_cache.go
- pkg/database/mediadb/sql_prefetch.go
- pkg/database/mediadb/sql_tags.go
- pkg/database/mediadb/tagfilter_test.go
- pkg/database/tags/filename_parser.go
- pkg/database/tags/filename_parser_test.go
- pkg/database/tags/storage.go
- pkg/database/tags/string_parsers.go
- pkg/database/tags/tag_mappings.go
- pkg/database/tags/tag_values.go
- pkg/database/tags/tags.go
- pkg/database/tags/tags_test.go
- pkg/helpers/paths.go
- pkg/helpers/paths_matcher_test.go
- pkg/readers/pn532/pn532.go
- pkg/readers/tty2oled/tty2oled.go
- pkg/zapscript/titles/strategies.go
- pkg/zapscript/titles/strategies_test.go
✅ Files skipped from review due to trivial changes (3)
- docs/api/methods.md
- pkg/database/tags/tags_test.go
- pkg/database/mediadb/migrations/20260421120000_tag_counts.sql
🚧 Files skipped from review as they are similar to previous changes (10)
- pkg/readers/pn532/pn532.go
- pkg/readers/tty2oled/tty2oled.go
- pkg/database/tags/storage.go
- pkg/database/filters/parser_test.go
- pkg/zapscript/titles/strategies_test.go
- pkg/database/mediadb/sql_prefetch.go
- internal/telemetry/telemetry.go
- pkg/api/methods/media_tags_test.go
- pkg/database/tags/filename_parser_test.go
- pkg/database/tags/tag_values.go
Codecov Report: ❌ Patch coverage is …
Actionable comments posted: 1
🧹 Nitpick comments (6)
internal/telemetry/telemetry_test.go (1)
126-229: Move `mockLevelWriter` to `pkg/testing/` for reuse across telemetry and logging tests.
`mockLevelWriter` is a generic mock for the `zerolog.LevelWriter` interface. No existing fixture for this interface exists in `pkg/testing/`. Moving this mock to the shared testing location enables future logging tests to reuse it, consistent with the guideline to use existing mocks from `pkg/testing/` rather than creating local duplicates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/telemetry/telemetry_test.go` around lines 126 - 229, Remove the local mockLevelWriter definition from telemetry_test.go and move it into the shared pkg/testing package as an exported MockLevelWriter that implements zerolog.LevelWriter (methods Write and WriteLevel); update telemetry tests to instantiate the shared MockLevelWriter (instead of the local mockLevelWriter) when constructing thresholdWriter and when setting expectations on WriteLevel/Write; ensure the moved struct and its methods remain compatible with calls in TestThresholdWriter, TestThresholdWriter's uses of thresholdWriter, WriteLevel, and Write still compile.
pkg/database/mediadb/slug_cache_test.go (1)
632-958: Use `filepath.Join` for hardcoded paths in the new tests.
Lines 663, 927 and 940 use hardcoded POSIX-style literals (`/roms/nes/numeric.nes`, `/roms/nes/players2.nes`, `/roms/nes/players4.nes`). The repository convention calls for `filepath.Join` everywhere, including tests. (The rest of the file already does this pervasively in existing tests, but since you're adding new code, please bring the new tests in line.)

♻️ Suggested adjustment
```diff
-	Path: "/roms/nes/numeric.nes",
+	Path: filepath.Join("/", "roms", "nes", "numeric.nes"),
 ...
-	Path: "/roms/nes/players2.nes",
+	Path: filepath.Join("/", "roms", "nes", "players2.nes"),
 ...
-	Path: "/roms/nes/players4.nes",
+	Path: filepath.Join("/", "roms", "nes", "players4.nes"),
 ...
-	resultTags, err := mediaDB.GetZapScriptTagsBySystemAndPath(ctx, nesSystem.ID, "/roms/nes/players2.nes")
+	resultTags, err := mediaDB.GetZapScriptTagsBySystemAndPath(ctx, nesSystem.ID, filepath.Join("/", "roms", "nes", "players2.nes"))
```
"path/filepath"to the import block.As per coding guidelines: "Use
filepath.Joinfor path construction everywhere, including test files — never hardcode POSIX-style paths like/roms/snes/game.sfcas string literals".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/slug_cache_test.go` around lines 632 - 958, Replace hardcoded POSIX path literals with filepath.Join in the new tests to follow repo conventions: in TestGetMediaByDBID_UnpadsNumericTags_Integration replace "/roms/nes/numeric.nes" (used when constructing media.Path) with filepath.Join("roms", "nes", "numeric.nes"); in TestGetZapScriptTagsBySystemAndPath_UnpadsNumericTags_Integration replace "/roms/nes/players2.nes" and "/roms/nes/players4.nes" (media.Path and GetZapScriptTagsBySystemAndPath calls) with filepath.Join("roms", "nes", "players2.nes") and filepath.Join("roms", "nes", "players4.nes") respectively; also add the "path/filepath" import to the test file import block so the tests compile.
pkg/database/mediadb/slug_cache.go (1)
355-368: Minor: returning `nil` instead of an empty slice on no-results.
Changing the accumulator from a pre-allocated slice to `var resultTags []database.TagInfo` means `GetZapScriptTagsBySystemAndPath` now returns `nil` (not `[]`) when no disambiguating tags exist. Callers using `len(...)` or `range` are unaffected, but if this value ever flows into a JSON response it will serialize as `null` instead of `[]`, which can be a subtle API surface change. Consider pre-allocating to keep the previous "always non-nil" contract.

♻️ Proposed tweak
```diff
-	var resultTags []database.TagInfo
+	resultTags := make([]database.TagInfo, 0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/database/mediadb/slug_cache.go` around lines 355 - 368, GetZapScriptTagsBySystemAndPath currently declares resultTags as a nil slice which causes callers to receive null when JSON-encoded; change the accumulator to a non-nil empty slice (e.g., initialize resultTags as an empty slice with make([]database.TagInfo, 0)) before the rows loop so the function returns [] instead of nil when there are no tags, leaving the rest of the scanning and error handling (rows.Scan, tags.UnpadTagValue, rows.Err()) unchanged.
pkg/zapscript/titles/strategies_test.go (1)
327-346: Tighten the DB-error assertion to match sibling test.
The updated "partial search returns error" case in `TestTrySecondaryTitleExact` (Lines 547–566) now uses `ErrorIs` + `ErrorContains` to verify the wrapped error is preserved. This new "DB search error — propagates to caller" case only checks `require.Error`, so it wouldn't catch a regression where the error is replaced or unwrapped. Consider mirroring the pattern so both error-propagation tests assert wrapping.

Proposed fix
```diff
 {
 	name:      "DB search error — propagates to caller",
 	matchInfo: GameMatchInfo{ ... },
 	slug:      "touhou06embodimentofscarletdevil",
 	systemID:  "PC",
 	setupMock: func(m *helpers.MockMediaDBI) {
 		m.On("SearchMediaBySecondarySlug", mock.Anything, "PC", "embodimentofscarletdevil", []zapscript.TagFilter(nil)).
-			Return([]database.SearchResultWithCursor{}, errors.New("database error"))
+			Return([]database.SearchResultWithCursor{}, sharedSecondarySearchErr)
 	},
 	...
+	expectedErr: sharedSecondarySearchErr,
 	shouldError: true,
 },
```
ErrorIs/ErrorContainsassertions (checking for"SearchMediaBySecondarySlug") in the result block.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/zapscript/titles/strategies_test.go` around lines 327 - 346, In the "DB search error — propagates to caller" case inside TestTrySecondaryTitleExact, replace the loose require.Error check with the same tightened assertions used in the sibling "partial search returns error" case: assert the returned error wraps the original using require.ErrorIs(err, errors.New("database error")) (or the actual sentinel if available) and assert the error message contains the function name "SearchMediaBySecondarySlug" using require.ErrorContains; this ensures the DB error is preserved and wrapped rather than replaced.pkg/zapscript/titles/strategies.go (2)
150-217: Asymmetric error handling between Case 1 and Case 2.
Case 2 now correctly propagates partial-search errors (Line 190), but Case 1 (Lines 153–156) still logs and swallows `SearchMediaBySlug` failures. With the new guard `if !matchInfo.HasSecondaryTitle` gating Case 2, when the input has a secondary title and the exact search errors, the function returns `nil, "", nil` at Line 216, silently hiding DB outages from `ResolveTitle`. Consider propagating Case 1 errors too for symmetric behavior.

Proposed fix
```diff
 exactResults, exactErr := gamesdb.SearchMediaBySlug(ctx, systemID, searchSlug, tagFilters)
 if exactErr != nil {
-	log.Warn().Err(exactErr).Msgf("exact secondary title search failed for '%s'", searchSlug)
-} else if len(exactResults) > 0 {
+	log.Warn().Err(exactErr).Msgf("exact secondary title search failed for '%s'", searchSlug)
+	return nil, "", fmt.Errorf("exact secondary title search failed: %w", exactErr)
+}
+if len(exactResults) > 0 {
 	// Post-filter: only keep DB entries WITHOUT secondary title
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/zapscript/titles/strategies.go` around lines 150 - 217, The exact-search branch currently logs and swallows errors from gamesdb.SearchMediaBySlug (exactErr) causing asymmetric behavior with the partial branch; update the exact branch that handles exactResults/exactErr to propagate the error back (e.g., after logging, return nil, "", fmt.Errorf("exact secondary title search failed: %w", exactErr)) so SearchMediaBySlug failures surface to ResolveTitle; modify the block around exactResults/exactErr (the SearchMediaBySlug call and its error handling) to mirror the partial-case error propagation while keeping the existing logging and post-filter logic.
262-296: Hoist the `queryFirstToken == ""` guard out of the loop.
`queryFirstToken` is computed once before the loop (Line 266) and the empty-string check on Lines 283–285 is loop-invariant: if it's empty, every iteration does filter work only to `continue`. Short-circuit before the loop for clarity and to avoid needlessly calling `GenerateMatchInfo` on every result.

Proposed fix
```diff
 queryFirstToken := firstMainTitleToken(mediaType, matchInfo.OriginalInput)
+if queryFirstToken == "" {
+	return nil, "", nil
+}
 var filtered []database.SearchResultWithCursor
 for _, result := range results {
 	dbMatchInfo := GenerateMatchInfo(mediaType, result.Name)
 	// Require DB entry also has a secondary title with the same slug.
 	if !dbMatchInfo.HasSecondaryTitle || dbMatchInfo.SecondaryTitleSlug != matchInfo.SecondaryTitleSlug {
 		continue
 	}
 	// Skip entries where the full slug is identical — those are exact matches handled earlier.
 	if dbMatchInfo.CanonicalSlug == matchInfo.CanonicalSlug {
 		continue
 	}
-	// Guard: main titles must share a first token to prevent cross-franchise collisions.
-	if queryFirstToken == "" {
-		continue
-	}
 	dbFirstToken := firstMainTitleToken(mediaType, dbMatchInfo.OriginalInput)
 	if dbFirstToken != queryFirstToken {
 		continue
 	}
```
Verify each finding against the current code and only fix it if needed. In `@pkg/zapscript/titles/strategies.go` around lines 262 - 296, The loop currently recomputes GenerateMatchInfo for every result even when queryFirstToken is empty; hoist the loop-invariant guard by checking queryFirstToken (computed via firstMainTitleToken(mediaType, matchInfo.OriginalInput)) before iterating results and return nil, "", nil if it's empty, and remove the in-loop `if queryFirstToken == "" { continue }` check so the loop only processes entries when a valid first token exists.
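The loop-invariant hoist described above is a general pattern worth a tiny standalone sketch. Here `expensiveMatch` is a hypothetical stand-in for the per-result `GenerateMatchInfo` work; hoisting the empty-token guard means zero per-result calls when the query token is empty.

```go
package main

import "fmt"

// expensiveMatch stands in for per-result work like GenerateMatchInfo.
func expensiveMatch(s string) bool {
	return len(s) > 3
}

// filterHoisted checks the loop-invariant guard once, before the loop,
// so an empty query token short-circuits without touching any result.
func filterHoisted(queryToken string, results []string) []string {
	if queryToken == "" {
		return nil
	}
	var out []string
	for _, r := range results {
		if expensiveMatch(r) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	fmt.Println(filterHoisted("", []string{"alpha", "beta"}))     // []
	fmt.Println(filterHoisted("touhou", []string{"alpha", "io"})) // [alpha]
}
```

Behavior is unchanged for non-empty tokens; the win is clarity plus skipping the per-iteration work in the degenerate case.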
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/database/tags/filename_parser.go`:
- Around line 1014-1021: The media-type check in mapBracketTag is inverted: it
currently skips parsing bracketed ROM translation tags for slugs.MediaTypeGame
and allows them for non-game types; flip the condition so
parseTranslationPattern(tag, TagSourceBracketed, true) runs for MediaTypeGame
(and is skipped for non-game) to match the bracketless logic in
filename_parser.go (and keep TagSourceBracketed usage). Also update the
corresponding expectations in pkg/database/tags/filename_parser_test.go so game
media asserts that bracketed ROM translations are parsed and non-game media
asserts they are skipped.
---
Nitpick comments:
In `@internal/telemetry/telemetry_test.go`:
- Around line 126-229: Remove the local mockLevelWriter definition from
telemetry_test.go and move it into the shared pkg/testing package as an exported
MockLevelWriter that implements zerolog.LevelWriter (methods Write and
WriteLevel); update telemetry tests to instantiate the shared MockLevelWriter
(instead of the local mockLevelWriter) when constructing thresholdWriter and
when setting expectations on WriteLevel/Write; ensure the moved struct and its
methods remain compatible with the existing calls in TestThresholdWriter, so
that its uses of thresholdWriter, WriteLevel, and Write still compile.
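A shared mock along these lines could look like the sketch below. To keep it self-contained it mirrors zerolog's `LevelWriter` interface (`io.Writer` plus `WriteLevel`) with local types; the real helper in `pkg/testing` would implement `zerolog.LevelWriter` directly, and the `MockLevelWriter` name is the one suggested by the review, not an existing export.

```go
package main

import (
	"fmt"
	"io"
)

// Local mirror of zerolog's Level and LevelWriter so this sketch compiles
// without the dependency; the real helper would use zerolog's types.
type Level int8

type LevelWriter interface {
	io.Writer
	WriteLevel(level Level, p []byte) (n int, err error)
}

// MockLevelWriter records every write so tests can assert which payloads and
// levels reached the underlying writer.
type MockLevelWriter struct {
	Writes []string
	Levels []Level
}

func (m *MockLevelWriter) Write(p []byte) (int, error) {
	m.Writes = append(m.Writes, string(p))
	return len(p), nil
}

func (m *MockLevelWriter) WriteLevel(level Level, p []byte) (int, error) {
	m.Levels = append(m.Levels, level)
	return m.Write(p)
}

var _ LevelWriter = (*MockLevelWriter)(nil) // compile-time interface check

func main() {
	m := &MockLevelWriter{}
	_, _ = m.WriteLevel(3, []byte("error event")) // 3 mirrors zerolog.ErrorLevel
	fmt.Println(len(m.Writes), m.Levels[0])
}
```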
In `@pkg/database/mediadb/slug_cache_test.go`:
- Around line 632-958: Replace hardcoded POSIX path literals with filepath.Join
in the new tests to follow repo conventions: in
TestGetMediaByDBID_UnpadsNumericTags_Integration replace "/roms/nes/numeric.nes"
(used when constructing media.Path) with filepath.Join("roms", "nes",
"numeric.nes"); in
TestGetZapScriptTagsBySystemAndPath_UnpadsNumericTags_Integration replace
"/roms/nes/players2.nes" and "/roms/nes/players4.nes" (media.Path and
GetZapScriptTagsBySystemAndPath calls) with filepath.Join("roms", "nes",
"players2.nes") and filepath.Join("roms", "nes", "players4.nes") respectively;
also add the "path/filepath" import to the test file import block so the tests
compile.
In `@pkg/database/mediadb/slug_cache.go`:
- Around line 355-368: GetZapScriptTagsBySystemAndPath currently declares
resultTags as a nil slice which causes callers to receive null when
JSON-encoded; change the accumulator to a non-nil empty slice (e.g., initialize
resultTags as an empty slice with make([]database.TagInfo, 0)) before the rows
loop so the function returns [] instead of nil when there are no tags, leaving
the rest of the scanning and error handling (rows.Scan, tags.UnpadTagValue,
rows.Err()) unchanged.
In `@pkg/zapscript/titles/strategies_test.go`:
- Around line 327-346: In the "DB search error — propagates to caller" case
inside TestTrySecondaryTitleExact, replace the loose require.Error check with
the same tightened assertions used in the sibling "partial search returns error"
case: assert the returned error wraps the original using require.ErrorIs(err,
errors.New("database error")) (or the actual sentinel if available) and assert
the error message contains the function name "SearchMediaBySecondarySlug" using
require.ErrorContains; this ensures the DB error is preserved and wrapped rather
than replaced.
In `@pkg/zapscript/titles/strategies.go`:
- Around line 150-217: The exact-search branch currently logs and swallows
errors from gamesdb.SearchMediaBySlug (exactErr) causing asymmetric behavior
with the partial branch; update the exact branch that handles
exactResults/exactErr to propagate the error back (e.g., after logging, return
nil, "", fmt.Errorf("exact secondary title search failed: %w", exactErr)) so
SearchMediaBySlug failures surface to ResolveTitle; modify the block around
exactResults/exactErr (the SearchMediaBySlug call and its error handling) to
mirror the partial-case error propagation while keeping the existing logging and
post-filter logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: aa110a8e-a45d-40fb-afb8-8b59d8bc69a2
📒 Files selected for processing (17)
- internal/telemetry/telemetry.go
- internal/telemetry/telemetry_test.go
- pkg/database/mediadb/batch_inserter.go
- pkg/database/mediadb/batch_inserter_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/optimization_test.go
- pkg/database/mediadb/slug_cache.go
- pkg/database/mediadb/slug_cache_test.go
- pkg/database/mediadb/sql_tags.go
- pkg/database/mediadb/sql_test.go
- pkg/database/tags/filename_parser.go
- pkg/database/tags/filename_parser_test.go
- pkg/database/tags/tag_mappings.go
- pkg/helpers/paths.go
- pkg/helpers/paths_matcher_test.go
- pkg/zapscript/titles/strategies.go
- pkg/zapscript/titles/strategies_test.go
🚧 Files skipped from review as they are similar to previous changes (4)
- pkg/helpers/paths_matcher_test.go
- pkg/database/mediadb/mediadb.go
- pkg/database/mediadb/sql_test.go
- pkg/database/mediadb/optimization_test.go
Summary by CodeRabbit

Release Notes

New Features
- The media.tags API returns per-tag usage counts and supports a per-category result cap.
- New PhaseCreatingIndexes and PhaseBuildingCaches sub-phases, so clients show "Creating indexes" and "Building search caches" instead of a single long "Writing database" step.
- ParseFilenameToCanonicalTagsForMedia accepts a MediaType and skips patterns irrelevant to the given type.

Performance Improvements
- LauncherMatcher precomputes per-launcher normalized paths and extensions at construction time.
- Fewer archive/zip per-entry allocations.
- BatchInserter statement cache changed to map[int]*sql.Stmt keyed by row count, for reuse across batch sizes.
- thresholdWriter avoids JSON parse overhead on sub-error log calls.

Bug Fixes
- (Disc N) bracket tokens now produce disc:N tags; previously the parenthesised form was silently dropped.
- T.V. and U.S.A. are no longer converted to digits during slug normalization.
- Mixed-case names (e.g. GameBoy) are now normalized correctly in the slug pipeline.

Improvements
- defaultFsTimeout raised from 10s to 30s for slow SD cards on ARM devices.
- Changes to connectReaders and AutoDetector.DetectReaders.
- tty2oled defers its operationWorker goroutine start to Open, so detection probes that never open don't leak goroutines.
- ErrNoDevicesFound is logged at Trace level and unexpected errors at Warn level.
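The storage-only numeric padding described in this PR (e.g. disc:1 stored as disc:0001 so SQLite's lexicographic ORDER BY matches numeric order) can be sketched as below. This is a minimal version; the real PadTagValue/UnpadTagValue in pkg/database/tags may handle more cases.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// isNumeric reports whether s is non-empty and consists only of digits.
func isNumeric(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		if !unicode.IsDigit(r) {
			return false
		}
	}
	return true
}

// PadTagValue zero-pads purely-numeric values to width 4 at DB write time,
// so "disc:2" sorts before "disc:10" under lexicographic ORDER BY.
func PadTagValue(v string) string {
	if isNumeric(v) && len(v) < 4 {
		return strings.Repeat("0", 4-len(v)) + v
	}
	return v
}

// UnpadTagValue restores the natural form at read time; non-numeric values
// pass through untouched, and all-zero values collapse to "0".
func UnpadTagValue(v string) string {
	if isNumeric(v) {
		if t := strings.TrimLeft(v, "0"); t != "" {
			return t
		}
		return "0"
	}
	return v
}

func main() {
	fmt.Println(PadTagValue("1"))      // 0001
	fmt.Println(PadTagValue("beta"))   // beta (non-numeric: unchanged)
	fmt.Println(UnpadTagValue("0010")) // 10
}
```

Round-tripping holds for natural-form inputs: `UnpadTagValue(PadTagValue(v)) == v` for any value without leading zeros.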