fix(desktop): add size rotation and retention policy to debug logger (#494)

HiddenPuppy wants to merge 1 commit
Conversation
- Add maxFileSize option (default 50MB) for daily file rotation
- Add retentionDays option (default 14) for old file cleanup
- Update CreateDebugLoggerOptions interface
- Add unit tests for rotation and cleanup behavior

Closes #411
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the desktop debug logger by introducing log rotation and retention management. These changes prevent log files from growing indefinitely and ensure that old logs are automatically pruned, improving disk-space usage and maintainability without introducing external dependencies.
Code Review
This pull request implements log rotation and retention for the debug logger, adding support for file size limits and automatic purging of logs older than a specified retention period. The review feedback identifies a potential race condition on Windows where renaming a file immediately after closing an asynchronous write stream may fail due to file locks. Additionally, it is recommended to wrap the synchronous cleanup logic in a try-catch block to prevent application crashes during initialization and to optimize the cleanup process by using lexicographical string comparisons instead of repeated Date object instantiation.
```typescript
writeStream.end();
writeStream = null;

let rotIdx = 1;
while (existsSync(join(debugDir, `debug-${today}.${rotIdx}.ndjson`))) {
  rotIdx++;
}
renameSync(currentFilePath(), join(debugDir, `debug-${today}.${rotIdx}.ndjson`));
```
The writeStream.end() call is asynchronous. On Windows, calling renameSync immediately after end() will likely fail with an EBUSY or EPERM error because the file descriptor is still held open by the stream until it finishes flushing its internal buffer.
Since this is a synchronous logging flow, you should consider using fs.appendFileSync for the entire logger to simplify rotation, or you must ensure the stream is fully closed before renaming. However, WriteStream does not provide a synchronous close method. A common workaround in Electron apps for simple debug logging is to avoid WriteStream and use manual file descriptor management with fs.writeSync and fs.closeSync.
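A minimal sketch of that fd-based alternative (the paths, helper names, and `mkdtempSync` scaffolding below are illustrative, not the PR's code):

```typescript
// Sketch: synchronous logging through a raw file descriptor instead of a
// WriteStream. closeSync fully releases the fd before renameSync runs, which
// avoids the EBUSY/EPERM race that WriteStream.end() can cause on Windows.
import { openSync, writeSync, closeSync, renameSync, readFileSync, mkdtempSync } from "fs";
import { join } from "path";
import { tmpdir } from "os";

const dir = mkdtempSync(join(tmpdir(), "debug-logger-"));
const activePath = join(dir, "debug-2026-05-15.ndjson");
let fd: number | null = openSync(activePath, "a");

function writeLine(record: unknown): void {
  if (fd === null) fd = openSync(activePath, "a");
  writeSync(fd, JSON.stringify(record) + "\n");
}

function rotateTo(rotatedPath: string): void {
  if (fd !== null) {
    closeSync(fd); // synchronous close: the fd is released before the rename
    fd = null;
  }
  renameSync(activePath, rotatedPath);
}

writeLine({ level: "info", msg: "hello" });
rotateTo(join(dir, "debug-2026-05-15.1.ndjson"));
```

The trade-off versus a WriteStream is losing internal buffering, which is usually acceptable for a low-volume debug logger.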
```typescript
function cleanupOldLogs(dir: string, retentionDays: number): void {
  const cutoff = Date.now() - retentionDays * 24 * 60 * 60 * 1000;
  const entries = readdirSync(dir, { withFileTypes: true });
  for (const entry of entries) {
    if (!entry.isFile()) continue;
    const match = entry.name.match(/^debug-(\d{4}-\d{2}-\d{2})(?:\.\d+)?\.ndjson$/);
    if (!match) continue;
    const fileDate = new Date(match[1]).getTime();
    if (fileDate < cutoff) {
      unlinkSync(join(dir, entry.name));
    }
  }
}
```
The cleanupOldLogs function performs synchronous I/O operations (readdirSync, unlinkSync) on the main thread during initialization. If the log directory contains a very large number of files or if there are permission issues, this could block the Electron main process and delay app startup. It is recommended to wrap these operations in a try...catch block to prevent a minor cleanup failure from crashing the entire application.
```typescript
function cleanupOldLogs(dir: string, retentionDays: number): void {
  try {
    const cutoff = Date.now() - retentionDays * 24 * 60 * 60 * 1000;
    const entries = readdirSync(dir, { withFileTypes: true });
    for (const entry of entries) {
      if (!entry.isFile()) continue;
      const match = entry.name.match(/^debug-(\d{4}-\d{2}-\d{2})(?:\.\d+)?\.ndjson$/);
      if (!match) continue;
      const fileDate = new Date(match[1]).getTime();
      if (fileDate < cutoff) {
        try {
          unlinkSync(join(dir, entry.name));
        } catch (err) {
          console.error("[debug] failed to delete old log " + entry.name + ":", err);
        }
      }
    }
  } catch (err) {
    console.error("[debug] log cleanup failed:", err);
  }
}
```

```typescript
const fileDate = new Date(match[1]).getTime();
if (fileDate < cutoff) {
```
Creating a new Date object for every file in the directory is inefficient. Since the log filenames use the YYYY-MM-DD format, you can perform a lexicographical string comparison instead. This is faster and avoids potential timezone edge cases during date parsing.
```diff
- const fileDate = new Date(match[1]).getTime();
- if (fileDate < cutoff) {
+ const cutoffDate = new Date(Date.now() - retentionDays * 24 * 60 * 60 * 1000).toISOString().slice(0, 10);
+ if (match[1] < cutoffDate) {
```
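To illustrate the string comparison at work, here is a small standalone check (the dates and filenames are hardcoded examples, not the PR's data):

```typescript
// Demo: lexicographic YYYY-MM-DD comparison against a cutoff string. A fixed
// "now" keeps the example deterministic; real code would use Date.now().
const retentionDays = 14;
const now = Date.parse("2026-05-15T12:00:00Z");
const cutoffDate = new Date(now - retentionDays * 24 * 60 * 60 * 1000)
  .toISOString()
  .slice(0, 10); // "2026-05-01"

const names = [
  "debug-2026-04-30.ndjson",   // older than cutoff -> expired
  "debug-2026-05-01.ndjson",   // exactly at cutoff -> kept
  "debug-2026-05-14.2.ndjson", // recent rotated file -> kept
];
const expired = names.filter((name) => {
  const m = name.match(/^debug-(\d{4}-\d{2}-\d{2})(?:\.\d+)?\.ndjson$/);
  return m !== null && m[1] < cutoffDate;
});
```

Because `YYYY-MM-DD` sorts lexicographically in the same order as chronologically, no per-file `Date` parsing is needed.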
mvanhorn left a comment
@HiddenPuppy nice work on this, especially the 247-line test file. Two of gemini's three points are worth fixing; the third is a nit.
- (high, logger.ts:125) - Windows rename race is real. `writeStream.end()` is async and the fd stays open until flush completes; `renameSync` immediately after will throw EBUSY/EPERM on Windows. Ubuntu CI won't catch it because POSIX permits renaming open files. Simplest fix: drop the rename entirely. When rotation triggers, just open the next `.N.ndjson` slot and write there - leave the current file as-is. The naming already supports this (`debug-YYYY-MM-DD.ndjson`, `.1.ndjson`, `.2.ndjson`, ...) and natural sort reads them back in order.
- (medium, logger.ts:225) - Agree, wrap `cleanupOldLogs` in try/catch. A perm or race issue during init shouldn't crash the main process. A per-`unlinkSync` try/catch is also worth it so one stuck file doesn't abort the loop.
- (medium, logger.ts:221) - This is a nit. The lex YYYY-MM-DD comparison is a perf win at very large file counts and avoids the (minor) `new Date('2026-05-15')` UTC-midnight interpretation, but the current code isn't broken. Take it or leave it.
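The rename-free rotation suggested in the first point could be sketched like this (a minimal illustration; `activeLogPath`, the fixed date, and the tiny size limit are assumptions for the demo, not the PR's implementation):

```typescript
// Sketch: rotate by opening the next free slot instead of renaming. The full
// file stays where it is, so there is no fd/rename race on Windows.
import { existsSync, statSync, appendFileSync, mkdtempSync } from "fs";
import { join } from "path";
import { tmpdir } from "os";

const debugDir = mkdtempSync(join(tmpdir(), "debug-rotate-"));
const today = "2026-05-15"; // fixed date for the demo

// Walk debug-YYYY-MM-DD.ndjson, .1.ndjson, .2.ndjson... and return the first
// path that is either missing or still under maxFileSize.
function activeLogPath(maxFileSize: number): string {
  let path = join(debugDir, `debug-${today}.ndjson`);
  let n = 1;
  while (existsSync(path) && statSync(path).size >= maxFileSize) {
    path = join(debugDir, `debug-${today}.${n}.ndjson`);
    n++;
  }
  return path;
}

// A 16-byte limit makes the demo rotate after a single write.
appendFileSync(activeLogPath(16), '{"msg":"first"}\n');  // base file
appendFileSync(activeLogPath(16), '{"msg":"second"}\n'); // .1.ndjson slot
```

Readers that aggregate logs can glob all `debug-YYYY-MM-DD*.ndjson` files and natural-sort them, exactly as the comment describes.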
CI is fully green here unlike #491, which is the main reason I'd land this once 1+2 are addressed.
Closes #411

Changes

- `CreateDebugLoggerOptions` — added optional `maxFileSize` (default 50 MB) and `retentionDays` (default 14).
- Once the day's file exceeds `maxFileSize`, subsequent writes go to `debug-YYYY-MM-DD.1.ndjson`, `.2.ndjson`, etc.
- Files older than `retentionDays` are purged from `debugDir`.
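Based on that description, the options surface might be approximated as below. Only `maxFileSize`, `retentionDays`, and their defaults come from the PR; the `debugDir` field and the `withDefaults` helper are assumptions for illustration.

```typescript
// Approximation of CreateDebugLoggerOptions as described in the PR summary.
// Not checked against the repository's actual source.
interface CreateDebugLoggerOptions {
  debugDir: string;       // directory holding the ndjson log files (assumed)
  maxFileSize?: number;   // bytes; PR states a 50 MB default
  retentionDays?: number; // PR states a default of 14
}

const DEFAULT_MAX_FILE_SIZE = 50 * 1024 * 1024;
const DEFAULT_RETENTION_DAYS = 14;

// Hypothetical helper showing how the stated defaults would be applied.
function withDefaults(opts: CreateDebugLoggerOptions): Required<CreateDebugLoggerOptions> {
  return {
    debugDir: opts.debugDir,
    maxFileSize: opts.maxFileSize ?? DEFAULT_MAX_FILE_SIZE,
    retentionDays: opts.retentionDays ?? DEFAULT_RETENTION_DAYS,
  };
}

const resolved = withDefaults({ debugDir: "/tmp/debug" });
```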