feat(scheduler,biometrics): clock validation, sleep records, and plugin module system #119
Conversation
📝 Walkthrough

This PR establishes a separate biometrics database infrastructure with Python modules (piezo-processor, sleep-detector) that stream processed sensor data into dedicated SQLite tables. The main database schema is refactored to remove biometrics tables, add power-state tracking, and include new indexes. Integration points include hardware device-state synchronization writing sleep sessions and instrumentation startup flow improvements. Documentation is consolidated with new ADR files.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/scheduler/jobManager.ts (1)
284-293: Recommend making `executeReboot` async so failures propagate to the scheduler.

Currently `executeReboot()` returns `void` with a dangling internal Promise. The job handlers always report success — even if `systemctl reboot` fails — because the error only surfaces inside the callback after the job has already completed. Making it `async` lets the job handler `await` it and surface failures through the scheduler's error-reporting path.

♻️ Proposed refactor
```diff
-  private executeReboot(): void {
-    // Import inline to avoid issues in test environments where child_process may be mocked
-    import('child_process').then(({ exec }) => {
-      exec('systemctl reboot', (error) => {
-        if (error) console.error('Reboot command failed:', error.message)
-      })
-    }).catch(error => {
-      console.error('Failed to import child_process for reboot:', error)
-    })
-  }
+  private async executeReboot(): Promise<void> {
+    // Import inline to avoid issues in test environments where child_process may be mocked
+    const { exec } = await import('child_process')
+    return new Promise((resolve, reject) => {
+      exec('systemctl reboot', (error) => {
+        if (error) {
+          console.error('Reboot command failed:', error.message)
+          reject(error)
+        }
+        else {
+          resolve()
+        }
+      })
+    })
+  }
```

Then update both job handlers:
```diff
-    this.executeReboot()
+    await this.executeReboot()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scheduler/jobManager.ts` around lines 284 - 293, The executeReboot function currently returns void and swallows errors inside an internal Promise/callback; change executeReboot to an async function that returns a Promise which resolves when the reboot command succeeds and rejects on failures (e.g., wrap child_process.exec in a Promise or use util.promisify(exec)), ensure caught import errors are re-thrown instead of only console.error so failures propagate, and then update the two job handler functions that call executeReboot to await or return its Promise so the scheduler can observe and report errors from executeReboot.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/hardware/deviceStateSync.ts`:
- Around line 39-56: The SELECT and subsequent upsert are vulnerable to TOCTOU
races: wrap the read-modify-write in a single transaction by moving the SELECT
(the code that reads prev from deviceState), the logic that calls
createSleepRecord(side, poweredOnAt, now) and the upsertSide call into a
db.transaction(...) callback so they execute atomically; use the transaction
object (tx) for the SELECT/UPSERT (e.g.,
tx.select(...).from(deviceState).where(...).limit(1) and tx.upsertSide or adapt
upsertSide to accept tx), and ensure createSleepRecord either accepts the tx or
performs its writes via the same tx so the sleep record creation and side upsert
happen inside the same transaction.
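The same atomic read-modify-write pattern can be sketched in Python's stdlib `sqlite3` (the language of the bundled modules) rather than Drizzle. Table and column names below are simplified stand-ins, not the repo's actual schema; the key idea is `BEGIN IMMEDIATE`, which takes the write lock before the SELECT so a concurrent `sync()` cannot interleave between read and write. The connection must be opened with `isolation_level=None` so the explicit transaction statements are honored:

```python
import sqlite3

def record_power_transition(conn: sqlite3.Connection, side: str, is_on: bool, now: int) -> None:
    """Read previous state and upsert the new one inside ONE transaction.

    BEGIN IMMEDIATE acquires the write lock up front, closing the TOCTOU
    window between the SELECT and the subsequent writes.
    """
    conn.execute("BEGIN IMMEDIATE")
    try:
        row = conn.execute(
            "SELECT powered_on_at FROM device_state WHERE side = ?", (side,)
        ).fetchone()
        powered_on_at = row[0] if row else None
        if powered_on_at is not None and not is_on:
            # ON -> OFF transition: write the stub sleep record in the same txn
            conn.execute(
                "INSERT INTO sleep_records (side, entered_bed_at, left_bed_at) VALUES (?, ?, ?)",
                (side, powered_on_at, now),
            )
        conn.execute(
            "INSERT INTO device_state (side, powered_on_at) VALUES (?, ?) "
            "ON CONFLICT(side) DO UPDATE SET powered_on_at = excluded.powered_on_at",
            (side, now if is_on else None),
        )
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
```

Usage assumes `conn = sqlite3.connect(path, isolation_level=None)` and a unique constraint on `device_state.side` for the upsert.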
In `@src/scheduler/jobManager.ts`:
- Line 290: The CI lint failure is caused by the arrow-parens rule on the
promise rejection handler in jobManager: change the `.catch(error => {` usage to
include parentheses around the parameter (i.e., `.catch((error) => {`) where the
promise chain ending with `}).catch(error => {` appears in
src/scheduler/jobManager.ts; apply the same fix to any other `.catch(error =>
...)` occurrences in that file to satisfy the linter.
---
Nitpick comments:
In `@src/scheduler/jobManager.ts`:
- Around line 284-293: The executeReboot function currently returns void and
swallows errors inside an internal Promise/callback; change executeReboot to an
async function that returns a Promise which resolves when the reboot command
succeeds and rejects on failures (e.g., wrap child_process.exec in a Promise or
use util.promisify(exec)), ensure caught import errors are re-thrown instead of
only console.error so failures propagate, and then update the two job handler
functions that call executeReboot to await or return its Promise so the
scheduler can observe and report errors from executeReboot.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)

- instrumentation.ts
- src/db/migrations/0002_flaky_doctor_strange.sql
- src/db/migrations/meta/0002_snapshot.json
- src/db/migrations/meta/_journal.json
- src/db/schema.ts
- src/hardware/deviceStateSync.ts
- src/scheduler/jobManager.ts
```ts
      exec('systemctl reboot', (error) => {
        if (error) console.error('Reboot command failed:', error.message)
      })
    }).catch(error => {
```
Fix arrow-parens lint error — this is breaking CI.

`.catch(error => {` must have parentheses around the argument.
🛠️ Proposed fix

```diff
-  }).catch(error => {
+  }).catch((error) => {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-  }).catch(error => {
+  }).catch((error) => {
```
🧰 Tools
🪛 ESLint
[error] 290-290: Expected parentheses around arrow function argument having a body with curly braces.
(@stylistic/arrow-parens)
🪛 GitHub Actions: CI
[error] 290-290: ESLint: Expected parentheses around arrow function argument having a body with curly braces. (@stylistic/arrow-parens)
🪛 GitHub Check: Lint
[failure] 290-290:
Expected parentheses around arrow function argument having a body with curly braces
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scheduler/jobManager.ts` at line 290, The CI lint failure is caused by
the arrow-parens rule on the promise rejection handler in jobManager: change the
`.catch(error => {` usage to include parentheses around the parameter (i.e.,
`.catch((error) => {`) where the promise chain ending with `}).catch(error => {`
appears in src/scheduler/jobManager.ts; apply the same fix to any other
`.catch(error => ...)` occurrences in that file to satisfy the linter.
… sleep records

- Wait for valid system clock before starting scheduler; pod can boot with RTC reset to ~2010 before NTP syncs, causing cron jobs to fire at incorrect times
- Fix daily-reboot job: was a no-op log statement, now executes systemctl reboot and propagates errors through the scheduler
- Auto-schedule a pre-prime reboot 1hr before daily priming when primePodDaily is enabled; ensures clean device state before the prime cycle (matches free-sleep behaviour)
- Add poweredOnAt column to device_state; DeviceStateSync now wraps SELECT + UPSERT in a transaction to prevent TOCTOU races on rapid sync() calls, detects OFF→ON and ON→OFF power transitions, and creates stub sleep records on power-off (sessions < 5min ignored; presentIntervals left null for biometrics modules to backfill)
Moves time-series health data (vitals, sleep_records, movement) into a
dedicated biometrics.db, decoupled from the main config database.
- biometrics-schema.ts defines the public contract: any module that
writes to these tables works automatically with the existing tRPC API
- Second Drizzle connection with WAL mode and 5s busy_timeout to safely
support concurrent module writes
- db:biometrics:generate/migrate/studio scripts for independent schema
management; migration 0003 drops moved tables from sleepypod.db
Bundled Python modules (modules/):
- piezo-processor: tails /persistent/*.RAW, decodes CBOR piezo data,
bandpass filters + HeartPy peak detection → HR/HRV/breathing rate
written to vitals ~every 60s per side
- sleep-detector: reads capacitance presence records, tracks session
boundaries and bed exits → sleep_records + movement
Both modules run as isolated systemd services in Python venvs with
resource limits and health reporting to system_health table.
Install script updated: biometrics DB migration, Python venv setup,
module installation and service registration.
ModuleManifest interface (src/modules/types.ts) documents the contract
for third-party module authors. ADR 0012 records the decision.
Force-pushed from 4e350fe to 64d9706.
Merge shallow, one-decision ADRs into coherent documents:

- Combine 0003 (TypeScript/React) + 0009 (Lingui) into single core stack ADR
- Combine 0006 (ESLint) + 0007 (Vitest) + 0008 (conventional commits) + pnpm rationale from 0002/0011 into single developer tooling ADR
- Delete redundant ADRs: 0002, 0007, 0008, 0009, 0011

Update 0012 (biometrics module system): clarify WebSocket transport is not ruled out for future use (e.g. modules on separate host, real-time UI push)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/db/schema.ts (1)
163-178: ⚠️ Potential issue | 🔴 Critical

Add data migration step to preserve biometrics tables before dropping from main schema.

Migration `0003_odd_gargoyle.sql` drops the `sleep_records`, `vitals`, and `movement` tables without preserving their data. These tables are recreated in `biometrics-schema.ts` in a separate `biometrics.db`, but no cross-database data migration step exists to copy rows before the DROP executes.

If these tables contain production data, the migration will cause permanent data loss on upgrade. Either add a migration step to copy data from `sleepypod.db` to `biometrics.db` before the DROP statements, or explicitly document that these are brand-new tables with no pre-existing data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/db/schema.ts` around lines 163 - 178, The migration 0003_odd_gargoyle.sql currently drops sleep_records, vitals, and movement without preserving data; before those DROP statements add a migration step that copies rows from sleepypod.db into the new biometrics.db (the tables defined in biometrics-schema.ts) — e.g. ATTACH the biometrics.db, create target tables if missing, then INSERT INTO biometrics.<table> SELECT * FROM main.<table> for sleep_records, vitals, and movement, verify row counts, and only then run the DROP TABLE on the old main tables; alternatively, if these are guaranteed empty, update the migration comment to state explicitly that sleep_records/vitals/movement are brand-new with no existing data.
🧹 Nitpick comments (6)
src/scheduler/jobManager.ts (1)
285-298: Prefer static import and `execFile` over dynamic `exec`.

Two minor improvements worth considering:

- `child_process` is a Node.js built-in — dynamic `await import(...)` adds an unnecessary async step on every reboot invocation. A static import at the top of the file is idiomatic.
- `exec` spawns a shell (`/bin/sh -c systemctl reboot`). Since the command and its arguments are fixed strings with no user input, `execFile` is cleaner — no shell overhead, explicit argv list.

♻️ Proposed refactor
At the top of the file, add:

```diff
+import { execFile } from 'child_process'
```

Then replace `executeReboot`:

```diff
-  private async executeReboot(): Promise<void> {
-    const { exec } = await import('child_process')
-    return new Promise((resolve, reject) => {
-      exec('systemctl reboot', (error) => {
-        if (error) {
-          console.error('Reboot command failed:', error.message)
-          reject(error)
-        }
-        else {
-          resolve()
-        }
-      })
-    })
-  }
+  private executeReboot(): Promise<void> {
+    return new Promise((resolve, reject) => {
+      execFile('systemctl', ['reboot'], (error) => {
+        if (error) {
+          console.error('Reboot command failed:', error.message)
+          reject(error)
+        }
+        else {
+          resolve()
+        }
+      })
+    })
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/scheduler/jobManager.ts` around lines 285 - 298, The executeReboot function currently uses a dynamic import and child_process.exec; change it to a static top-level import of execFile from 'child_process' and update the executeReboot implementation to call execFile('systemctl', ['reboot'], ...) (no shell) and return a Promise that resolves/rejects as before; remove the await import(...) in executeReboot and keep the existing error handling (console.error('Reboot command failed:', error.message) and reject(error)) so the function signature and behavior remain the same.

src/modules/types.ts (1)
21-33: Tighten `writes`/`provides` typing to the biometrics contract.

This prevents typos and keeps module manifests aligned to the schema.
♻️ Suggested typing refinement
```diff
+export type BiometricsTable = 'vitals' | 'sleep_records' | 'movement'
+export type BiometricsField = `${BiometricsTable}.${string}`
+
 export interface ModuleManifest {
   /**
    * Specific fields this module writes, in "table.column" format.
    * Used to tell users what data they can expect from this module.
    * e.g. ["vitals.heartRate", "vitals.hrv", "vitals.breathingRate"]
    */
-  provides: string[]
+  provides: BiometricsField[]

   /**
    * Top-level table names this module writes to.
    * Must be a subset of ["vitals", "sleep_records", "movement"].
    * e.g. ["vitals"]
    */
-  writes: string[]
+  writes: BiometricsTable[]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/types.ts` around lines 21 - 33, Tighten the Module manifest types by replacing the loose string[] for the provides and writes properties with explicit unions derived from the biometrics contract: define (or import) a BiometricsTableName union like 'vitals' | 'sleep_records' | 'movement' and a BiometricsProvidedField union of the allowed "table.column" literals (e.g. 'vitals.heartRate', 'vitals.hrv', etc.), then change the interface properties to writes: BiometricsTableName[] and provides: BiometricsProvidedField[] (or readonly arrays) so ModuleManifest.provides and ModuleManifest.writes are constrained to the schema.

src/server/routers/biometrics.ts (1)
57-69: Extract the repeated date-range `.refine()` into a shared helper.

The identical `.refine()` block (validates `startDate ≤ endDate`, message, path) appears verbatim three times (`getSleepRecords`, `getVitals`, `getMovement`). A single shared schema or schema-builder would remove the duplication.

♻️ Proposed refactor
```diff
+// at the top of the file, alongside other imports
+const dateRangeSchema = z
+  .object({
+    startDate: z.date().optional(),
+    endDate: z.date().optional(),
+  })
+  .refine(
+    (data) => {
+      if (data.startDate && data.endDate) {
+        return validateDateRange(data.startDate, data.endDate)
+      }
+      return true
+    },
+    {
+      message: 'startDate must be before or equal to endDate',
+      path: ['endDate'],
+    }
+  )
```

Then each procedure input becomes:
```diff
-      z
-        .object({
-          side: sideSchema,
-          startDate: z.date().optional(),
-          endDate: z.date().optional(),
-          limit: z.number().int().min(1).max(100).default(30),
-        })
-        .strict()
-        .refine(
-          (data) => { ... },
-          { message: '...', path: ['endDate'] }
-        )
+      dateRangeSchema
+        .extend({ side: sideSchema, limit: z.number().int().min(1).max(100).default(30) })
+        .strict()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/routers/biometrics.ts` around lines 57 - 69, The three identical .refine blocks enforcing startDate ≤ endDate should be extracted into a shared helper schema to remove duplication: create a reusable Zod schema or schema-builder function (e.g., buildDateRangeSchema or dateRangeRefine) that applies the .refine using validateDateRange with the same message/path, then replace the inline .refine in getSleepRecords, getVitals, and getMovement inputs to use that shared schema (or compose it into each procedure's input) so validation logic is centralized and reused.

src/hardware/deviceStateSync.ts (1)
90-95: Consider a retry/queue for failed sleep-record writes.

Since `poweredOnAt` is cleared before the biometrics insert, any insert failure permanently drops the sleep record. A lightweight retry queue or a persisted “pending sleep record” marker would make this more robust.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/hardware/deviceStateSync.ts` around lines 90 - 95, The current flow clears poweredOnAt and then calls createSleepRecord(side, enteredBedAt, leftBedAt) so a failure permanently drops the sleep record; modify the logic in deviceStateSync to persist a “pending sleep record” marker (e.g., on the deviceState row or a lightweight local queue) before clearing poweredOnAt and/or retry the createSleepRecord operation on failure: ensure createSleepRecord failures enqueue a retry (in-memory durable queue or DB flag) and that a background worker or next startup reads that queue/flag and retries the biometricsDb insert until success, and clear the pending marker only after a successful insert; reference createSleepRecord, poweredOnAt, deviceState row and biometricsDb so the pending marker and retry handling are added around the existing insert call.

src/db/biometrics-schema.ts (1)
27-38: Use a DB-level default for `created_at` to safeguard the public contract.

The schema is documented as a public contract for external modules (Python, Rust, etc.) that write directly to the database. Currently, `createdAt` relies only on `$defaultFn()` for the default value, which means any writer that omits the field will fail the `NOT NULL` constraint. A DB-level default ensures safety regardless of how the row is inserted.

Apply the following change to the schema, then regenerate the migration:
Schema update
```diff
-import { index, integer, real, sqliteTable, text } from 'drizzle-orm/sqlite-core'
+import { index, integer, real, sqliteTable, text } from 'drizzle-orm/sqlite-core'
+import { sql } from 'drizzle-orm'

 export const sleepRecords = sqliteTable('sleep_records', {
   id: integer('id').primaryKey({ autoIncrement: true }),
   side: text('side', { enum: ['left', 'right'] }).notNull(),
   enteredBedAt: integer('entered_bed_at', { mode: 'timestamp' }).notNull(),
   leftBedAt: integer('left_bed_at', { mode: 'timestamp' }).notNull(),
   sleepDurationSeconds: integer('sleep_duration_seconds').notNull(),
   timesExitedBed: integer('times_exited_bed').notNull().default(0),
   presentIntervals: text('present_intervals', { mode: 'json' }), // [[start, end], ...]
   notPresentIntervals: text('not_present_intervals', { mode: 'json' }), // [[start, end], ...]
   createdAt: integer('created_at', { mode: 'timestamp' })
     .notNull()
-    .$defaultFn(() => new Date()),
+    .default(sql`(unixepoch())`),
```

🤖 Prompt for AI Agents
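The point of the DB-level default is that a raw INSERT from any external writer that omits `created_at` still succeeds. A quick stdlib demonstration on a simplified table — using the portable `strftime('%s','now')` form, since `unixepoch()` needs SQLite ≥ 3.38:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE sleep_records (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        side TEXT NOT NULL,
        -- DB-level default: any writer may omit created_at entirely
        created_at INTEGER NOT NULL DEFAULT (CAST(strftime('%s','now') AS INTEGER))
    )"""
)
# An external module that knows nothing about $defaultFn() inserts only its columns.
conn.execute("INSERT INTO sleep_records (side) VALUES ('left')")
created = conn.execute("SELECT created_at FROM sleep_records").fetchone()[0]
```

Without the DEFAULT clause, the same INSERT would raise `NOT NULL constraint failed: sleep_records.created_at`.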
Verify each finding against the current code and only fix it if needed. In `@src/db/biometrics-schema.ts` around lines 27 - 38, Replace the client-side default on the sleepRecords.createdAt column (currently using .$defaultFn(() => new Date())) with a SQLite DB-level default so inserts from external writers won't violate the NOT NULL constraint; update the createdAt column definition in the sqliteTable('sleep_records', ...) schema to use a SQLite DEFAULT that produces the timestamp (e.g., CURRENT_TIMESTAMP or strftime-based integer seconds) and then regenerate the migration so the DB schema is updated.

modules/sleep-detector/main.py (1)
69-71: Silence the unused `frame` parameter.

Ruff flags the unused arg; renaming to `_frame` keeps intent without noise.

💡 Suggested tweak

```diff
-def _on_signal(signum, frame):
+def _on_signal(signum, _frame):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/sleep-detector/main.py` around lines 69 - 71, The signal handler _on_signal currently defines an unused parameter named frame which triggers linter warnings; rename the parameter to _frame (def _on_signal(signum, _frame):) to indicate it's intentionally unused and silence Ruff, keeping the same behavior of logging the signum and setting _shutdown via log.info and _shutdown.set() inside the _on_signal function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/adr/0012-biometrics-module-system.md`:
- Around line 24-32: The fenced code block containing the ASCII architecture
diagram (the block starting with the line "/persistent/*.RAW" and including
lines like "[module process]" and "[biometrics.db]") needs a language identifier
to satisfy markdownlint MD040; change the opening fence from ``` to ```text
(i.e., add the word "text" right after the three backticks) and keep the closing
fence as-is so the diagram is treated as plain text in rendered Markdown.
- Around line 139-151: Change the fenced code block that contains the modules/
tree to include a language identifier (e.g., replace the opening ``` with
```text) so the tree block is marked as plain text for markdownlint; leave the
closing fence as ``` and ensure the rest of the block content is unchanged.
In `@drizzle.biometrics.config.ts`:
- Around line 3-9: The default DB URL in the Drizzle config (inside the
defineConfig call) currently falls back to 'file:./biometrics.db' when
BIOMETRICS_DATABASE_URL is unset; change that fallback to match runtime by using
'file:./biometrics.dev.db' so tooling and runtime point at the same database
(update the dbCredentials.url value referenced by BIOMETRICS_DATABASE_URL in
drizzle.biometrics.config.ts).
In `@modules/piezo-processor/main.py`:
- Around line 224-255: The code uses the 30s buffer (hr_arr) for both HR and HRV
because compute_heart_rate_hrv is called only with hr_arr; fix by computing HR
from the 30s buffer (np.array(self._hr_buf)) and computing HRV from the
five‑minute buffer (np.array(self._hrv_buf)) so the HRV_WINDOW_S-backed deque
(_hrv_buf) is honored; update the call(s) around
compute_heart_rate_hrv/compute_heart_rate_hrv(...)/compute_heart_rate_hrv usage
in _maybe_write (and ensure ingest still appends samples to _hrv_buf) so
write_vitals and the log use the HR from the short buffer and HRV from the long
buffer.
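The two-window pattern this fix describes — a short rolling buffer for HR, a longer one for HRV — can be sketched with bounded deques. Window sizes, the sample rate, and all names below are illustrative, not the module's actual API:

```python
from collections import deque

HR_WINDOW_S = 30     # short window: heart rate
HRV_WINDOW_S = 300   # long window: HRV needs ~5 min of history
SAMPLE_RATE_HZ = 100 # hypothetical piezo sample rate

class VitalsBuffers:
    """Keep separate rolling windows so HR and HRV analyses use the right history."""

    def __init__(self):
        self._hr_buf = deque(maxlen=HR_WINDOW_S * SAMPLE_RATE_HZ)
        self._hrv_buf = deque(maxlen=HRV_WINDOW_S * SAMPLE_RATE_HZ)

    def ingest(self, samples):
        # Every sample feeds both windows; maxlen evicts old data automatically.
        self._hr_buf.extend(samples)
        self._hrv_buf.extend(samples)

    def windows(self):
        """Return (short HR window, long HRV window) for separate analysis passes."""
        return list(self._hr_buf), list(self._hrv_buf)
```

The bug being fixed is exactly the failure mode this structure prevents: passing the short window to both computations makes HRV silently ignore its five-minute history.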
In `@modules/sleep-detector/main.py`:
- Around line 203-224: The session close can append absent intervals with end <
start because _close_session(left_ts) is called with _last_present_ts while
_interval_start may already point to the first absent sample; update
_close_session to guard and fix this by ensuring the closing timestamp is not
earlier than _interval_start (e.g., compute effective_left = max(left_ts,
self._interval_start) when _interval_start is not None) or alternatively adjust
the caller to pass self._interval_start; then before appending to
_absent_intervals/_present_intervals check that effective_left >
self._interval_start to avoid emitting negative ranges, referencing
_close_session, _interval_start, _last_present_ts, _was_present,
_present_intervals and _absent_intervals.
In `@scripts/install`:
- Around line 239-266: The install_module function currently calls python3 -m
venv without guarding for missing python3-venv; add a pre-check before creating
the venv (e.g., run a quick python3 -m venv --help or probe for the venv module)
and if it fails, emit a clear error and either install the system package (on
Debian/Ubuntu) or abort the installer with a descriptive message; update the
logic around the python3 -m venv invocation in install_module and the
surrounding installer flow that references MODULES_SRC/MODULES_DEST and the
python3 calls to handle the missing-venv case gracefully.
In `@src/db/index.ts`:
- Around line 22-28: The JSDoc and console message for closeDatabase() are
misleading because the function only closes the main sqlite connection; update
the JSDoc to state "Close main database connection" (not "all"), and change the
console.log to a singular, specific message like "Closing main sqlite database
connection..." to match behavior; alternatively, if the intent was to close both
databases, modify closeDatabase() to also call closeBiometricsDatabase() and
update instrumentation.ts usage accordingly so both connections are closed.
Ensure references to sqlite, closeDatabase, and closeBiometricsDatabase are
consistent.
In `@src/db/migrations/0003_odd_gargoyle.sql`:
- Around line 1-3: This migration currently drops the movement, sleep_records,
and vitals tables directly; modify it to first copy all rows from movement,
sleep_records, and vitals into the new biometrics database (e.g., insert/select
to biometrics.db tables) inside a safe transaction or staged job, verify row
counts/checksums after the copy, and only then drop the originals using DROP
TABLE IF EXISTS `movement`, `sleep_records`, `vitals`; ensure the migration
script (0003_odd_gargoyle.sql) includes the export/import steps (or calls a
backfill procedure) and a verification step so the DROP statements run only
after successful backfill.
---
Outside diff comments:
In `@src/db/schema.ts`:
- Around line 163-178: The migration 0003_odd_gargoyle.sql currently drops
sleep_records, vitals, and movement without preserving data; before those DROP
statements add a migration step that copies rows from sleepypod.db into the new
biometrics.db (the tables defined in biometrics-schema.ts) — e.g. ATTACH the
biometrics.db, create target tables if missing, then INSERT INTO
biometrics.<table> SELECT * FROM main.<table> for sleep_records, vitals, and
movement, verify row counts, and only then run the DROP TABLE on the old main
tables; alternatively, if these are guaranteed empty, update the migration
comment to state explicitly that sleep_records/vitals/movement are brand-new
with no existing data.
---
Nitpick comments:
In `@modules/sleep-detector/main.py`:
- Around line 69-71: The signal handler _on_signal currently defines an unused
parameter named frame which triggers linter warnings; rename the parameter to
_frame (def _on_signal(signum, _frame):) to indicate it's intentionally unused
and silence Ruff, keeping the same behavior of logging the signum and setting
_shutdown via log.info and _shutdown.set() inside the _on_signal function.
In `@src/db/biometrics-schema.ts`:
- Around line 27-38: Replace the client-side default on the
sleepRecords.createdAt column (currently using .$defaultFn(() => new Date()))
with a SQLite DB-level default so inserts from external writers won't violate
the NOT NULL constraint; update the createdAt column definition in the
sqliteTable('sleep_records', ...) schema to use a SQLite DEFAULT that produces
the timestamp (e.g., CURRENT_TIMESTAMP or strftime-based integer seconds) and
then regenerate the migration so the DB schema is updated.
In `@src/hardware/deviceStateSync.ts`:
- Around line 90-95: The current flow clears poweredOnAt and then calls
createSleepRecord(side, enteredBedAt, leftBedAt) so a failure permanently drops
the sleep record; modify the logic in deviceStateSync to persist a “pending
sleep record” marker (e.g., on the deviceState row or a lightweight local queue)
before clearing poweredOnAt and/or retry the createSleepRecord operation on
failure: ensure createSleepRecord failures enqueue a retry (in-memory durable
queue or DB flag) and that a background worker or next startup reads that
queue/flag and retries the biometricsDb insert until success, and clear the
pending marker only after a successful insert; reference createSleepRecord,
poweredOnAt, deviceState row and biometricsDb so the pending marker and retry
handling are added around the existing insert call.
In `@src/modules/types.ts`:
- Around line 21-33: Tighten the Module manifest types by replacing the loose
string[] for the provides and writes properties with explicit unions derived
from the biometrics contract: define (or import) a BiometricsTableName union
like 'vitals' | 'sleep_records' | 'movement' and a BiometricsProvidedField union
of the allowed "table.column" literals (e.g. 'vitals.heartRate', 'vitals.hrv',
etc.), then change the interface properties to writes: BiometricsTableName[] and
provides: BiometricsProvidedField[] (or readonly arrays) so
ModuleManifest.provides and ModuleManifest.writes are constrained to the schema.
In `@src/scheduler/jobManager.ts`:
- Around line 285-298: The executeReboot function currently uses a dynamic
import and child_process.exec; change it to a static top-level import of
execFile from 'child_process' and update the executeReboot implementation to
call execFile('systemctl', ['reboot'], ...) (no shell) and return a Promise that
resolves/rejects as before; remove the await import(...) in executeReboot and
keep the existing error handling (console.error('Reboot command failed:',
error.message) and reject(error)) so the function signature and behavior remain
the same.
In `@src/server/routers/biometrics.ts`:
- Around line 57-69: The three identical .refine blocks enforcing startDate ≤
endDate should be extracted into a shared helper schema to remove duplication:
create a reusable Zod schema or schema-builder function (e.g.,
buildDateRangeSchema or dateRangeRefine) that applies the .refine using
validateDateRange with the same message/path, then replace the inline .refine in
getSleepRecords, getVitals, and getMovement inputs to use that shared schema (or
compose it into each procedure's input) so validation logic is centralized and
reused.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (36)
- docs/adr/0002-yarn-package-manager.md
- docs/adr/0003-core-stack.md
- docs/adr/0006-developer-tooling.md
- docs/adr/0007-vitest.md
- docs/adr/0008-conventional-commits.md
- docs/adr/0009-linguijs.md
- docs/adr/0011-switch-to-pnpm.md
- docs/adr/0012-biometrics-module-system.md
- drizzle.biometrics.config.ts
- instrumentation.ts
- modules/piezo-processor/main.py
- modules/piezo-processor/manifest.json
- modules/piezo-processor/requirements.txt
- modules/piezo-processor/sleepypod-piezo-processor.service
- modules/sleep-detector/main.py
- modules/sleep-detector/manifest.json
- modules/sleep-detector/requirements.txt
- modules/sleep-detector/sleepypod-sleep-detector.service
- package.json
- scripts/install
- src/db/biometrics-migrations/0000_noisy_onslaught.sql
- src/db/biometrics-migrations/meta/0000_snapshot.json
- src/db/biometrics-migrations/meta/_journal.json
- src/db/biometrics-schema.ts
- src/db/biometrics.ts
- src/db/index.ts
- src/db/migrations/0002_flaky_doctor_strange.sql
- src/db/migrations/0003_odd_gargoyle.sql
- src/db/migrations/meta/0002_snapshot.json
- src/db/migrations/meta/0003_snapshot.json
- src/db/migrations/meta/_journal.json
- src/db/schema.ts
- src/hardware/deviceStateSync.ts
- src/modules/types.ts
- src/scheduler/jobManager.ts
- src/server/routers/biometrics.ts
💤 Files with no reviewable changes (5)
- docs/adr/0008-conventional-commits.md
- docs/adr/0011-switch-to-pnpm.md
- docs/adr/0007-vitest.md
- docs/adr/0002-yarn-package-manager.md
- docs/adr/0009-linguijs.md
✅ Files skipped from review due to trivial changes (3)
- docs/adr/0003-core-stack.md
- src/db/migrations/meta/0003_snapshot.json
- modules/piezo-processor/sleepypod-piezo-processor.service
🚧 Files skipped from review as they are similar to previous changes (1)
- instrumentation.ts
```sql
DROP TABLE `movement`;--> statement-breakpoint
DROP TABLE `sleep_records`;--> statement-breakpoint
DROP TABLE `vitals`;
```
Dropping biometrics tables without data migration risks data loss.
This migration deletes movement, sleep_records, and vitals from the primary DB without any in-migration to biometrics.db. Upgrades will lose historical biometrics unless data is copied first. Consider a data-migration step (copy rows into biometrics DB) and only drop after a successful backfill/verification. If you want a defensive guard, add IF EXISTS to avoid hard failures on unexpected states.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/db/migrations/0003_odd_gargoyle.sql` around lines 1 - 3, This migration
currently drops the movement, sleep_records, and vitals tables directly; modify
it to first copy all rows from movement, sleep_records, and vitals into the new
biometrics database (e.g., insert/select to biometrics.db tables) inside a safe
transaction or staged job, verify row counts/checksums after the copy, and only
then drop the originals using DROP TABLE IF EXISTS `movement`, `sleep_records`,
`vitals`; ensure the migration script (0003_odd_gargoyle.sql) includes the
export/import steps (or calls a backfill procedure) and a verification step so
the DROP statements run only after successful backfill.
…ases
piezo-processor: HRV was computed from the 30s HR buffer instead of the dedicated 5-minute HRV buffer (`_hrv_buf`). Fix by splitting the two `compute_heart_rate_hrv` calls so HR uses `_hr_buf` and HRV uses `_hrv_buf`.
sleep-detector: when a session times out due to absence, `_interval_start` already points to the first absent sample, so closing at `_last_present_ts` could produce absent intervals with end < start. Close at `_interval_start` when available. Also guard the final absent-interval append so it only emits when `left_ts > interval_start`.
- `src/db/index.ts`: fix misleading JSDoc/log for `closeDatabase()` — it only closes the main DB; note `closeBiometricsDatabase()` must be called separately
- `src/db/migrations/0003`: add IF EXISTS to DROP TABLE statements (defensive)
- `drizzle.biometrics.config.ts`: align fallback URL to `biometrics.dev.db` to match runtime default (was `biometrics.db` — mismatched tooling vs runtime)
- `scripts/install`: guard against missing python3-venv on Debian/Ubuntu minimal images by checking and installing python3-venv before `python3 -m venv`
- `docs/adr/0012`: add language identifiers to fenced code blocks (markdownlint MD040)
Greenfield project — no live deployments. Replace 4 incremental migrations (create tables, recreate side_settings, add powered_on_at, drop biometrics tables) with one clean migration reflecting the current schema.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scripts/install (2)
190-217: ⚠️ Potential issue | 🟠 Major
`BIOMETRICS_DATABASE_URL` is missing from the systemd service environment.
The `.env` file is configured with `BIOMETRICS_DATABASE_URL`, but the systemd unit at lines 202-203 only sets `DATABASE_URL` and `DAC_SOCK_PATH`. If the Next.js app doesn't load `.env` when started via systemd (e.g., when `NODE_ENV=production`), the biometrics DB connection will fall back to a hardcoded default, which may differ from the installed path.
Proposed fix
```diff
 Environment="DATABASE_URL=file:$DATA_DIR/sleepypod.db"
+Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db"
 Environment="DAC_SOCK_PATH=$DAC_SOCK_PATH"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/install` around lines 190 - 217, The systemd unit for sleepypod.service is missing the BIOMETRICS_DATABASE_URL environment, so add an Environment entry for BIOMETRICS_DATABASE_URL in the [Service] block (alongside Environment="DATABASE_URL=..." and Environment="DAC_SOCK_PATH=...") so systemd receives the same biometrics DB path the app expects; update the template that writes sleepypod.service (the heredoc that generates /etc/systemd/system/sleepypod.service) to include Environment="BIOMETRICS_DATABASE_URL=$BIOMETRICS_DATABASE_URL" or Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db" as appropriate so the service runs with the correct BIOMETRICS_DATABASE_URL.
340-378: ⚠️ Potential issue | 🟠 Major
`sp-update` does not handle biometrics DB backup, migration, or module updates.
The update script backs up `sleepypod.db` (lines 335-338) and runs `pnpm db:generate` and `pnpm db:migrate` (lines 348-349), but does not:
- Back up `biometrics.db`
- Run `pnpm db:biometrics:generate` and `pnpm db:biometrics:migrate`
- Re-install or update biometrics modules (piezo-processor, sleep-detector)
The rollback path also only restores `sleepypod.db`. A schema update to biometrics tables would be silently skipped, and a failed rollback would leave `biometrics.db` in an inconsistent state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/install` around lines 340 - 378, The update script in scripts/install currently only handles sleepypod.db and misses biometrics DB and modules; modify the update path to also backup "$DATA_DIR/biometrics.db" before pulling, run "pnpm db:biometrics:generate" and "pnpm db:biometrics:migrate" after the other DB steps, and ensure biometrics native modules are rebuilt/updated (e.g., run pnpm install/rebuild for piezo-processor and sleep-detector or execute their build steps like (cd node_modules/piezo-processor && npm run build-release) and same for sleep-detector) before starting the service; likewise extend the rollback path to restore "$DATA_DIR/biometrics.db.bak" (like sleepypod.db) and re-run the module install/build steps and pnpm build so a failed update doesn’t leave biometrics.db or native modules inconsistent (reference variables PREV_COMMIT and DATA_DIR and commands pnpm db:biometrics:generate, pnpm db:biometrics:migrate, piezo-processor, sleep-detector).
♻️ Duplicate comments (2)
modules/piezo-processor/main.py (2)
97-113: ⚠️ Potential issue | 🔴 Critical
Same `ON CONFLICT(component)` issue as in sleep-detector.
The `system_health` table lacks a UNIQUE constraint on `component`, so the upsert at lines 103-108 will fail at runtime. See the related comment on the ADR and sleep-detector module.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/piezo-processor/main.py` around lines 97 - 113, The upsert in report_health uses "ON CONFLICT(component)" but the system_health table has no UNIQUE constraint on component, so the query will fail; fix by either ensuring the schema defines component as UNIQUE (or PRIMARY KEY) in the system_health table creation/migration, or change the write logic in report_health to use a supported pattern (e.g., INSERT OR REPLACE or update-if-exists via SELECT then UPDATE) that matches the existing schema; update the schema migration or the report_health function accordingly and keep references to the ON CONFLICT(component) clause and the system_health table to locate the change.
293-300: ⚠️ Potential issue | 🟡 Minor
Same exception-path `report_health` overwrite and file handle leak as in sleep-detector.
On exception, `report_health("down", str(e))` at line 295 is overwritten by `report_health("down", "piezo-processor stopped")` at line 299. Also, `RawFileFollower._file` is not closed on shutdown.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/piezo-processor/main.py` around lines 293 - 300, The exception-path currently overwrites the original error health message and leaks an open file handle; capture the exception message into a variable (e.g. err_msg) in the except block and call report_health("down", err_msg) after cleanup (or ensure you don't overwrite it in the finally), and in the finally block ensure db_conn is closed only if not None and close any open RawFileFollower._file (check the RawFileFollower instance(s) and call its ._file.close() if present and not already closed) so the original error reason is preserved and file handles are released; update references in main loop exception handling and the finally cleanup to use these names (report_health, db_conn.close, RawFileFollower._file).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/adr/0012-biometrics-module-system.md`:
- Around line 153-157: The system_health table lacks a UNIQUE constraint on
component but the modules (sleep-detector, piezo-processor) use ON
CONFLICT(component) DO UPDATE, which will fail; fix by updating the schema in
migration 0000_fine_frightful_four.sql to add a UNIQUE constraint or index on
system_health.component (or alternatively change the modules' health-write logic
to perform a SELECT for the component and then conditional INSERT or UPDATE
instead of using ON CONFLICT); reference the system_health table and the ON
CONFLICT(component) DO UPDATE usage when making the change so both writers work
without runtime errors.
In `@modules/piezo-processor/main.py`:
- Around line 161-173: In compute_breathing_rate, scipy.signal.welch's nperseg
can become a float when fs is passed as float (from fs * 10), causing a type
error; update the call to welch (in compute_breathing_rate) to pass an integer
nperseg by wrapping the min(...) expression with int(...) (or compute nperseg =
max(1, int(min(len(cleaned), fs * 10))) and pass that) so nperseg is always an
int and at least 1.
In `@modules/sleep-detector/main.py`:
- Around line 120-135: The upsert uses ON CONFLICT(component) but the
system_health table has no UNIQUE constraint on component, so this will fail;
update the report_health function in modules/sleep-detector/main.py (and mirror
the same change in modules/piezo-processor/main.py) to perform a safe upsert
without relying on a UNIQUE constraint: open the DB connection, run an UPDATE
system_health SET status=?, message=?, last_checked=? WHERE
component='sleep-detector', check the number of rows affected, and if zero run a
plain INSERT INTO system_health (component, status, message, last_checked)
VALUES ('sleep-detector', ?, ?, ?); keep the existing exception handling and
close the connection.
- Around line 261-293: RawFileFollower.read_records currently exits its loop
when _shutdown is set but never closes an open self._file; add cleanup so any
open file is closed when read_records returns (e.g., wrap the while loop in
try/finally or ensure a final check after the loop) and also consider providing
an explicit close() method on RawFileFollower that closes self._file and resets
self._path so callers can trigger orderly shutdown; reference RawFileFollower,
read_records, self._file, and _shutdown when applying the fix.
- Around line 175-192: The update method currently appends a zero-length absent
interval when the first present=True sample starts a new session; inside update
(function update) after you set self._session_start and self._interval_start for
a new session, set self._was_present = True (and optionally
self._last_present_ts = ts) so the subsequent check that appends to
self._absent_intervals is skipped, or alternatively guard the absent-interval
append with a condition that it only runs if the session was already started
(e.g., skip when you just opened the session).
- Around line 322-329: The finally block currently always calls
report_health("down", "sleep-detector stopped") which overwrites the real error
reported by report_health("down", str(e)) in the exception handler; update the
shutdown logic around report_health so the final call only reports the generic
"sleep-detector stopped" on a clean/normal shutdown and preserves the original
error message on error paths—e.g., introduce a local flag or variable (e.g.,
shutdown_reason or had_exception) set in the except block and checked in the
finally block, or set the exit reason before sys.exit, and only call
report_health("down", "sleep-detector stopped") from finally when no exception
occurred; touch symbols: report_health, sys.exit, and the try/except/finally
surrounding db_conn.close.
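The `nperseg` cast called out in the piezo-processor prompt above is a one-line arithmetic fix; it can be checked without SciPy at all (the helper name `safe_nperseg` is hypothetical, introduced here only for illustration):

```python
def safe_nperseg(n_samples: int, fs: float) -> int:
    # fs is typed as float, so fs * 10 can evaluate to a float;
    # scipy.signal.welch requires an integer nperseg >= 1.
    return max(1, int(min(n_samples, fs * 10)))

print(safe_nperseg(5000, 100.0))  # 1000 (capped at fs * 10)
print(safe_nperseg(50, 100.0))    # 50   (capped at buffer length)
print(safe_nperseg(0, 100.0))     # 1    (floor of 1 for empty buffers)
```

The `max(1, ...)` floor also guards the degenerate empty-buffer case the original code did not handle.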
---
Outside diff comments:
In `@scripts/install`:
- Around line 190-217: The systemd unit for sleepypod.service is missing the
BIOMETRICS_DATABASE_URL environment, so add an Environment entry for
BIOMETRICS_DATABASE_URL in the [Service] block (alongside
Environment="DATABASE_URL=..." and Environment="DAC_SOCK_PATH=...") so systemd
receives the same biometrics DB path the app expects; update the template that
writes sleepypod.service (the heredoc that generates
/etc/systemd/system/sleepypod.service) to include
Environment="BIOMETRICS_DATABASE_URL=$BIOMETRICS_DATABASE_URL" or
Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db" as
appropriate so the service runs with the correct BIOMETRICS_DATABASE_URL.
- Around line 340-378: The update script in scripts/install currently only
handles sleepypod.db and misses biometrics DB and modules; modify the update
path to also backup "$DATA_DIR/biometrics.db" before pulling, run "pnpm
db:biometrics:generate" and "pnpm db:biometrics:migrate" after the other DB
steps, and ensure biometrics native modules are rebuilt/updated (e.g., run pnpm
install/rebuild for piezo-processor and sleep-detector or execute their build
steps like (cd node_modules/piezo-processor && npm run build-release) and same
for sleep-detector) before starting the service; likewise extend the rollback
path to restore "$DATA_DIR/biometrics.db.bak" (like sleepypod.db) and re-run the
module install/build steps and pnpm build so a failed update doesn’t leave
biometrics.db or native modules inconsistent (reference variables PREV_COMMIT
and DATA_DIR and commands pnpm db:biometrics:generate, pnpm
db:biometrics:migrate, piezo-processor, sleep-detector).
---
Duplicate comments:
In `@modules/piezo-processor/main.py`:
- Around line 97-113: The upsert in report_health uses "ON CONFLICT(component)"
but the system_health table has no UNIQUE constraint on component, so the query
will fail; fix by either ensuring the schema defines component as UNIQUE (or
PRIMARY KEY) in the system_health table creation/migration, or change the write
logic in report_health to use a supported pattern (e.g., INSERT OR REPLACE or
update-if-exists via SELECT then UPDATE) that matches the existing schema;
update the schema migration or the report_health function accordingly and keep
references to the ON CONFLICT(component) clause and the system_health table to
locate the change.
- Around line 293-300: The exception-path currently overwrites the original
error health message and leaks an open file handle; capture the exception
message into a variable (e.g. err_msg) in the except block and call
report_health("down", err_msg) after cleanup (or ensure you don't overwrite it
in the finally), and in the finally block ensure db_conn is closed only if not
None and close any open RawFileFollower._file (check the RawFileFollower
instance(s) and call its ._file.close() if present and not already closed) so
the original error reason is preserved and file handles are released; update
references in main loop exception handling and the finally cleanup to use these
names (report_health, db_conn.close, RawFileFollower._file).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `docs/adr/0012-biometrics-module-system.md`
- `drizzle.biometrics.config.ts`
- `modules/piezo-processor/main.py`
- `modules/sleep-detector/main.py`
- `scripts/install`
- `src/db/index.ts`
- `src/db/migrations/0000_fine_frightful_four.sql`
- `src/db/migrations/0001_little_the_twelve.sql`
- `src/db/migrations/meta/0000_snapshot.json`
- `src/db/migrations/meta/0001_snapshot.json`
- `src/db/migrations/meta/_journal.json`
💤 Files with no reviewable changes (2)
- src/db/migrations/meta/0001_snapshot.json
- src/db/migrations/0001_little_the_twelve.sql
🚧 Files skipped from review as they are similar to previous changes (2)
- drizzle.biometrics.config.ts
- src/db/migrations/meta/_journal.json
Each module:
1. Tails `/persistent/*.RAW` for new CBOR sensor data
2. Processes its relevant signal type
3. Writes results to `biometrics.db` using transactions
4. Writes its health status to `system_health` table in `sleepypod.db`
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify system_health schema for UNIQUE constraint on component
echo "=== Migration SQL ==="
fd "0000_fine_frightful_four.sql" --exec cat {}
echo ""
echo "=== Schema definition ==="
ast-grep --pattern 'export const systemHealth = sqliteTable($$$)'
echo ""
echo "=== Biometrics migration for system_health ==="
fd -e sql --path-separator / -p biometrics-migration --exec cat {}
echo ""
echo "=== All references to system_health ==="
rg -n "system_health" --type-add 'ts:*.ts' --type-add 'py:*.py' -t ts -t py -t sql
```

Repository: sleepypod/core
Length of output: 5942
🏁 Script executed:
```bash
# Search for Python module implementations
echo "=== Python modules using ON CONFLICT ==="
rg -n "ON CONFLICT.*component" -t py
echo ""
echo "=== Find all Python files in biometrics module ==="
fd -e py --path "*biometric*" | head -20
echo ""
echo "=== Check ADR file for context ==="
cat -n docs/adr/0012-biometrics-module-system.md | sed -n '150,160p'
```

Repository: sleepypod/core
Length of output: 996
🏁 Script executed:
```bash
# Get full INSERT statements from both Python modules
echo "=== sleep-detector INSERT statement ==="
sed -n '125,130p' modules/sleep-detector/main.py
echo ""
echo "=== piezo-processor INSERT statement ==="
sed -n '98,110p' modules/piezo-processor/main.py
echo ""
echo "=== Check for any UNIQUE index on system_health.component ==="
rg -n "CREATE.*INDEX" -t sql | grep -i "system_health\|component"
```

Repository: sleepypod/core
Length of output: 1154
Add a UNIQUE constraint on `system_health.component` or refactor module health-write logic.
The `system_health` table in `sleepypod.db` (defined in migration `0000_fine_frightful_four.sql`) has no UNIQUE constraint on the `component` column — only an auto-incrementing `id` primary key. However, both Python modules (sleep-detector and piezo-processor) use `ON CONFLICT(component) DO UPDATE SET ...` when writing health status. SQLite requires a UNIQUE constraint on any column referenced in an `ON CONFLICT(...)` clause. Without it, these inserts will fail at runtime.
Add a UNIQUE constraint/index on `system_health.component` in the schema, or refactor the module health-write logic to use SELECT + conditional UPDATE/INSERT.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/adr/0012-biometrics-module-system.md` around lines 153 - 157, The
system_health table lacks a UNIQUE constraint on component but the modules
(sleep-detector, piezo-processor) use ON CONFLICT(component) DO UPDATE, which
will fail; fix by updating the schema in migration 0000_fine_frightful_four.sql
to add a UNIQUE constraint or index on system_health.component (or alternatively
change the modules' health-write logic to perform a SELECT for the component and
then conditional INSERT or UPDATE instead of using ON CONFLICT); reference the
system_health table and the ON CONFLICT(component) DO UPDATE usage when making
the change so both writers work without runtime errors.
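The failure mode flagged here can be reproduced directly with SQLite. This is a standalone sketch, independent of the project's actual schema file; the index name is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Schema as flagged: only an autoincrement id, no UNIQUE on component.
conn.execute(
    "CREATE TABLE system_health ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, component TEXT, "
    "status TEXT, message TEXT, last_checked INTEGER)"
)

upsert = (
    "INSERT INTO system_health (component, status, message, last_checked) "
    "VALUES ('sleep-detector', ?, '', 0) "
    "ON CONFLICT(component) DO UPDATE SET status=excluded.status"
)

try:
    conn.execute(upsert, ("up",))
    failed = False
except sqlite3.OperationalError:
    # SQLite rejects the statement at prepare time: the ON CONFLICT
    # target has no matching PRIMARY KEY or UNIQUE constraint.
    failed = True
print("upsert without UNIQUE failed:", failed)  # True

# With a UNIQUE index, the same upsert works and keeps one row per component.
conn.execute("CREATE UNIQUE INDEX system_health_component_uq "
             "ON system_health(component)")
for status in ("up", "down"):
    conn.execute(upsert, (status,))
print(conn.execute("SELECT component, status FROM system_health").fetchall())
```

The final query returns a single `('sleep-detector', 'down')` row, showing the upsert updates in place once the constraint exists.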
```python
def report_health(status: str, message: str) -> None:
    try:
        conn = sqlite3.connect(str(SLEEPYPOD_DB), timeout=2.0)
        with conn:
            conn.execute(
                """INSERT INTO system_health (component, status, message, last_checked)
                VALUES ('sleep-detector', ?, ?, ?)
                ON CONFLICT(component) DO UPDATE SET
                    status=excluded.status,
                    message=excluded.message,
                    last_checked=excluded.last_checked""",
                (status, message, int(time.time())),
            )
        conn.close()
    except Exception as e:
        log.warning("Could not write health status: %s", e)
```
`ON CONFLICT(component)` requires a UNIQUE constraint that doesn't exist.
The `system_health` table (defined in the main migration) has no UNIQUE constraint on `component` — only an auto-incrementing `id` as PK. SQLite's `ON CONFLICT(component)` requires `component` to be the target of a UNIQUE index or constraint. This will fail at runtime.
This is the same issue flagged in the ADR review and also affects `modules/piezo-processor/main.py` `report_health`.
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 134-134: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/sleep-detector/main.py` around lines 120 - 135, The upsert uses ON
CONFLICT(component) but the system_health table has no UNIQUE constraint on
component, so this will fail; update the report_health function in
modules/sleep-detector/main.py (and mirror the same change in
modules/piezo-processor/main.py) to perform a safe upsert without relying on a
UNIQUE constraint: open the DB connection, run an UPDATE system_health SET
status=?, message=?, last_checked=? WHERE component='sleep-detector', check the
number of rows affected, and if zero run a plain INSERT INTO system_health
(component, status, message, last_checked) VALUES ('sleep-detector', ?, ?, ?);
keep the existing exception handling and close the connection.
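The UPDATE-then-INSERT pattern this prompt describes can be sketched as follows. The schema and connection handling are abbreviated for illustration; this is not the module's actual code:

```python
import sqlite3
import time

def report_health(conn, component, status, message):
    # Upsert without relying on a UNIQUE constraint: UPDATE first, and
    # INSERT only when no existing row was touched. (Two concurrent
    # writers could still race here; acceptable for a single-writer sidecar.)
    with conn:
        cur = conn.execute(
            "UPDATE system_health SET status=?, message=?, last_checked=? "
            "WHERE component=?",
            (status, message, int(time.time()), component),
        )
        if cur.rowcount == 0:
            conn.execute(
                "INSERT INTO system_health (component, status, message, last_checked) "
                "VALUES (?, ?, ?, ?)",
                (component, status, message, int(time.time())),
            )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE system_health (id INTEGER PRIMARY KEY, component TEXT, "
    "status TEXT, message TEXT, last_checked INTEGER)"
)
report_health(conn, "sleep-detector", "up", "")
report_health(conn, "sleep-detector", "down", "stopped")
print(conn.execute("SELECT COUNT(*), status FROM system_health").fetchone())  # (1, 'down')
```

The second call updates the existing row instead of inserting a duplicate, so no schema change is needed.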
Both Python modules use ON CONFLICT(component) DO UPDATE to upsert health status. SQLite requires a UNIQUE constraint on the conflict target — without it the INSERT fails at runtime. Add .unique() to the column definition and regenerate the migration (squashing into the single initial schema).
… and sleep-detector
piezo-processor:
- Cast nperseg to int before passing to scipy.signal.welch — fs is typed
as float so fs*10 can evaluate to a float, which scipy rejects
- Move report_health("stopped") out of finally so it only runs on clean
shutdown, not after sys.exit(1) (which would overwrite the real error)
sleep-detector:
- Fix zero-duration absent interval on session start: set _was_present=True
inside the new-session block and change the return-from-absence check to
elif so it cannot fire on the same sample that opened the session
- Close RawFileFollower file handle after the read loop exits on shutdown
- Move report_health("stopped") out of finally for the same reason as above
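The zero-duration-interval guard in the sleep-detector fix above can be illustrated with a toy state machine. Class and attribute names here are hypothetical simplifications of the bookkeeping the commit describes, not the module's actual code:

```python
class PresenceTracker:
    """Toy model of session/absent-interval bookkeeping."""

    def __init__(self):
        self.session_start = None
        self.interval_start = None      # start of the open absent interval, if any
        self.was_present = False
        self.absent_intervals = []

    def update(self, ts, present):
        if present:
            if self.session_start is None:
                self.session_start = ts
                # Mark present immediately so the return-from-absence branch
                # below cannot fire on the very sample that opened the session
                # (that would emit a zero-duration absent interval).
                self.was_present = True
            elif not self.was_present:
                # Returned after a genuine absence: close the interval.
                self.absent_intervals.append((self.interval_start, ts))
                self.was_present = True
        elif self.session_start is not None and self.was_present:
            self.interval_start = ts
            self.was_present = False

t = PresenceTracker()
for ts, present in [(0, True), (10, True), (20, False), (30, True)]:
    t.update(ts, present)
print(t.absent_intervals)  # [(20, 30)]
```

Only the real absence (20 to 30) is recorded; the sample that opens the session produces no spurious interval.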
@coderabbitai All feedback has been addressed. Four comments are still showing as unresolved but are stale against earlier commits:
Please re-review the latest commits.
🧩 Analysis chain
🏁 Script executed: Length of output: 4027
🏁 Script executed: Length of output: 1180
All feedback has been addressed. Triggering a fresh review of the latest commits now.
✅ Actions performed
Review triggered.
🎉 This PR is included in version 1.1.0 🎉
The release is available on GitHub release
Your semantic-release bot 📦🚀
Summary
- `executeReboot()` is now async and rejects on failure, so scheduler error-handling surfaces `systemctl reboot` failures instead of silently swallowing them
- Session creation is wrapped in `db.transaction()` to prevent TOCTOU double-creation; stub sleep records are written to `biometrics.db` (sessions < 5 min ignored)
- Biometrics tables (`vitals`, `sleep_records`, `movement`) moved to a dedicated `biometrics.db` with its own Drizzle connection (WAL + 5s busy_timeout for concurrent module writes). Greenfield project — tables were dropped and recreated cleanly; no data migration needed.
- Two new Python modules: `piezo-processor` (HR/HRV/breathing from piezo FFT) and `sleep-detector` (session boundaries from capacitance presence) — with systemd service templates and installation automation
- Migrations squashed into a single `0000` reflecting the current schema. No live deployments, so no migration path needed.
Scope Breakdown
- `instrumentation.ts`: `waitForValidSystemDate()` polling before scheduler init
- `src/scheduler/jobManager.ts`: `executeReboot()` with proper rejection; add `schedulePrimePreReboot()`
- `src/hardware/deviceStateSync.ts`
- `src/db/biometrics-schema.ts` + `biometrics.ts`: separate Drizzle connection; squashed to single migration
- `modules/piezo-processor` and `sleep-detector`: Python sidecars with manifests and systemd units
- `scripts/install`
- `docs/adr/`
Design rationale
Why a separate biometrics.db?
Signal processing (HR from 500 Hz piezo data) is compute-heavy and better suited to Python/Rust than Node.js. Modules are independent OS processes — a crash in the HR module doesn't affect the core app. The SQLite schema is the public contract; any module that writes matching rows works automatically with the existing UI. Two DB files lets biometrics data be cleared, backed up, or handed off independently of config/state.
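The concurrent-writer setup this contract relies on (WAL mode plus a 5 s busy timeout, per the summary) can be sketched with the Python stdlib. This is an assumed illustration, not the project's actual connection code, which goes through Drizzle on the app side:

```python
import os
import sqlite3
import tempfile

def open_biometrics_db(path):
    # WAL lets the app read while a module writes; the busy timeout makes
    # writers wait up to 5 s for the write lock instead of failing immediately.
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

db_path = os.path.join(tempfile.mkdtemp(), "biometrics.db")
conn = open_biometrics_db(db_path)
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # wal
print(conn.execute("PRAGMA busy_timeout").fetchone()[0])  # 5000
```

Each module can open the file with the same pragmas; SQLite mediates access, which is what makes the schema itself a workable public contract.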
Why squash migrations?
No live deployments. The original history created biometrics tables in `sleepypod.db` and then dropped them two migrations later — noise with no benefit. One clean migration is easier to reason about.
Why not WebSockets for module transport?
Modules run on the same device and can read
`/persistent/*.RAW` directly — no proxy needed today. WebSockets are not ruled out for future use (e.g. modules on a separate host, or real-time sensor push to the UI).
Test plan
- `primePodDaily` — verify `daily-prime` and `prime-prereboot` jobs appear in logs, prereboot 1hr before prime
- `rebootDaily` — verify `daily-reboot` calls `systemctl reboot`
- `sleep_records` row created in `biometrics.db`
- `pnpm db:generate && pnpm db:biometrics:generate` — verify single clean migrations for each DB
- `systemctl status sleepypod-piezo-processor sleepypod-sleep-detector`
Summary by CodeRabbit
Release Notes
New Features
Infrastructure