
feat(scheduler,biometrics): clock validation, sleep records, and plugin module system#119

Merged
ng merged 9 commits into dev from feat/system-clock-and-scheduling
Feb 24, 2026

Conversation

@ng
Contributor

@ng ng commented Feb 24, 2026

Summary

  • System clock validation: Pod can boot with RTC reset to ~2010; scheduler now polls until the system clock is valid (year ≥ 2024) before starting, preventing cron jobs from firing at wrong times
  • Reboot job fix: executeReboot() is now async and rejects on failure, so scheduler error-handling surfaces systemctl reboot failures instead of silently swallowing them
  • Pre-prime reboot: Auto-schedule a device reboot 1 hour before daily priming to ensure a clean state (matches free-sleep behaviour)
  • Power-based sleep records: Detect ON→OFF power transitions inside an atomic db.transaction() to prevent TOCTOU double-creation; stub sleep records are written to biometrics.db (sessions < 5 min ignored)
  • Biometrics DB separation: Move time-series data (vitals, sleep_records, movement) to a dedicated biometrics.db with its own Drizzle connection (WAL + 5s busy_timeout for concurrent module writes). Greenfield project — tables were dropped and recreated cleanly; no data migration needed.
  • Bundled Python modules: Ship two production-ready sidecars — piezo-processor (HR/HRV/breathing from piezo FFT) and sleep-detector (session boundaries from capacitance presence) — with systemd service templates and installation automation
  • Squashed migrations: Replaced 4 incremental migrations (create → alter → drop) with a single clean 0000 reflecting the current schema. No live deployments, so no migration path needed.
  • ADR consolidation: Merged shallow one-decision ADRs into two coherent documents (core stack, developer tooling); added ADR 0012 for the biometrics plugin architecture
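The clock-validation step from the first bullet can be sketched roughly as follows. The function name matches the PR, but the poll interval, threshold default, and log text are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch: block scheduler startup until the system clock is
// plausible. A pod with a reset RTC reports ~2010 until NTP syncs, so any
// cron schedule computed before then would fire at the wrong times.
async function waitForValidSystemDate(minYear = 2024, pollMs = 5_000): Promise<void> {
  while (new Date().getFullYear() < minYear) {
    // Clock is still bogus; wait and re-check rather than crashing.
    console.warn(`System clock invalid (${new Date().toISOString()}), waiting for NTP sync...`)
    await new Promise((resolve) => setTimeout(resolve, pollMs))
  }
}

// Usage (illustrative): await waitForValidSystemDate() before scheduler init.
```

Polling rather than failing fast matches the test plan below: with the clock set to 2015 the scheduler should wait, not crash, and with a valid clock it should start immediately.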

Scope Breakdown

  • instrumentation.ts: Add waitForValidSystemDate() polling before scheduler init
  • src/scheduler/jobManager.ts: Async executeReboot() with proper rejection; add schedulePrimePreReboot()
  • src/hardware/deviceStateSync.ts: Wrap SELECT+UPSERT in a transaction; detect power transitions; write sleep records
  • src/db/: New biometrics-schema.ts + biometrics.ts; separate Drizzle connection; squashed to a single migration
  • modules/: piezo-processor and sleep-detector Python sidecars with manifests and systemd units
  • scripts/install: Biometrics DB migration; Python venv setup with python3-venv guard; module installation
  • docs/adr/: Consolidate 5 ADRs into 2; add ADR 0012 for the plugin/sidecar architecture
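The power-transition rule handled by deviceStateSync (stub sleep record on an ON→OFF transition, sessions under 5 minutes ignored) can be illustrated with a simplified model. This is a sketch of the rule only; the names and the in-memory map are hypothetical stand-ins for the real transaction-wrapped Drizzle code:

```typescript
// Illustrative model of the ON→OFF sleep-record rule, not the actual
// DeviceStateSync implementation. The Map stands in for the device_state rows.
type Side = 'left' | 'right'

interface SideState {
  isOn: boolean
  poweredOnAt?: number // epoch ms when the current session started
}

const MIN_SESSION_MS = 5 * 60 * 1000 // sessions shorter than this are ignored

function onPowerSync(
  state: Map<Side, SideState>,
  side: Side,
  isOn: boolean,
  now: number,
  writeSleepRecord: (side: Side, from: number, to: number) => void,
): void {
  const prev = state.get(side) ?? { isOn: false }
  if (!prev.isOn && isOn) {
    // OFF→ON: remember when this session started.
    state.set(side, { isOn: true, poweredOnAt: now })
  } else if (prev.isOn && !isOn) {
    // ON→OFF: close the session; skip power blips under five minutes.
    if (prev.poweredOnAt !== undefined && now - prev.poweredOnAt >= MIN_SESSION_MS) {
      writeSleepRecord(side, prev.poweredOnAt, now)
    }
    state.set(side, { isOn: false })
  }
}
```

In the real code the read of the previous state and the upsert happen inside a single db.transaction() so two concurrent sync() calls cannot both observe the same ON→OFF edge.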

Design rationale

Why a separate biometrics.db?
Signal processing (HR from 500 Hz piezo data) is compute-heavy and better suited to Python/Rust than Node.js. Modules are independent OS processes, so a crash in the HR module doesn't affect the core app. The SQLite schema is the public contract; any module that writes matching rows works automatically with the existing UI. Two DB files let biometrics data be cleared, backed up, or handed off independently of config/state.
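To make the "schema as public contract" idea concrete, a module declares what it writes via the ModuleManifest interface described later in this PR. A manifest for the piezo-processor module might look roughly like this (field values are illustrative, assembled from the fields and examples quoted elsewhere in this PR, not copied from the actual manifest.json):

```json
{
  "name": "piezo-processor",
  "version": "1.0.0",
  "language": "python",
  "minVersion": "1.0.0",
  "service": "sleepypod-piezo-processor.service",
  "writes": ["vitals"],
  "provides": ["vitals.heartRate", "vitals.hrv", "vitals.breathingRate"]
}
```

Any third-party module shipping a manifest like this and inserting matching rows into biometrics.db would surface in the existing tRPC API and UI without core-app changes.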

Why squash migrations?
No live deployments. The original history created biometrics tables in sleepypod.db and then dropped them two migrations later — noise with no benefit. One clean migration is easier to reason about.

Why not WebSockets for module transport?
Modules run on the same device and can read /persistent/*.RAW directly — no proxy needed today. WebSockets are not ruled out for future use (e.g. modules on a separate host, or real-time sensor push to the UI).

Test plan

  • Start with system clock set to 2015 — scheduler should poll and wait, not crash
  • Start with valid clock — scheduler should start immediately
  • Enable primePodDaily — verify daily-prime and prime-prereboot jobs appear in logs, with the pre-reboot scheduled 1 hour before the prime
  • Enable rebootDaily — verify daily-reboot calls systemctl reboot
  • Power pod on, wait 10s, power off — verify sleep_records row created in biometrics.db
  • Power cycle < 5s — verify no sleep record created
  • Run pnpm db:generate && pnpm db:biometrics:generate — verify a single clean migration for each DB
  • Check module services: systemctl status sleepypod-piezo-processor sleepypod-sleep-detector

Summary by CodeRabbit

Release Notes

  • New Features

    • Sleep detection and session tracking with bed occupancy analysis
    • Biometric signals: heart rate, HRV, and breathing rate measurements
    • Movement tracking with per-bed-side granularity
    • Modular biometrics plugin system for extensibility
    • Power state tracking with automatic session boundaries
    • System clock validation at startup for reliability
  • Infrastructure

    • Dedicated biometrics database for improved data organization
    • Enhanced scheduler with pre-prime reboot capability

@coderabbitai

coderabbitai bot commented Feb 24, 2026

Warning

Rate limit exceeded

@ng has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 54 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.


📥 Commits

Reviewing files that changed from the base of the PR and between 4ac7016 and f9dfe2b.

📒 Files selected for processing (6)
  • modules/piezo-processor/main.py
  • modules/sleep-detector/main.py
  • src/db/migrations/0000_zippy_mephistopheles.sql
  • src/db/migrations/meta/0000_snapshot.json
  • src/db/migrations/meta/_journal.json
  • src/db/schema.ts
📝 Walkthrough


This PR establishes a separate biometrics database infrastructure with Python modules (piezo-processor, sleep-detector) that stream processed sensor data into dedicated SQLite tables. The main database schema is refactored to remove biometrics tables, add power-state tracking, and include new indexes. Integration points include hardware device-state synchronization writing sleep sessions and instrumentation startup flow improvements. Documentation is consolidated with new ADR files.

Changes

  • Biometrics Database Core (drizzle.biometrics.config.ts, src/db/biometrics.ts, src/db/biometrics-schema.ts, src/db/index.ts): New Drizzle ORM configuration, SQLite connection setup with WAL/performance PRAGMAs, and schema definitions for vitals, sleep_records, and movement tables. Exports the biometricsDb instance and closeBiometricsDatabase function.
  • Biometrics Database Migrations (src/db/biometrics-migrations/0000_noisy_onslaught.sql, src/db/biometrics-migrations/meta/0000_snapshot.json, src/db/biometrics-migrations/meta/_journal.json): Initial migration creating three biometrics tables (movement, sleep_records, vitals) with timestamps and side-based indexes. Includes migration snapshot and journal metadata.
  • Main Database Schema Updates (src/db/schema.ts, src/db/migrations/0000_fine_frightful_four.sql, src/db/migrations/meta/0000_snapshot.json): Added poweredOnAt column to device_state; removed sleep_records, vitals, and movement tables; added indexes on alarm_schedules, power_schedules, and temperature_schedules; removed the default value from side_settings.name.
  • Main Database Migrations Cleanup (src/db/migrations/0001_little_the_twelve.sql, src/db/migrations/meta/0001_snapshot.json, src/db/migrations/meta/_journal.json): Removed legacy migration file and old snapshot; consolidated the migration journal for simplified schema evolution.
  • Piezo Processor Module (modules/piezo-processor/main.py, modules/piezo-processor/manifest.json, modules/piezo-processor/requirements.txt, modules/piezo-processor/sleepypod-piezo-processor.service): New Python module for processing piezoelectric sensor data with signal processing (bandpass filtering, HRV via HeartPy), writing vitals (heart rate, HRV, breathing rate) to the biometrics DB. Includes a systemd service with resource limits and sandboxing.
  • Sleep Detector Module (modules/sleep-detector/main.py, modules/sleep-detector/manifest.json, modules/sleep-detector/requirements.txt, modules/sleep-detector/sleepypod-sleep-detector.service): New Python module detecting bed occupancy and sleep sessions from capacitive sensor data, with per-side presence tracking, session management, and movement aggregation. Persists sleep_records and movement to the biometrics DB; runs as a systemd service.
  • Hardware/Scheduler Integration (src/hardware/deviceStateSync.ts, src/scheduler/jobManager.ts, src/instrumentation.ts): DeviceStateSync now tracks power transitions and records sleep sessions to the biometrics DB. The scheduler adds pre-prime reboot scheduling 1 hour before priming. Instrumentation adds system clock validation (waitForValidSystemDate) and dual database shutdown.
  • Server Router Updates (src/server/routers/biometrics.ts): Updated the data source from the main db to biometricsDb and schema imports from biometrics-schema; query logic unchanged.
  • Module Type Definitions (src/modules/types.ts): New ModuleManifest interface defining the metadata contract for biometrics modules (name, version, provides, writes, service, language, minVersion).
  • Build and Installation (package.json, scripts/install, drizzle.biometrics.config.ts): Added biometrics Drizzle CLI scripts (generate, migrate, studio). Install script extended with BIOMETRICS_DATABASE_URL setup, biometrics module installation, systemd service enablement, and CLI help text updates.
  • Documentation—ADR Updates (docs/adr/0002-yarn-package-manager.md, docs/adr/0003-core-stack.md, docs/adr/0006-developer-tooling.md, docs/adr/0007-vitest.md, docs/adr/0008-conventional-commits.md, docs/adr/0009-linguijs.md, docs/adr/0011-switch-to-pnpm.md, docs/adr/0012-biometrics-module-system.md): Removed older ADRs (0002, 0007, 0008, 0009, 0011), consolidating duplicate/outdated decisions. Added new ADRs for the core stack (TypeScript/React), developer tooling (ESLint/Vitest/semantic-release/pnpm), and the biometrics module system (plugin architecture with SQLite contract and multi-language module support).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Hop, hop, the piezo sensors sing,
Sleep detection's now the thing!
Python modules process the beat,
Heart rate, breathing—the data's sweet!
Biometrics dance in SQLite's glow,
Where SleepyPod's vitals flow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 29.73%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the PR title clearly describes the main features added (clock validation, sleep records, and plugin module system), aligning well with the substantial changes across scheduler, biometrics, and module infrastructure.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/scheduler/jobManager.ts (1)

284-293: Recommend making executeReboot async so failures propagate to the scheduler.

Currently executeReboot() returns void with a dangling internal Promise. The job handlers always report success — even if systemctl reboot fails — because the error only surfaces inside the callback after the job has already completed. Making it async lets the job handler await it and surface failures through the scheduler's error-reporting path.

♻️ Proposed refactor
-  private executeReboot(): void {
-    // Import inline to avoid issues in test environments where child_process may be mocked
-    import('child_process').then(({ exec }) => {
-      exec('systemctl reboot', (error) => {
-        if (error) console.error('Reboot command failed:', error.message)
-      })
-    }).catch(error => {
-      console.error('Failed to import child_process for reboot:', error)
-    })
-  }
+  private async executeReboot(): Promise<void> {
+    // Import inline to avoid issues in test environments where child_process may be mocked
+    const { exec } = await import('child_process')
+    return new Promise((resolve, reject) => {
+      exec('systemctl reboot', (error) => {
+        if (error) {
+          console.error('Reboot command failed:', error.message)
+          reject(error)
+        }
+        else {
+          resolve()
+        }
+      })
+    })
+  }

Then update both job handlers:

-      this.executeReboot()
+      await this.executeReboot()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scheduler/jobManager.ts` around lines 284 - 293, The executeReboot
function currently returns void and swallows errors inside an internal
Promise/callback; change executeReboot to an async function that returns a
Promise which resolves when the reboot command succeeds and rejects on failures
(e.g., wrap child_process.exec in a Promise or use util.promisify(exec)), ensure
caught import errors are re-thrown instead of only console.error so failures
propagate, and then update the two job handler functions that call executeReboot
to await or return its Promise so the scheduler can observe and report errors
from executeReboot.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/hardware/deviceStateSync.ts`:
- Around line 39-56: The SELECT and subsequent upsert are vulnerable to TOCTOU
races: wrap the read-modify-write in a single transaction by moving the SELECT
(the code that reads prev from deviceState), the logic that calls
createSleepRecord(side, poweredOnAt, now) and the upsertSide call into a
db.transaction(...) callback so they execute atomically; use the transaction
object (tx) for the SELECT/UPSERT (e.g.,
tx.select(...).from(deviceState).where(...).limit(1) and tx.upsertSide or adapt
upsertSide to accept tx), and ensure createSleepRecord either accepts the tx or
performs its writes via the same tx so the sleep record creation and side upsert
happen inside the same transaction.

In `@src/scheduler/jobManager.ts`:
- Line 290: The CI lint failure is caused by the arrow-parens rule on the
promise rejection handler in jobManager: change the `.catch(error => {` usage to
include parentheses around the parameter (i.e., `.catch((error) => {`) where the
promise chain ending with `}).catch(error => {` appears in
src/scheduler/jobManager.ts; apply the same fix to any other `.catch(error =>
...)` occurrences in that file to satisfy the linter.

---

Nitpick comments:
In `@src/scheduler/jobManager.ts`:
- Around line 284-293: The executeReboot function currently returns void and
swallows errors inside an internal Promise/callback; change executeReboot to an
async function that returns a Promise which resolves when the reboot command
succeeds and rejects on failures (e.g., wrap child_process.exec in a Promise or
use util.promisify(exec)), ensure caught import errors are re-thrown instead of
only console.error so failures propagate, and then update the two job handler
functions that call executeReboot to await or return its Promise so the
scheduler can observe and report errors from executeReboot.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9158835 and f4ba35f.

📒 Files selected for processing (7)
  • instrumentation.ts
  • src/db/migrations/0002_flaky_doctor_strange.sql
  • src/db/migrations/meta/0002_snapshot.json
  • src/db/migrations/meta/_journal.json
  • src/db/schema.ts
  • src/hardware/deviceStateSync.ts
  • src/scheduler/jobManager.ts

Comment thread src/hardware/deviceStateSync.ts Outdated
Comment thread src/scheduler/jobManager.ts Outdated
exec('systemctl reboot', (error) => {
if (error) console.error('Reboot command failed:', error.message)
})
}).catch(error => {

⚠️ Potential issue | 🔴 Critical

Fix arrow-parens lint error — this is breaking CI.

.catch(error => { must have parentheses around the argument.

🛠️ Proposed fix
-    }).catch(error => {
+    }).catch((error) => {
🧰 Tools
🪛 ESLint

[error] 290-290: Expected parentheses around arrow function argument having a body with curly braces.

(@stylistic/arrow-parens)

🪛 GitHub Actions: CI

[error] 290-290: ESLint: Expected parentheses around arrow function argument having a body with curly braces. (@stylistic/arrow-parens)

🪛 GitHub Check: Lint

[failure] 290-290:
Expected parentheses around arrow function argument having a body with curly braces

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scheduler/jobManager.ts` at line 290, The CI lint failure is caused by
the arrow-parens rule on the promise rejection handler in jobManager: change the
`.catch(error => {` usage to include parentheses around the parameter (i.e.,
`.catch((error) => {`) where the promise chain ending with `}).catch(error => {`
appears in src/scheduler/jobManager.ts; apply the same fix to any other
`.catch(error => ...)` occurrences in that file to satisfy the linter.

ng added 2 commits February 23, 2026 22:36
… sleep records

- Wait for valid system clock before starting scheduler; pod can boot
  with RTC reset to ~2010 before NTP syncs, causing cron jobs to fire
  at incorrect times

- Fix daily-reboot job: was a no-op log statement, now executes
  systemctl reboot and propagates errors through the scheduler

- Auto-schedule a pre-prime reboot 1hr before daily priming when
  primePodDaily is enabled; ensures clean device state before the
  prime cycle (matches free-sleep behaviour)

- Add poweredOnAt column to device_state; DeviceStateSync now wraps
  SELECT + UPSERT in a transaction to prevent TOCTOU races on rapid
  sync() calls, detects OFF→ON and ON→OFF power transitions, and
  creates stub sleep records on power-off (sessions < 5min ignored;
  presentIntervals left null for biometrics modules to backfill)
Moves time-series health data (vitals, sleep_records, movement) into a
dedicated biometrics.db, decoupled from the main config database.

- biometrics-schema.ts defines the public contract: any module that
  writes to these tables works automatically with the existing tRPC API
- Second Drizzle connection with WAL mode and 5s busy_timeout to safely
  support concurrent module writes
- db:biometrics:generate/migrate/studio scripts for independent schema
  management; migration 0003 drops moved tables from sleepypod.db

Bundled Python modules (modules/):
  - piezo-processor: tails /persistent/*.RAW, decodes CBOR piezo data,
    bandpass filters + HeartPy peak detection → HR/HRV/breathing rate
    written to vitals ~every 60s per side
  - sleep-detector: reads capacitance presence records, tracks session
    boundaries and bed exits → sleep_records + movement

Both modules run as isolated systemd services in Python venvs with
resource limits and health reporting to system_health table.

Install script updated: biometrics DB migration, Python venv setup,
module installation and service registration.

ModuleManifest interface (src/modules/types.ts) documents the contract
for third-party module authors. ADR 0012 records the decision.
ng force-pushed the feat/system-clock-and-scheduling branch from 4e350fe to 64d9706 on February 24, 2026 at 06:36
ng changed the title from "feat(scheduler): system clock validation, reboot fix, and power-based sleep records" to "feat(scheduler,biometrics): clock validation, sleep records, and plugin module system" on Feb 24, 2026
Merge shallow, one-decision ADRs into coherent documents:
- Combine 0003 (TypeScript/React) + 0009 (Lingui) into single core stack ADR
- Combine 0006 (ESLint) + 0007 (Vitest) + 0008 (conventional commits) + pnpm
  rationale from 0002/0011 into single developer tooling ADR
- Delete redundant ADRs: 0002, 0007, 0008, 0009, 0011

Update 0012 (biometrics module system): clarify WebSocket transport is not ruled
out for future use (e.g. modules on separate host, real-time UI push)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/db/schema.ts (1)

163-178: ⚠️ Potential issue | 🔴 Critical

Add data migration step to preserve biometrics tables before dropping from main schema.

Migration 0003_odd_gargoyle.sql drops sleep_records, vitals, and movement tables without preserving their data. These tables are recreated in biometrics-schema.ts in a separate biometrics.db, but no cross-database data migration step exists to copy rows before the DROP executes.

If these tables contain production data, the migration will cause permanent data loss on upgrade. Either add a migration step to copy data from sleepypod.db to biometrics.db before the DROP statements, or explicitly document that these are brand-new tables with no pre-existing data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/db/schema.ts` around lines 163 - 178, The migration 0003_odd_gargoyle.sql
currently drops sleep_records, vitals, and movement without preserving data;
before those DROP statements add a migration step that copies rows from
sleepypod.db into the new biometrics.db (the tables defined in
biometrics-schema.ts) — e.g. ATTACH the biometrics.db, create target tables if
missing, then INSERT INTO biometrics.<table> SELECT * FROM main.<table> for
sleep_records, vitals, and movement, verify row counts, and only then run the
DROP TABLE on the old main tables; alternatively, if these are guaranteed empty,
update the migration comment to state explicitly that
sleep_records/vitals/movement are brand-new with no existing data.
🧹 Nitpick comments (6)
src/scheduler/jobManager.ts (1)

285-298: Prefer static import and execFile over dynamic exec.

Two minor improvements worth considering:

  1. child_process is a Node.js built-in — dynamic await import(...) adds an unnecessary async step on every reboot invocation. A static import at the top of the file is idiomatic.
  2. exec spawns a shell (/bin/sh -c systemctl reboot). Since the command and its arguments are fixed strings with no user input, execFile is cleaner — no shell overhead, explicit argv list.
♻️ Proposed refactor

At the top of the file, add:

+import { execFile } from 'child_process'

Then replace executeReboot:

-  private async executeReboot(): Promise<void> {
-    const { exec } = await import('child_process')
-    return new Promise((resolve, reject) => {
-      exec('systemctl reboot', (error) => {
-        if (error) {
-          console.error('Reboot command failed:', error.message)
-          reject(error)
-        }
-        else {
-          resolve()
-        }
-      })
-    })
-  }
+  private executeReboot(): Promise<void> {
+    return new Promise((resolve, reject) => {
+      execFile('systemctl', ['reboot'], (error) => {
+        if (error) {
+          console.error('Reboot command failed:', error.message)
+          reject(error)
+        }
+        else {
+          resolve()
+        }
+      })
+    })
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/scheduler/jobManager.ts` around lines 285 - 298, The executeReboot
function currently uses a dynamic import and child_process.exec; change it to a
static top-level import of execFile from 'child_process' and update the
executeReboot implementation to call execFile('systemctl', ['reboot'], ...) (no
shell) and return a Promise that resolves/rejects as before; remove the await
import(...) in executeReboot and keep the existing error handling
(console.error('Reboot command failed:', error.message) and reject(error)) so
the function signature and behavior remain the same.
src/modules/types.ts (1)

21-33: Tighten writes/provides typing to the biometrics contract.

This prevents typos and keeps module manifests aligned to the schema.

♻️ Suggested typing refinement
+export type BiometricsTable = 'vitals' | 'sleep_records' | 'movement'
+export type BiometricsField = `${BiometricsTable}.${string}`
+
 export interface ModuleManifest {
   /**
    * Specific fields this module writes, in "table.column" format.
    * Used to tell users what data they can expect from this module.
    * e.g. ["vitals.heartRate", "vitals.hrv", "vitals.breathingRate"]
    */
-  provides: string[]
+  provides: BiometricsField[]
 
   /**
    * Top-level table names this module writes to.
    * Must be a subset of ["vitals", "sleep_records", "movement"].
    * e.g. ["vitals"]
    */
-  writes: string[]
+  writes: BiometricsTable[]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/types.ts` around lines 21 - 33, Tighten the Module manifest types
by replacing the loose string[] for the provides and writes properties with
explicit unions derived from the biometrics contract: define (or import) a
BiometricsTableName union like 'vitals' | 'sleep_records' | 'movement' and a
BiometricsProvidedField union of the allowed "table.column" literals (e.g.
'vitals.heartRate', 'vitals.hrv', etc.), then change the interface properties to
writes: BiometricsTableName[] and provides: BiometricsProvidedField[] (or
readonly arrays) so ModuleManifest.provides and ModuleManifest.writes are
constrained to the schema.
src/server/routers/biometrics.ts (1)

57-69: Extract the repeated date-range .refine() into a shared helper.

The identical .refine() block (validate startDate ≤ endDate, message, path) appears verbatim three times (getSleepRecords, getVitals, getMovement). A single shared schema or schema-builder would remove the duplication.

♻️ Proposed refactor
+// at the top of the file, alongside other imports
+const dateRangeSchema = z
+  .object({
+    startDate: z.date().optional(),
+    endDate: z.date().optional(),
+  })
+  .refine(
+    (data) => {
+      if (data.startDate && data.endDate) {
+        return validateDateRange(data.startDate, data.endDate)
+      }
+      return true
+    },
+    {
+      message: 'startDate must be before or equal to endDate',
+      path: ['endDate'],
+    }
+  )

Then each procedure input becomes:

-    z
-      .object({
-        side: sideSchema,
-        startDate: z.date().optional(),
-        endDate: z.date().optional(),
-        limit: z.number().int().min(1).max(100).default(30),
-      })
-      .strict()
-      .refine(
-        (data) => { ... },
-        { message: '...', path: ['endDate'] }
-      )
+    dateRangeSchema
+      .extend({ side: sideSchema, limit: z.number().int().min(1).max(100).default(30) })
+      .strict()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/routers/biometrics.ts` around lines 57 - 69, The three identical
.refine blocks enforcing startDate ≤ endDate should be extracted into a shared
helper schema to remove duplication: create a reusable Zod schema or
schema-builder function (e.g., buildDateRangeSchema or dateRangeRefine) that
applies the .refine using validateDateRange with the same message/path, then
replace the inline .refine in getSleepRecords, getVitals, and getMovement inputs
to use that shared schema (or compose it into each procedure's input) so
validation logic is centralized and reused.
src/hardware/deviceStateSync.ts (1)

90-95: Consider a retry/queue for failed sleep-record writes.

Since poweredOnAt is cleared before the biometrics insert, any insert failure permanently drops the sleep record. A lightweight retry queue or a persisted “pending sleep record” marker would make this more robust.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/hardware/deviceStateSync.ts` around lines 90 - 95, The current flow
clears poweredOnAt and then calls createSleepRecord(side, enteredBedAt,
leftBedAt) so a failure permanently drops the sleep record; modify the logic in
deviceStateSync to persist a “pending sleep record” marker (e.g., on the
deviceState row or a lightweight local queue) before clearing poweredOnAt and/or
retry the createSleepRecord operation on failure: ensure createSleepRecord
failures enqueue a retry (in-memory durable queue or DB flag) and that a
background worker or next startup reads that queue/flag and retries the
biometricsDb insert until success, and clear the pending marker only after a
successful insert; reference createSleepRecord, poweredOnAt, deviceState row and
biometricsDb so the pending marker and retry handling are added around the
existing insert call.
src/db/biometrics-schema.ts (1)

27-38: Use a DB‑level default for created_at to safeguard the public contract.

The schema is documented as a public contract for external modules (Python, Rust, etc.) that write directly to the database. Currently, createdAt relies only on $defaultFn() for the default value, which means any writer that omits the field will fail the NOT NULL constraint. A DB-level default ensures safety regardless of how the row is inserted.

Apply the following change to the schema, then regenerate the migration:

Schema update
-import { index, integer, real, sqliteTable, text } from 'drizzle-orm/sqlite-core'
+import { index, integer, real, sqliteTable, text } from 'drizzle-orm/sqlite-core'
+import { sql } from 'drizzle-orm'
 
 export const sleepRecords = sqliteTable('sleep_records', {
   id: integer('id').primaryKey({ autoIncrement: true }),
   side: text('side', { enum: ['left', 'right'] }).notNull(),
   enteredBedAt: integer('entered_bed_at', { mode: 'timestamp' }).notNull(),
   leftBedAt: integer('left_bed_at', { mode: 'timestamp' }).notNull(),
   sleepDurationSeconds: integer('sleep_duration_seconds').notNull(),
   timesExitedBed: integer('times_exited_bed').notNull().default(0),
   presentIntervals: text('present_intervals', { mode: 'json' }), // [[start, end], ...]
   notPresentIntervals: text('not_present_intervals', { mode: 'json' }), // [[start, end], ...]
   createdAt: integer('created_at', { mode: 'timestamp' })
     .notNull()
-    .$defaultFn(() => new Date()),
+    .default(sql`(unixepoch())`),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/db/biometrics-schema.ts` around lines 27 - 38, Replace the client-side
default on the sleepRecords.createdAt column (currently using .$defaultFn(() =>
new Date())) with a SQLite DB-level default so inserts from external writers
won't violate the NOT NULL constraint; update the createdAt column definition in
the sqliteTable('sleep_records', ...) schema to use a SQLite DEFAULT that
produces the timestamp (e.g., CURRENT_TIMESTAMP or strftime-based integer
seconds) and then regenerate the migration so the DB schema is updated.
modules/sleep-detector/main.py (1)

69-71: Silence the unused frame parameter.
Ruff flags the unused arg; renaming to _frame keeps intent without noise.

💡 Suggested tweak
-def _on_signal(signum, frame):
+def _on_signal(signum, _frame):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/sleep-detector/main.py` around lines 69 - 71, The signal handler
_on_signal currently defines an unused parameter named frame which triggers
linter warnings; rename the parameter to _frame (def _on_signal(signum,
_frame):) to indicate it's intentionally unused and silence Ruff, keeping the
same behavior of logging the signum and setting _shutdown via log.info and
_shutdown.set() inside the _on_signal function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/adr/0012-biometrics-module-system.md`:
- Around line 24-32: The fenced code block containing the ASCII architecture
diagram (the block starting with the line "/persistent/*.RAW" and including
lines like "[module process]" and "[biometrics.db]") needs a language identifier
to satisfy markdownlint MD040; change the opening fence from ``` to ```text
(i.e., add the word "text" right after the three backticks) and keep the closing
fence as-is so the diagram is treated as plain text in rendered Markdown.
- Around line 139-151: Change the fenced code block that contains the modules/
tree to include a language identifier (e.g., replace the opening ``` with
```text) so the tree block is marked as plain text for markdownlint; leave the
closing fence as ``` and ensure the rest of the block content is unchanged.

In `@drizzle.biometrics.config.ts`:
- Around line 3-9: The default DB URL in the Drizzle config (inside the
defineConfig call) currently falls back to 'file:./biometrics.db' when
BIOMETRICS_DATABASE_URL is unset; change that fallback to match runtime by using
'file:./biometrics.dev.db' so tooling and runtime point at the same database
(update the dbCredentials.url value referenced by BIOMETRICS_DATABASE_URL in
drizzle.biometrics.config.ts).

In `@modules/piezo-processor/main.py`:
- Around line 224-255: The code uses the 30s buffer (hr_arr) for both HR and HRV
because compute_heart_rate_hrv is called only with hr_arr; fix by computing HR
from the 30s buffer (np.array(self._hr_buf)) and computing HRV from the
five‑minute buffer (np.array(self._hrv_buf)) so the HRV_WINDOW_S-backed deque
(_hrv_buf) is honored; update the call(s) around
compute_heart_rate_hrv/compute_heart_rate_hrv(...)/compute_heart_rate_hrv usage
in _maybe_write (and ensure ingest still appends samples to _hrv_buf) so
write_vitals and the log use the HR from the short buffer and HRV from the long
buffer.

In `@modules/sleep-detector/main.py`:
- Around line 203-224: The session close can append absent intervals with end <
start because _close_session(left_ts) is called with _last_present_ts while
_interval_start may already point to the first absent sample; update
_close_session to guard and fix this by ensuring the closing timestamp is not
earlier than _interval_start (e.g., compute effective_left = max(left_ts,
self._interval_start) when _interval_start is not None) or alternatively adjust
the caller to pass self._interval_start; then before appending to
_absent_intervals/_present_intervals check that effective_left >
self._interval_start to avoid emitting negative ranges, referencing
_close_session, _interval_start, _last_present_ts, _was_present,
_present_intervals and _absent_intervals.

In `@scripts/install`:
- Around line 239-266: The install_module function currently calls python3 -m
venv without guarding for missing python3-venv; add a pre-check before creating
the venv (e.g., run a quick python3 -m venv --help or probe for the venv module)
and if it fails, emit a clear error and either install the system package (on
Debian/Ubuntu) or abort the installer with a descriptive message; update the
logic around the python3 -m venv invocation in install_module and the
surrounding installer flow that references MODULES_SRC/MODULES_DEST and the
python3 calls to handle the missing-venv case gracefully.

In `@src/db/index.ts`:
- Around line 22-28: The JSDoc and console message for closeDatabase() are
misleading because the function only closes the main sqlite connection; update
the JSDoc to state "Close main database connection" (not "all"), and change the
console.log to a singular, specific message like "Closing main sqlite database
connection..." to match behavior; alternatively, if the intent was to close both
databases, modify closeDatabase() to also call closeBiometricsDatabase() and
update instrumentation.ts usage accordingly so both connections are closed.
Ensure references to sqlite, closeDatabase, and closeBiometricsDatabase are
consistent.

In `@src/db/migrations/0003_odd_gargoyle.sql`:
- Around line 1-3: This migration currently drops the movement, sleep_records,
and vitals tables directly; modify it to first copy all rows from movement,
sleep_records, and vitals into the new biometrics database (e.g., insert/select
to biometrics.db tables) inside a safe transaction or staged job, verify row
counts/checksums after the copy, and only then drop the originals using DROP
TABLE IF EXISTS `movement`, `sleep_records`, `vitals`; ensure the migration
script (0003_odd_gargoyle.sql) includes the export/import steps (or calls a
backfill procedure) and a verification step so the DROP statements run only
after successful backfill.

---

Outside diff comments:
In `@src/db/schema.ts`:
- Around line 163-178: The migration 0003_odd_gargoyle.sql currently drops
sleep_records, vitals, and movement without preserving data; before those DROP
statements add a migration step that copies rows from sleepypod.db into the new
biometrics.db (the tables defined in biometrics-schema.ts) — e.g. ATTACH the
biometrics.db, create target tables if missing, then INSERT INTO
biometrics.<table> SELECT * FROM main.<table> for sleep_records, vitals, and
movement, verify row counts, and only then run the DROP TABLE on the old main
tables; alternatively, if these are guaranteed empty, update the migration
comment to state explicitly that sleep_records/vitals/movement are brand-new
with no existing data.

---

Nitpick comments:
In `@modules/sleep-detector/main.py`:
- Around line 69-71: The signal handler _on_signal currently defines an unused
parameter named frame which triggers linter warnings; rename the parameter to
_frame (def _on_signal(signum, _frame):) to indicate it's intentionally unused
and silence Ruff, keeping the same behavior of logging the signum and setting
_shutdown via log.info and _shutdown.set() inside the _on_signal function.

In `@src/db/biometrics-schema.ts`:
- Around line 27-38: Replace the client-side default on the
sleepRecords.createdAt column (currently using .$defaultFn(() => new Date()))
with a SQLite DB-level default so inserts from external writers won't violate
the NOT NULL constraint; update the createdAt column definition in the
sqliteTable('sleep_records', ...) schema to use a SQLite DEFAULT that produces
the timestamp (e.g., CURRENT_TIMESTAMP or strftime-based integer seconds) and
then regenerate the migration so the DB schema is updated.

In `@src/hardware/deviceStateSync.ts`:
- Around line 90-95: The current flow clears poweredOnAt and then calls
createSleepRecord(side, enteredBedAt, leftBedAt) so a failure permanently drops
the sleep record; modify the logic in deviceStateSync to persist a “pending
sleep record” marker (e.g., on the deviceState row or a lightweight local queue)
before clearing poweredOnAt and/or retry the createSleepRecord operation on
failure: ensure createSleepRecord failures enqueue a retry (a durable
queue or DB flag) and that a background worker or next startup reads that
queue/flag and retries the biometricsDb insert until success, and clear the
pending marker only after a successful insert; reference createSleepRecord,
poweredOnAt, deviceState row and biometricsDb so the pending marker and retry
handling are added around the existing insert call.

In `@src/modules/types.ts`:
- Around line 21-33: Tighten the Module manifest types by replacing the loose
string[] for the provides and writes properties with explicit unions derived
from the biometrics contract: define (or import) a BiometricsTableName union
like 'vitals' | 'sleep_records' | 'movement' and a BiometricsProvidedField union
of the allowed "table.column" literals (e.g. 'vitals.heartRate', 'vitals.hrv',
etc.), then change the interface properties to writes: BiometricsTableName[] and
provides: BiometricsProvidedField[] (or readonly arrays) so
ModuleManifest.provides and ModuleManifest.writes are constrained to the schema.

In `@src/scheduler/jobManager.ts`:
- Around line 285-298: The executeReboot function currently uses a dynamic
import and child_process.exec; change it to a static top-level import of
execFile from 'child_process' and update the executeReboot implementation to
call execFile('systemctl', ['reboot'], ...) (no shell) and return a Promise that
resolves/rejects as before; remove the await import(...) in executeReboot and
keep the existing error handling (console.error('Reboot command failed:',
error.message) and reject(error)) so the function signature and behavior remain
the same.

In `@src/server/routers/biometrics.ts`:
- Around line 57-69: The three identical .refine blocks enforcing startDate ≤
endDate should be extracted into a shared helper schema to remove duplication:
create a reusable Zod schema or schema-builder function (e.g.,
buildDateRangeSchema or dateRangeRefine) that applies the .refine using
validateDateRange with the same message/path, then replace the inline .refine in
getSleepRecords, getVitals, and getMovement inputs to use that shared schema (or
compose it into each procedure's input) so validation logic is centralized and
reused.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f4ba35f and 81e46c9.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (36)
  • docs/adr/0002-yarn-package-manager.md
  • docs/adr/0003-core-stack.md
  • docs/adr/0006-developer-tooling.md
  • docs/adr/0007-vitest.md
  • docs/adr/0008-conventional-commits.md
  • docs/adr/0009-linguijs.md
  • docs/adr/0011-switch-to-pnpm.md
  • docs/adr/0012-biometrics-module-system.md
  • drizzle.biometrics.config.ts
  • instrumentation.ts
  • modules/piezo-processor/main.py
  • modules/piezo-processor/manifest.json
  • modules/piezo-processor/requirements.txt
  • modules/piezo-processor/sleepypod-piezo-processor.service
  • modules/sleep-detector/main.py
  • modules/sleep-detector/manifest.json
  • modules/sleep-detector/requirements.txt
  • modules/sleep-detector/sleepypod-sleep-detector.service
  • package.json
  • scripts/install
  • src/db/biometrics-migrations/0000_noisy_onslaught.sql
  • src/db/biometrics-migrations/meta/0000_snapshot.json
  • src/db/biometrics-migrations/meta/_journal.json
  • src/db/biometrics-schema.ts
  • src/db/biometrics.ts
  • src/db/index.ts
  • src/db/migrations/0002_flaky_doctor_strange.sql
  • src/db/migrations/0003_odd_gargoyle.sql
  • src/db/migrations/meta/0002_snapshot.json
  • src/db/migrations/meta/0003_snapshot.json
  • src/db/migrations/meta/_journal.json
  • src/db/schema.ts
  • src/hardware/deviceStateSync.ts
  • src/modules/types.ts
  • src/scheduler/jobManager.ts
  • src/server/routers/biometrics.ts
💤 Files with no reviewable changes (5)
  • docs/adr/0008-conventional-commits.md
  • docs/adr/0011-switch-to-pnpm.md
  • docs/adr/0007-vitest.md
  • docs/adr/0002-yarn-package-manager.md
  • docs/adr/0009-linguijs.md
✅ Files skipped from review due to trivial changes (3)
  • docs/adr/0003-core-stack.md
  • src/db/migrations/meta/0003_snapshot.json
  • modules/piezo-processor/sleepypod-piezo-processor.service
🚧 Files skipped from review as they are similar to previous changes (1)
  • instrumentation.ts

Comment thread docs/adr/0012-biometrics-module-system.md Outdated
Comment thread docs/adr/0012-biometrics-module-system.md Outdated
Comment thread drizzle.biometrics.config.ts
Comment thread modules/piezo-processor/main.py
Comment thread modules/sleep-detector/main.py
Comment thread scripts/install
Comment thread src/db/index.ts
Comment thread src/db/migrations/0003_odd_gargoyle.sql Outdated
Comment on lines +1 to +3
DROP TABLE `movement`;--> statement-breakpoint
DROP TABLE `sleep_records`;--> statement-breakpoint
DROP TABLE `vitals`; No newline at end of file

⚠️ Potential issue | 🟠 Major

Dropping biometrics tables without data migration risks data loss.

This migration deletes movement, sleep_records, and vitals from the primary DB without any corresponding data migration into biometrics.db. Upgrades will lose historical biometrics unless the data is copied first. Consider a data-migration step (copy rows into the biometrics DB) and only drop after a successful backfill/verification. If you want a defensive guard, add IF EXISTS to avoid hard failures on unexpected states.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/db/migrations/0003_odd_gargoyle.sql` around lines 1 - 3, This migration
currently drops the movement, sleep_records, and vitals tables directly; modify
it to first copy all rows from movement, sleep_records, and vitals into the new
biometrics database (e.g., insert/select to biometrics.db tables) inside a safe
transaction or staged job, verify row counts/checksums after the copy, and only
then drop the originals using DROP TABLE IF EXISTS `movement`, `sleep_records`,
`vitals`; ensure the migration script (0003_odd_gargoyle.sql) includes the
export/import steps (or calls a backfill procedure) and a verification step so
the DROP statements run only after successful backfill.
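As a hedged illustration of the copy-verify-drop flow suggested above, an ATTACH-based backfill might look like the sketch below. The table names come from the migration under review; the function name, connection handling, and the assumption that the target tables already exist and start empty are all mine.

```python
import sqlite3

BIOMETRICS_TABLES = ("movement", "sleep_records", "vitals")


def backfill_then_drop(main_db_path: str, biometrics_db_path: str) -> None:
    """Copy rows into the attached biometrics DB, verify counts, then drop.

    Sketch only: assumes the target tables already exist (created by the
    biometrics migrations) and start empty.
    """
    conn = sqlite3.connect(main_db_path)
    try:
        # ATTACH must run outside a transaction; the filename can be bound.
        conn.execute("ATTACH DATABASE ? AS biometrics", (biometrics_db_path,))
        with conn:  # one transaction: copy, verify, and drop together
            for table in BIOMETRICS_TABLES:
                conn.execute(
                    f"INSERT INTO biometrics.{table} SELECT * FROM main.{table}"
                )
                (src,) = conn.execute(
                    f"SELECT COUNT(*) FROM main.{table}"
                ).fetchone()
                (dst,) = conn.execute(
                    f"SELECT COUNT(*) FROM biometrics.{table}"
                ).fetchone()
                if src != dst:
                    raise RuntimeError(f"{table}: copied {dst} of {src} rows")
                # Drop only after the copy for this table verified cleanly.
                conn.execute(f"DROP TABLE IF EXISTS main.{table}")
    finally:
        conn.close()
```

Because the copy, verification, and drops share one transaction, a mismatch rolls everything back and leaves the original tables untouched.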

ng added 3 commits February 23, 2026 23:10
…ases

piezo-processor: HRV was computed from the 30s HR buffer instead of the
dedicated 5-minute HRV buffer (_hrv_buf). Fix by splitting the two
compute_heart_rate_hrv calls so HR uses _hr_buf and HRV uses _hrv_buf.

sleep-detector: when a session times out due to absence, _interval_start
already points to the first absent sample so closing at _last_present_ts
could produce absent intervals with end < start. Close at _interval_start
when available. Also guard the final absent-interval append so it only
emits when left_ts > interval_start.
- src/db/index.ts: fix misleading JSDoc/log for closeDatabase() — it only
  closes the main DB; note closeBiometricsDatabase() must be called separately
- src/db/migrations/0003: add IF EXISTS to DROP TABLE statements (defensive)
- drizzle.biometrics.config.ts: align fallback URL to biometrics.dev.db to
  match runtime default (was biometrics.db — mismatched tooling vs runtime)
- scripts/install: guard against missing python3-venv on Debian/Ubuntu minimal
  images by checking and installing python3-venv before python3 -m venv
- docs/adr/0012: add language identifiers to fenced code blocks (markdownlint MD040)
Greenfield project — no live deployments. Replace 4 incremental migrations
(create tables, recreate side_settings, add powered_on_at, drop biometrics
tables) with one clean migration reflecting the current schema.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
scripts/install (2)

190-217: ⚠️ Potential issue | 🟠 Major

BIOMETRICS_DATABASE_URL is missing from the systemd service environment.

The .env file is configured with BIOMETRICS_DATABASE_URL, but the systemd unit at line 202-203 only sets DATABASE_URL and DAC_SOCK_PATH. If the Next.js app doesn't load .env when started via systemd (e.g., when NODE_ENV=production), the biometrics DB connection will fall back to a hardcoded default, which may differ from the installed path.

Proposed fix
 Environment="DATABASE_URL=file:$DATA_DIR/sleepypod.db"
+Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db"
 Environment="DAC_SOCK_PATH=$DAC_SOCK_PATH"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/install` around lines 190 - 217, The systemd unit for
sleepypod.service is missing the BIOMETRICS_DATABASE_URL environment, so add an
Environment entry for BIOMETRICS_DATABASE_URL in the [Service] block (alongside
Environment="DATABASE_URL=..." and Environment="DAC_SOCK_PATH=...") so systemd
receives the same biometrics DB path the app expects; update the template that
writes sleepypod.service (the heredoc that generates
/etc/systemd/system/sleepypod.service) to include
Environment="BIOMETRICS_DATABASE_URL=$BIOMETRICS_DATABASE_URL" or
Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db" as
appropriate so the service runs with the correct BIOMETRICS_DATABASE_URL.

340-378: ⚠️ Potential issue | 🟠 Major

sp-update does not handle biometrics DB backup, migration, or module updates.

The update script backs up sleepypod.db (line 335-338), runs pnpm db:generate and pnpm db:migrate (lines 348-349), but does not:

  1. Back up biometrics.db
  2. Run pnpm db:biometrics:generate and pnpm db:biometrics:migrate
  3. Re-install or update biometrics modules (piezo-processor, sleep-detector)

The rollback path also restores only sleepypod.db. A schema update to the biometrics tables would be silently skipped, and a rollback after a failed update would leave biometrics.db in an inconsistent state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/install` around lines 340 - 378, The update script in scripts/install
currently only handles sleepypod.db and misses biometrics DB and modules; modify
the update path to also backup "$DATA_DIR/biometrics.db" before pulling, run
"pnpm db:biometrics:generate" and "pnpm db:biometrics:migrate" after the other
DB steps, and ensure biometrics native modules are rebuilt/updated (e.g., run
pnpm install/rebuild for piezo-processor and sleep-detector or execute their
build steps like (cd node_modules/piezo-processor && npm run build-release) and
same for sleep-detector) before starting the service; likewise extend the
rollback path to restore "$DATA_DIR/biometrics.db.bak" (like sleepypod.db) and
re-run the module install/build steps and pnpm build so a failed update doesn’t
leave biometrics.db or native modules inconsistent (reference variables
PREV_COMMIT and DATA_DIR and commands pnpm db:biometrics:generate, pnpm
db:biometrics:migrate, piezo-processor, sleep-detector).
♻️ Duplicate comments (2)
modules/piezo-processor/main.py (2)

97-113: ⚠️ Potential issue | 🔴 Critical

Same ON CONFLICT(component) issue as in sleep-detector.

The system_health table lacks a UNIQUE constraint on component, so the upsert at lines 103-108 will fail at runtime. See the related comment on the ADR and sleep-detector module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/piezo-processor/main.py` around lines 97 - 113, The upsert in
report_health uses "ON CONFLICT(component)" but the system_health table has no
UNIQUE constraint on component, so the query will fail; fix by either ensuring
the schema defines component as UNIQUE (or PRIMARY KEY) in the system_health
table creation/migration, or change the write logic in report_health to use a
supported pattern (e.g., INSERT OR REPLACE or update-if-exists via SELECT then
UPDATE) that matches the existing schema; update the schema migration or the
report_health function accordingly and keep references to the ON
CONFLICT(component) clause and the system_health table to locate the change.

293-300: ⚠️ Potential issue | 🟡 Minor

Same exception-path report_health overwrite and file handle leak as in sleep-detector.

On exception, report_health("down", str(e)) at line 295 is overwritten by report_health("down", "piezo-processor stopped") at line 299. Also, RawFileFollower._file is not closed on shutdown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/piezo-processor/main.py` around lines 293 - 300, The exception-path
currently overwrites the original error health message and leaks an open file
handle; capture the exception message into a variable (e.g. err_msg) in the
except block and call report_health("down", err_msg) after cleanup (or ensure
you don't overwrite it in the finally), and in the finally block ensure db_conn
is closed only if not None and close any open RawFileFollower._file (check the
RawFileFollower instance(s) and call its ._file.close() if present and not
already closed) so the original error reason is preserved and file handles are
released; update references in main loop exception handling and the finally
cleanup to use these names (report_health, db_conn.close,
RawFileFollower._file).
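The overwrite-on-shutdown pattern flagged for both modules can be avoided by recording the failure reason once and reporting health from a single place. A minimal sketch follows; the function and parameter names are illustrative, not the modules' actual code — in the real modules, the cleanup callable would close db_conn and the open RawFileFollower file handle.

```python
def run_module(work, report_health, cleanup) -> None:
    """Run a module loop, always cleaning up, without losing the real error.

    reason defaults to the generic shutdown message and is replaced by the
    exception text on failure, so the final report never overwrites a more
    specific error with "stopped".
    """
    reason = "stopped"  # reported only on a clean shutdown
    try:
        work()
    except Exception as exc:
        reason = str(exc)  # preserve the original failure for the final report
    finally:
        cleanup()  # close the DB connection and the RAW file handle
        report_health("down", reason)
```

The key point is that there is exactly one terminal report_health call, so the generic message can never clobber a real error.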
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/adr/0012-biometrics-module-system.md`:
- Around line 153-157: The system_health table lacks a UNIQUE constraint on
component but the modules (sleep-detector, piezo-processor) use ON
CONFLICT(component) DO UPDATE, which will fail; fix by updating the schema in
migration 0000_fine_frightful_four.sql to add a UNIQUE constraint or index on
system_health.component (or alternatively change the modules' health-write logic
to perform a SELECT for the component and then conditional INSERT or UPDATE
instead of using ON CONFLICT); reference the system_health table and the ON
CONFLICT(component) DO UPDATE usage when making the change so both writers work
without runtime errors.

In `@modules/piezo-processor/main.py`:
- Around line 161-173: In compute_breathing_rate, scipy.signal.welch's nperseg
can become a float when fs is passed as float (from fs * 10), causing a type
error; update the call to welch (in compute_breathing_rate) to pass an integer
nperseg by wrapping the min(...) expression with int(...) (or compute nperseg =
max(1, int(min(len(cleaned), fs * 10))) and pass that) so nperseg is always an
int and at least 1.

In `@modules/sleep-detector/main.py`:
- Around line 120-135: The upsert uses ON CONFLICT(component) but the
system_health table has no UNIQUE constraint on component, so this will fail;
update the report_health function in modules/sleep-detector/main.py (and mirror
the same change in modules/piezo-processor/main.py) to perform a safe upsert
without relying on a UNIQUE constraint: open the DB connection, run an UPDATE
system_health SET status=?, message=?, last_checked=? WHERE
component='sleep-detector', check the number of rows affected, and if zero run a
plain INSERT INTO system_health (component, status, message, last_checked)
VALUES ('sleep-detector', ?, ?, ?); keep the existing exception handling and
close the connection.
- Around line 261-293: RawFileFollower.read_records currently exits its loop
when _shutdown is set but never closes an open self._file; add cleanup so any
open file is closed when read_records returns (e.g., wrap the while loop in
try/finally or ensure a final check after the loop) and also consider providing
an explicit close() method on RawFileFollower that closes self._file and resets
self._path so callers can trigger orderly shutdown; reference RawFileFollower,
read_records, self._file, and _shutdown when applying the fix.
- Around line 175-192: The update method currently appends a zero-length absent
interval when the first present=True sample starts a new session; inside update
(function update) after you set self._session_start and self._interval_start for
a new session, set self._was_present = True (and optionally
self._last_present_ts = ts) so the subsequent check that appends to
self._absent_intervals is skipped, or alternatively guard the absent-interval
append with a condition that it only runs if the session was already started
(e.g., skip when you just opened the session).
- Around line 322-329: The finally block currently always calls
report_health("down", "sleep-detector stopped") which overwrites the real error
reported by report_health("down", str(e)) in the exception handler; update the
shutdown logic around report_health so the final call only reports the generic
"sleep-detector stopped" on a clean/normal shutdown and preserves the original
error message on error paths—e.g., introduce a local flag or variable (e.g.,
shutdown_reason or had_exception) set in the except block and checked in the
finally block, or set the exit reason before sys.exit, and only call
report_health("down", "sleep-detector stopped") from finally when no exception
occurred; touch symbols: report_health, sys.exit, and the try/except/finally
surrounding db_conn.close.

---

Outside diff comments:
In `@scripts/install`:
- Around line 190-217: The systemd unit for sleepypod.service is missing the
BIOMETRICS_DATABASE_URL environment, so add an Environment entry for
BIOMETRICS_DATABASE_URL in the [Service] block (alongside
Environment="DATABASE_URL=..." and Environment="DAC_SOCK_PATH=...") so systemd
receives the same biometrics DB path the app expects; update the template that
writes sleepypod.service (the heredoc that generates
/etc/systemd/system/sleepypod.service) to include
Environment="BIOMETRICS_DATABASE_URL=$BIOMETRICS_DATABASE_URL" or
Environment="BIOMETRICS_DATABASE_URL=file:$DATA_DIR/biometrics.db" as
appropriate so the service runs with the correct BIOMETRICS_DATABASE_URL.
- Around line 340-378: The update script in scripts/install currently only
handles sleepypod.db and misses biometrics DB and modules; modify the update
path to also backup "$DATA_DIR/biometrics.db" before pulling, run "pnpm
db:biometrics:generate" and "pnpm db:biometrics:migrate" after the other DB
steps, and ensure biometrics native modules are rebuilt/updated (e.g., run pnpm
install/rebuild for piezo-processor and sleep-detector or execute their build
steps like (cd node_modules/piezo-processor && npm run build-release) and same
for sleep-detector) before starting the service; likewise extend the rollback
path to restore "$DATA_DIR/biometrics.db.bak" (like sleepypod.db) and re-run the
module install/build steps and pnpm build so a failed update doesn’t leave
biometrics.db or native modules inconsistent (reference variables PREV_COMMIT
and DATA_DIR and commands pnpm db:biometrics:generate, pnpm
db:biometrics:migrate, piezo-processor, sleep-detector).

---

Duplicate comments:
In `@modules/piezo-processor/main.py`:
- Around line 97-113: The upsert in report_health uses "ON CONFLICT(component)"
but the system_health table has no UNIQUE constraint on component, so the query
will fail; fix by either ensuring the schema defines component as UNIQUE (or
PRIMARY KEY) in the system_health table creation/migration, or change the write
logic in report_health to use a supported pattern (e.g., INSERT OR REPLACE or
update-if-exists via SELECT then UPDATE) that matches the existing schema;
update the schema migration or the report_health function accordingly and keep
references to the ON CONFLICT(component) clause and the system_health table to
locate the change.
- Around line 293-300: The exception-path currently overwrites the original
error health message and leaks an open file handle; capture the exception
message into a variable (e.g. err_msg) in the except block and call
report_health("down", err_msg) after cleanup (or ensure you don't overwrite it
in the finally), and in the finally block ensure db_conn is closed only if not
None and close any open RawFileFollower._file (check the RawFileFollower
instance(s) and call its ._file.close() if present and not already closed) so
the original error reason is preserved and file handles are released; update
references in main loop exception handling and the finally cleanup to use these
names (report_health, db_conn.close, RawFileFollower._file).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81e46c9 and 4ac7016.

📒 Files selected for processing (11)
  • docs/adr/0012-biometrics-module-system.md
  • drizzle.biometrics.config.ts
  • modules/piezo-processor/main.py
  • modules/sleep-detector/main.py
  • scripts/install
  • src/db/index.ts
  • src/db/migrations/0000_fine_frightful_four.sql
  • src/db/migrations/0001_little_the_twelve.sql
  • src/db/migrations/meta/0000_snapshot.json
  • src/db/migrations/meta/0001_snapshot.json
  • src/db/migrations/meta/_journal.json
💤 Files with no reviewable changes (2)
  • src/db/migrations/meta/0001_snapshot.json
  • src/db/migrations/0001_little_the_twelve.sql
🚧 Files skipped from review as they are similar to previous changes (2)
  • drizzle.biometrics.config.ts
  • src/db/migrations/meta/_journal.json

Comment on lines +153 to +157
Each module:
1. Tails `/persistent/*.RAW` for new CBOR sensor data
2. Processes its relevant signal type
3. Writes results to `biometrics.db` using transactions
4. Writes its health status to `system_health` table in `sleepypod.db`

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify system_health schema for UNIQUE constraint on component
echo "=== Migration SQL ==="
fd "0000_fine_frightful_four.sql" --exec cat {}

echo ""
echo "=== Schema definition ==="
ast-grep --pattern 'export const systemHealth = sqliteTable($$$)'

echo ""
echo "=== Biometrics migration for system_health ==="
fd -e sql --path-separator / -p biometrics-migration --exec cat {}

echo ""
echo "=== All references to system_health ==="
rg -n "system_health" --type-add 'ts:*.ts' --type-add 'py:*.py' -t ts -t py -t sql

Repository: sleepypod/core

Length of output: 5942


🏁 Script executed:

# Search for Python module implementations
echo "=== Python modules using ON CONFLICT ==="
rg -n "ON CONFLICT.*component" -t py

echo ""
echo "=== Find all Python files in biometrics module ==="
fd -e py --path "*biometric*" | head -20

echo ""
echo "=== Check ADR file for context ==="
cat -n docs/adr/0012-biometrics-module-system.md | sed -n '150,160p'

Repository: sleepypod/core

Length of output: 996


🏁 Script executed:

# Get full INSERT statements from both Python modules
echo "=== sleep-detector INSERT statement ==="
sed -n '125,130p' modules/sleep-detector/main.py

echo ""
echo "=== piezo-processor INSERT statement ==="
sed -n '98,110p' modules/piezo-processor/main.py

echo ""
echo "=== Check for any UNIQUE index on system_health.component ==="
rg -n "CREATE.*INDEX" -t sql | grep -i "system_health\|component"

Repository: sleepypod/core

Length of output: 1154


Add UNIQUE constraint on system_health.component or refactor module health-write logic.

The system_health table in sleepypod.db (defined in migration 0000_fine_frightful_four.sql) has no UNIQUE constraint on the component column—only an auto-incrementing id primary key. However, both Python modules (sleep-detector and piezo-processor) use ON CONFLICT(component) DO UPDATE SET... when writing health status. SQLite requires a UNIQUE constraint on any column referenced in an ON CONFLICT(...) clause. Without it, these inserts will fail at runtime.

Add a UNIQUE constraint/index on system_health.component in the schema, or refactor the module health-write logic to use SELECT + conditional UPDATE/INSERT.
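If the schema route is not taken, the constraint-free alternative could look like the sketch below: an `UPDATE` followed by a conditional `INSERT`, both inside one transaction. Function and table shapes mirror the snippet under review; the exact signature is illustrative.

```python
import sqlite3
import time

def report_health(conn: sqlite3.Connection, component: str,
                  status: str, message: str) -> None:
    now = int(time.time())
    with conn:  # single transaction: SQLite's writer lock serializes writers,
                # so two processes cannot both take the INSERT branch
        cur = conn.execute(
            "UPDATE system_health SET status=?, message=?, last_checked=? "
            "WHERE component=?",
            (status, message, now, component),
        )
        if cur.rowcount == 0:  # no row for this component yet -> insert
            conn.execute(
                "INSERT INTO system_health (component, status, message, last_checked) "
                "VALUES (?, ?, ?, ?)",
                (component, status, message, now),
            )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE system_health (id INTEGER PRIMARY KEY, component TEXT, "
    "status TEXT, message TEXT, last_checked INTEGER)"
)
report_health(conn, "sleep-detector", "ok", "started")
report_health(conn, "sleep-detector", "ok", "heartbeat")
rows = conn.execute("SELECT COUNT(*) FROM system_health").fetchone()[0]
```

Note this works against the schema exactly as it stands (no `UNIQUE` index), at the cost of two statements per health write.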

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/adr/0012-biometrics-module-system.md` around lines 153 - 157, The
system_health table lacks a UNIQUE constraint on component but the modules
(sleep-detector, piezo-processor) use ON CONFLICT(component) DO UPDATE, which
will fail; fix by updating the schema in migration 0000_fine_frightful_four.sql
to add a UNIQUE constraint or index on system_health.component (or alternatively
change the modules' health-write logic to perform a SELECT for the component and
then conditional INSERT or UPDATE instead of using ON CONFLICT); reference the
system_health table and the ON CONFLICT(component) DO UPDATE usage when making
the change so both writers work without runtime errors.

Comment thread modules/sleep-detector/main.py
Comment on lines +120 to +135
def report_health(status: str, message: str) -> None:
try:
conn = sqlite3.connect(str(SLEEPYPOD_DB), timeout=2.0)
with conn:
conn.execute(
"""INSERT INTO system_health (component, status, message, last_checked)
VALUES ('sleep-detector', ?, ?, ?)
ON CONFLICT(component) DO UPDATE SET
status=excluded.status,
message=excluded.message,
last_checked=excluded.last_checked""",
(status, message, int(time.time())),
)
conn.close()
except Exception as e:
log.warning("Could not write health status: %s", e)

⚠️ Potential issue | 🔴 Critical

ON CONFLICT(component) requires a UNIQUE constraint that doesn't exist.

The system_health table (defined in the main migration) has no UNIQUE constraint on component — only an auto-incrementing id as PK. SQLite's ON CONFLICT(component) requires component to be the target of a UNIQUE index or constraint. This will fail at runtime.

This is the same issue flagged in the ADR review and also affects modules/piezo-processor/main.py report_health.
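The failure mode is easy to reproduce in isolation; SQLite rejects the conflict target at statement-compile time, before any actual conflict occurs (table trimmed to the relevant columns):

```python
import sqlite3

UPSERT = (
    "INSERT INTO system_health (component, status) VALUES ('sleep-detector', ?) "
    "ON CONFLICT(component) DO UPDATE SET status=excluded.status"
)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE system_health (id INTEGER PRIMARY KEY, component TEXT, status TEXT)"
)

# Without a UNIQUE index on component, the statement itself is rejected:
# "ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint"
try:
    conn.execute(UPSERT, ("ok",))
    failed = False
except sqlite3.OperationalError:
    failed = True

# With the index in place, the same statement upserts as intended
conn.execute(
    "CREATE UNIQUE INDEX system_health_component_unique ON system_health (component)"
)
conn.execute(UPSERT, ("ok",))        # inserts the row
conn.execute(UPSERT, ("degraded",))  # conflicts -> updates status in place
status = conn.execute("SELECT status FROM system_health").fetchone()[0]
```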

🧰 Tools
🪛 Ruff (0.15.2)

[warning] 134-134: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/sleep-detector/main.py` around lines 120 - 135, The upsert uses ON
CONFLICT(component) but the system_health table has no UNIQUE constraint on
component, so this will fail; update the report_health function in
modules/sleep-detector/main.py (and mirror the same change in
modules/piezo-processor/main.py) to perform a safe upsert without relying on a
UNIQUE constraint: open the DB connection, run an UPDATE system_health SET
status=?, message=?, last_checked=? WHERE component='sleep-detector', check the
number of rows affected, and if zero run a plain INSERT INTO system_health
(component, status, message, last_checked) VALUES ('sleep-detector', ?, ?, ?);
keep the existing exception handling and close the connection.

Comment thread modules/sleep-detector/main.py
ng added 3 commits February 23, 2026 23:25
Both Python modules use ON CONFLICT(component) DO UPDATE to upsert health
status. SQLite requires a UNIQUE constraint on the conflict target — without
it the INSERT fails at runtime. Add .unique() to the column definition and
regenerate the migration (squashing into the single initial schema).
… and sleep-detector

piezo-processor:
- Cast nperseg to int before passing to scipy.signal.welch — fs is typed
  as float so fs*10 can evaluate to a float, which scipy rejects
- Move report_health("stopped") out of finally so it only runs on clean
  shutdown, not after sys.exit(1) (which would overwrite the real error)

sleep-detector:
- Fix zero-duration absent interval on session start: set _was_present=True
  inside the new-session block and change the return-from-absence check to
  elif so it cannot fire on the same sample that opened the session
- Close RawFileFollower file handle after the read loop exits on shutdown
- Move report_health("stopped") out of finally for the same reason as above
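The `_was_present` fix described above can be illustrated with a stripped-down state machine. Class and event names here are hypothetical simplifications of the real detector; the point is that setting `_was_present=True` in the new-session branch, combined with `elif`, prevents a spurious zero-duration absence event on the sample that opens the session.

```python
class PresenceTracker:
    """Minimal sketch of the session/presence transitions."""

    def __init__(self) -> None:
        self._in_session = False
        self._was_present = False
        self.events: list = []

    def on_sample(self, present: bool) -> None:
        if present and not self._in_session:
            # New session: mark presence immediately so the elif below
            # cannot also fire "returned_from_absence" on this same sample
            self._in_session = True
            self._was_present = True
            self.events.append("session_start")
        elif present and not self._was_present:
            self._was_present = True
            self.events.append("returned_from_absence")
        elif not present and self._was_present:
            self._was_present = False
            self.events.append("went_absent")

tracker = PresenceTracker()
for sample in [False, True, True, False, True]:
    tracker.on_sample(sample)
```

With the original `if` (and no `_was_present=True` in the first branch), the session-opening sample would fall through and record an absence interval of zero duration.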
@ng
Contributor Author

ng commented Feb 24, 2026

@coderabbitai All feedback has been addressed. Four comments are still showing as unresolved but are stale against earlier commits:

  • arrow-parens (): executeReboot was rewritten as async/await — the .catch(error => { chain no longer exists
  • data migration (): file was deleted — all migrations were squashed into a single clean 0000_zippy_mephistopheles.sql
  • system_health UNIQUE constraint (ADR + sleep-detector): .unique() added to the Drizzle schema column, CREATE UNIQUE INDEX system_health_component_unique is present in the migration — ON CONFLICT(component) upserts in both modules will work correctly

Please re-review the latest commits.

@coderabbitai

coderabbitai bot commented Feb 24, 2026

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Check the executeReboot implementation
echo "=== executeReboot implementation ==="
ast-grep --pattern $'executeReboot() {
  $$$
}'

echo -e "\n=== List migration files ==="
fd -e sql . src/db/migrations/

echo -e "\n=== Check system_health schema definition ==="
rg -A 5 "system_health" src/db/schema.ts

echo -e "\n=== Check system_health in migrations ==="
fd -e sql . src/db/migrations/ -x cat {}

Length of output: 4027


🏁 Script executed:

#!/bin/bash

# Search for executeReboot implementation with context
echo "=== executeReboot implementation ==="
rg -A 15 "executeReboot\(" src/scheduler/jobManager.ts

Length of output: 1180


@ng, I've verified all three fixes:

  1. executeReboot async/await: Confirmed—executeReboot() is now async and uses await import('child_process') with a Promise constructor. No .catch(error => { chains remain.

  2. Migrations squashed: Confirmed—only 0000_zippy_mephistopheles.sql exists in src/db/migrations/. The incremental migrations have been consolidated.

  3. system_health UNIQUE constraint: Confirmed—the schema has component: text('component').notNull().unique() and the migration includes CREATE UNIQUE INDEX system_health_component_unique ON system_health (component). The ON CONFLICT(component) upserts in both Python modules will work correctly.

All feedback has been addressed. Triggering a fresh review of the latest commits now.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@github-actions

🎉 This PR is included in version 1.1.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
