
fix(subconscious): seed defaults into per-user workspace + fix Intelligence page stale log#462

Merged
Al629176 merged 4 commits into tinyhumansai:main from sanil-23:fix/subconscious-seed-on-startup
Apr 9, 2026
Conversation

@sanil-23 (Contributor) commented Apr 9, 2026

Summary

  • Subconscious default system tasks now seed into the per-user workspace (~/.openhuman/users/<id>/workspace/) instead of the pre-login global workspace (~/.openhuman/workspace/), so they actually show up in the UI after login.
  • Engine bootstrap + heartbeat loop are deferred until active_user.toml exists (either at startup for an existing session or after a fresh login), and torn down on logout so an account switch rebuilds against the new user.
  • Subconscious RPC handlers now use the shared load_config_with_timeout wrapper (matching the 28 other domain schemas.rs files) instead of raw Config::load_or_init, bounding the config load path at 30s.
  • useSubconscious poll loop is guarded against wedging: each of the 4 parallel RPCs is wrapped in a 2.5s client-side timeout, and the in-flight ref is cleared on unmount.

Problem

After logging in on a fresh install, the Intelligence page showed an empty subconscious task list, even though the backend reported heartbeat.enabled = true. Investigation uncovered two independent bugs that combined to produce the symptom.

1. Engine seeded into the wrong workspace. get_or_init_engine() caches a SubconsciousEngine in a OnceLock, constructed lazily with whatever config.workspace_dir was active at the moment of first call. Config::load_or_init() resolves workspace_dir from active_user.toml (see config/schema/load.rs::resolve_runtime_config_dirs), which does not exist until after login. The eager init at core/jsonrpc.rs startup therefore fired against the pre-login global default, and the constructor's seed_default_tasks() inserted the 3 system defaults into ~/.openhuman/workspace/subconscious/subconscious.db. After login, active_user.toml was written and subsequent handler calls re-resolved to ~/.openhuman/users/<id>/workspace/subconscious/subconscious.db — a fresh, empty DB. The cached engine kept ticking against the pre-login global DB while the handlers read from the per-user one, so the handlers correctly returned [] and the UI showed an empty list.

Verified with sqlite3:

~/.openhuman/workspace/subconscious/subconscious.db          -> 3 tasks (global, pre-login)
~/.openhuman/users/<id>/workspace/subconscious/subconscious.db -> 0 tasks (per-user, post-login)
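The failure mode is easiest to see as a sketch. Here is a hypothetical TypeScript analogue of the lazy-singleton pitfall (variable names and the user id are illustrative, not the actual Rust API):

```typescript
// Hypothetical analogue of the Rust OnceLock engine cache: the engine
// captures whatever workspace dir resolves at FIRST call and never updates.
type Engine = { workspaceDir: string };

let activeUserId: string | null = null; // stands in for active_user.toml
let cachedEngine: Engine | null = null; // stands in for the OnceLock

function resolveWorkspaceDir(): string {
  return activeUserId === null
    ? "~/.openhuman/workspace" // pre-login global default
    : `~/.openhuman/users/${activeUserId}/workspace`; // per-user path
}

function getOrInitEngine(): Engine {
  if (cachedEngine === null) {
    // Seeding would happen here, against whatever dir is current right now.
    cachedEngine = { workspaceDir: resolveWorkspaceDir() };
  }
  return cachedEngine;
}

// Eager init at startup happens before login:
const engineDir = getOrInitEngine().workspaceDir; // pinned to the global path
activeUserId = "abc123"; // login writes active_user.toml
const handlerDir = resolveWorkspaceDir(); // handlers re-resolve per call
const stillEngineDir = getOrInitEngine().workspaceDir; // unchanged: stale
```

The engine and the handlers now disagree about which DB is authoritative, which is exactly the split observed above.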

2. Intelligence page froze on a stale snapshot while the backend kept ticking. Even after the seeding was fixed, the activity log stopped updating in the UI while curl showed the backend processing new ticks. Two compounding issues:

  • Backend: every subconscious RPC handler (status, tasks_list, log_list, escalations_list) called raw Config::load_or_init() at the top. That was the only outlier in the entire JSON-RPC surface — 28 other domain schemas.rs files use the shared load_config_with_timeout() wrapper for a reason. load_or_init constructs a fresh SecretStore per call and runs a chain of decrypt_optional_secret calls that may IPC to the OS keychain. Under the 3s poll x 4 parallel RPCs x ~7 decrypts each, the unbounded load path can pile up.
  • Frontend: useSubconscious.refresh() uses fetchingRef as an in-flight guard, only cleared inside the finally block of an await Promise.all(...). With no per-RPC timeout on the client either, a single slow call would leave the ref stuck true, and every subsequent 3s setInterval tick would silently early-return. The poller kept firing but every call was a no-op; the UI froze on whatever snapshot it last successfully fetched.

Solution

Backend — defer engine bootstrap until login (src/openhuman/subconscious/global.rs)

  • New bootstrap_after_login(): idempotent per-process via a BOOTSTRAPPED: AtomicBool. Loads config (now that workspace_dir resolves correctly), builds the engine (which runs seed_default_tasks against the per-user DB), and spawns the heartbeat loop. Tracks the heartbeat JoinHandle in a static slot so it can be aborted cleanly later — the previous bare tokio::spawn detached the handle and made the loop uncancellable.
  • New reset_engine_for_user_switch(): aborts the heartbeat handle, clears the engine Option inside the OnceLock, and resets BOOTSTRAPPED to false. Without this, logout leaves the cached engine pinned to the previous user's workspace_dir and the next login would tick against the wrong DB.
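A minimal sketch of this lifecycle gate, transliterated to TypeScript (the real code uses an AtomicBool and a tokio JoinHandle; the function names mirror the PR, the rest is illustrative):

```typescript
// `bootstrapped` plays the role of the BOOTSTRAPPED AtomicBool;
// `heartbeat` plays the role of the stored JoinHandle.
let bootstrapped = false;
let heartbeat: ReturnType<typeof setInterval> | null = null;

function bootstrapAfterLogin(): boolean {
  if (bootstrapped) return false; // idempotent per process
  bootstrapped = true;
  // Build the engine against the now-resolvable per-user workspace,
  // then start the periodic tick loop and keep its handle.
  heartbeat = setInterval(() => {/* tick */}, 60_000);
  return true;
}

function resetEngineForUserSwitch(): void {
  if (heartbeat !== null) clearInterval(heartbeat); // abort the loop
  heartbeat = null;
  bootstrapped = false; // next login bootstraps for the new user
}

const first = bootstrapAfterLogin();  // did the work
const second = bootstrapAfterLogin(); // no-op
resetEngineForUserSwitch();
const third = bootstrapAfterLogin();  // works again after logout
resetEngineForUserSwitch();           // clean up the demo interval
```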

Backend — startup path (src/core/jsonrpc.rs)

Replaced the unconditional eager init with a conditional one:

  • If config.heartbeat.enabled == false -> same log as before, no change.
  • Else check read_active_user_id(default_root_openhuman_dir()) — if present (user already logged in from a previous session), kick the bootstrap now so the heartbeat starts without waiting for re-authentication.
  • Otherwise log "bootstrap deferred — waiting for login" and exit the block. The login RPC will trigger it later.
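The three-way decision can be captured as a small pure function (hypothetical names; the real check lives inline in run_server_inner):

```typescript
// Sketch of the startup gating: disabled, bootstrap now, or defer to login.
type StartupAction = "disabled" | "bootstrap-now" | "defer-until-login";

function startupHeartbeatAction(
  heartbeatEnabled: boolean,
  activeUserId: string | null, // read_active_user_id result, if any
): StartupAction {
  if (!heartbeatEnabled) return "disabled"; // same log as before
  if (activeUserId !== null) return "bootstrap-now"; // session survived restart
  return "defer-until-login"; // the login RPC triggers bootstrap later
}
```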

Backend — login + logout hooks (src/openhuman/credentials/ops.rs)

  • store_session now calls bootstrap_after_login() after session stored, so a fresh login triggers seeding against the per-user workspace it just created. Bootstrap failures are non-fatal (session already stored, we only warn and push a log entry).
  • clear_session now calls reset_engine_for_user_switch() after clearing active_user.toml, tearing down the engine + heartbeat so a subsequent login rebuilds them against whichever user signs in next.

Backend — config load timeout (src/openhuman/subconscious/schemas.rs)

One-line change to the local load_config() helper: delegate to crate::openhuman::config::load_config_with_timeout() instead of calling Config::load_or_init() directly. Brings the subconscious handlers in line with the 28 other domain schemas.rs files and puts the config load path under the shared 30s bound.

Frontend — poll guard (app/src/hooks/useSubconscious.ts)

  • New withTimeout<T>(promise, ms = 2500) helper that races each of the 4 parallel RPCs in refresh() against a 2.5s timeout (strictly less than the 3s poll interval so slow calls cannot stack across ticks). Resolves null on timeout, matching the existing .catch(() => null) contract so downstream setState logic is unchanged.
  • useEffect cleanup now clears fetchingRef.current = false on unmount so a late-returning request or a React Strict Mode double-mount in dev cannot leave the ref stuck true for the next mount.
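A sketch of a withTimeout helper matching this description (the hook's actual implementation may differ; the explicit timer cleanup here is an assumption, added so the race does not leak a pending timeout):

```typescript
const RPC_TIMEOUT_MS = 2500; // strictly less than the 3s poll interval

// Race a promise against a timer; resolve null on timeout OR rejection,
// matching the existing `.catch(() => null)` contract downstream.
function withTimeout<T>(promise: Promise<T>, ms = RPC_TIMEOUT_MS): Promise<T | null> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<null>((resolve) => {
    timer = setTimeout(() => resolve(null), ms);
  });
  return Promise.race([promise.catch(() => null), timeout]).finally(() => {
    if (timer !== undefined) clearTimeout(timer);
  });
}
```

Because timeouts and rejections both normalize to null, the downstream setState logic never needs to distinguish the failure modes.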

Submission Checklist

  • Unit tests — Not added in this PR. The seed path itself has existing coverage in src/openhuman/subconscious/integration_test.rs (seed_then_query_tasks, engine_construction_seeds_default_tasks, seed_default_tasks_creates_system_tasks) and those still pass. Recommended follow-up coverage: bootstrap_after_login idempotency (second call is a no-op), reset_engine_for_user_switch clears both engine and BOOTSTRAPPED, useSubconscious refresh no longer wedges when one RPC times out.
  • E2E / integration — Not added. A good regression test lives in tests/json_rpc_e2e.rs: start the sidecar with no active_user.toml, call subconscious.tasks_list and expect [], write active_user.toml + call the session-verify RPC, then call subconscious.tasks_list again and expect the 3 seeded tasks in the per-user DB. Flagging for follow-up.
  • N/A — Does not apply; tests would be valuable and are called out as a follow-up above.
  • Doc comments — New helpers in global.rs have module + function-level docs explaining the lifecycle gating (bootstrap_after_login, reset_engine_for_user_switch), the BOOTSTRAPPED invariant, and why the OnceLock<Mutex<Option<JoinHandle<()>>>> shape is necessary.
  • Inline comments — Added at every judgement call: the login-gate check in jsonrpc.rs explains why we bootstrap-on-restart if active_user.toml already exists; the credentials/ops.rs hook explains the non-fatal handling; the schemas.rs helper body explains why raw load_or_init is dangerous and links to the 28-file convention; the useSubconscious.ts timeout wrapper explains the 2.5s < 3s poll invariant and the ref-clear on unmount.

Impact

  • Runtime: desktop only — Rust sidecar + React frontend. No mobile / web / CLI changes.
  • Performance: neutral on the happy path. Slightly more work in store_session (one extra idempotent bootstrap call) and clear_session (one reset call). The 30s backend bound is a ceiling, not a regression — under normal conditions load_config_with_timeout completes in well under 50ms. The 2.5s frontend bound ensures the poll loop is never pinned beyond a single interval regardless of backend latency.
  • Migration: none. No schema changes, no config format changes. The fix is purely in the bootstrap ordering + timeout boundaries.
  • Compatibility: seed_default_tasks() is idempotent by title (store.rs test seed_default_tasks_creates_system_tasks confirms second call returns 0). Users with existing custom tasks are untouched. Users who were previously on the buggy version and had the 3 defaults seeded into their pre-login global workspace will on next launch see them seeded into their per-user workspace instead; the stale global entries are left behind harmlessly.
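The idempotence claim hinges on title-keyed seeding. A hedged sketch of that contract (task titles are illustrative, not the real three defaults):

```typescript
type Task = { title: string; system: boolean };

// Illustrative defaults; the real system tasks live in the Rust store.
const DEFAULT_TASKS: readonly Task[] = [
  { title: "default-task-a", system: true },
  { title: "default-task-b", system: true },
  { title: "default-task-c", system: true },
];

// Insert only tasks whose title is not already present; return insert count.
function seedDefaultTasks(store: Task[]): number {
  const existing = new Set(store.map((t) => t.title));
  let inserted = 0;
  for (const task of DEFAULT_TASKS) {
    if (!existing.has(task.title)) {
      store.push({ ...task });
      inserted++;
    }
  }
  return inserted;
}

const store: Task[] = [{ title: "my custom task", system: false }];
const firstRun = seedDefaultTasks(store);  // inserts all defaults
const secondRun = seedDefaultTasks(store); // matched by title: inserts none
```

A user's custom tasks are never touched, and re-seeding into an already-seeded per-user DB is a no-op.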

Related

  • Issue(s): none filed — surfaced during local debugging while the Intelligence page showed an empty activity log after login on a clean install.
  • Follow-up PR(s)/TODOs:
    • Add the integration test scenarios listed in the checklist above (bootstrap_after_login idempotency, pre-login -> post-login transition E2E).
    • Cache SecretStore + decrypted config in-process so repeated load_or_init calls do not re-do the keychain work every RPC. This is the deeper root cause of the poll pileup; the current PR bounds the symptom but does not eliminate the waste.
    • Revisit the heartbeat loop cadence semantics — interval_minutes: 5 currently behaves as a minimum gap between ticks (sleep = 5 min + tick duration), so observed cycles run 9-12 minutes when tick work is heavy. That is a separate semantic / docs question worth its own thread.

Generated with Claude Code (claude.com/claude-code)

Summary by CodeRabbit

  • New Features

    • Subconscious engine now bootstraps automatically after login and is cleanly reset on account switches.
  • Bug Fixes

    • RPC calls are now race-bounded with unified timeout behavior to avoid stuck fetches.
    • Polling/fetch state is explicitly cleaned up on unmount.
    • Startup now skips or accelerates subconscious bootstrapping based on login state.
  • Tests

    • Added integration test ensuring default task seeding is idempotent.

sanil-23 and others added 3 commits April 9, 2026 18:35
The subconscious engine was only constructed lazily on the first
engine-routed RPC (trigger, tasks_add, status). Because
handle_tasks_list bypasses the engine and reads the store directly,
a fresh install showed an empty Subconscious panel until the user
clicked "Run now", even though SubconsciousEngine::new() seeds the
3 default system tasks on construction.

Separately, HeartbeatEngine::run() — the periodic tick loop — was
never spawned in production code. The only callers of HeartbeatEngine
were tests, so ticks never fired automatically; users had to trigger
each evaluation manually.

Both issues are fixed together in run_server_inner, following the
existing start_if_enabled pattern used by voice, screen_intelligence,
and autocomplete:

1. Call get_or_init_engine() at startup to construct the
   SubconsciousEngine eagerly, which runs seed_default_tasks via
   from_heartbeat_config. Construction is idempotent via OnceLock;
   seeding is idempotent by title match, so repeat startups do not
   duplicate the defaults.

2. Construct HeartbeatEngine with the heartbeat config and
   workspace_dir, then tokio::spawn heartbeat.run() so the periodic
   tick loop runs for the process lifetime. The loop re-acquires
   the shared engine via get_or_init_engine() on each tick.

Guarded by config.heartbeat.enabled so users who disable the
heartbeat get neither startup seeding nor the background loop.

Add engine_construction_seeds_default_tasks integration test that
locks in the invariant: constructing SubconsciousEngine on a fresh
workspace_dir must leave the 3 default system tasks in the store,
with no tick, trigger, or explicit seed call. Also asserts that
reconstructing the engine on the same workspace does not duplicate
the defaults.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Default system tasks seeded at sidecar startup into the pre-login global
workspace (`~/.openhuman/workspace/`) instead of the per-user workspace
(`~/.openhuman/users/<id>/workspace/`) the UI reads from after login.

The engine singleton is built lazily via `get_or_init_engine()` and
cached in a `OnceLock`. `Config::load_or_init` resolves `workspace_dir`
from `active_user.toml` — which does not exist until after login. When
the engine was constructed on startup it therefore seeded into the
global default, then the frozen singleton kept pointing at that path
for the rest of the session while RPC handlers like `tasks_list`
re-loaded config per call and read from the correct per-user path,
silently returning an empty list.

Fix:

- `subconscious/global.rs`: add `bootstrap_after_login()` (idempotent
  via `BOOTSTRAPPED: AtomicBool`) which builds the engine against the
  now-correct per-user workspace and spawns the heartbeat loop. Track
  the heartbeat `JoinHandle` in a static so it can be aborted cleanly.
  Add `reset_engine_for_user_switch()` that aborts the heartbeat,
  clears the engine option, and resets the bootstrap flag.
- `core/jsonrpc.rs`: replace the unconditional eager init on startup
  with a conditional one that only bootstraps if `active_user.toml`
  already exists (so a user logged in from a previous session still
  gets the engine up immediately after restart).
- `credentials/ops.rs`: call `bootstrap_after_login()` at the end of
  `verify_and_store_session` so a fresh login triggers seeding against
  the per-user workspace. Call `reset_engine_for_user_switch()` in
  `clear_session` so logout tears down the engine + heartbeat loop and
  a subsequent login rebuilds them against the new user.

Verified locally: sidecar restart with no `active_user.toml` logs
"bootstrap deferred — waiting for login"; post-login logs "seeded 3
tasks on init" + "heartbeat periodic loop spawned"; and
`subconscious.tasks_list` returns the 3 system defaults from the
per-user DB.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Two related fixes for the Intelligence page freezing on a stale
subconscious activity-log snapshot while ticks kept progressing in the
sidecar.

Root cause (backend): the subconscious RPC handlers were the only
outlier in the entire JSON-RPC surface that called the raw
`Config::load_or_init()` instead of the shared
`load_config_with_timeout()` wrapper that every other domain schemas.rs
uses (cron, webhooks, voice, team, skills, service, referral, doctor,
…). `load_or_init` constructs a fresh `SecretStore` and runs a chain
of `decrypt_optional_secret` calls on every invocation, which may IPC
to the OS keychain — slow, unbounded, no caching. Under the Intelligence
page's 3-second poll (4 parallel RPCs × ~7 keychain round-trips each =
~28 keychain calls every 3s), this pileup was enough to pin the
frontend's `Promise.all` past the poll interval.

Root cause (frontend): `useSubconscious.refresh()` uses `fetchingRef`
as an in-flight guard. The ref is only cleared inside the `finally`
block that runs after `Promise.all` settles. With no per-RPC timeout
on the client side either, a single slow backend call would leave the
ref stuck `true`, and every subsequent 3s `setInterval` tick would
silently early-return at the top of `refresh`. The poller kept firing,
but every call was a no-op — so the UI froze on whatever snapshot it
last successfully fetched, even though the backend was still ticking
through new decisions.

Backend fix (`src/openhuman/subconscious/schemas.rs`):

  - Replace the local `load_config()` helper body to delegate to
    `crate::openhuman::config::load_config_with_timeout()`. Matches the
    28 other domain schemas.rs files and brings subconscious handlers
    under the same 30s bound used everywhere else.

Frontend fix (`app/src/hooks/useSubconscious.ts`):

  - Add a `withTimeout` helper (2.5s per-RPC, strictly less than the
    3s poll interval) that races each of the 4 parallel RPCs against
    a timeout and resolves `null` on timeout — matching the existing
    `.catch(() => null)` contract so downstream setState logic is
    unchanged.
  - Clear `fetchingRef.current = false` in the useEffect cleanup so a
    late-returning request or a React Strict Mode double-mount in dev
    can't leave the ref stuck `true` for the next mount.

Defense in depth: the backend bound prevents a permanent hang and
matches repo conventions, while the frontend bound guarantees the 3s
poll loop can never be pinned beyond one tick regardless of
server-side latency. Verified locally — `cargo check` clean,
`tsc --noEmit` clean, all 18 pre-existing warnings in unrelated modules.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
coderabbitai bot commented Apr 9, 2026

No actionable comments were generated in the recent review. 🎉


📥 Commits

Reviewing files that changed from the base of the PR and between 5545f8f and 629c346.

📒 Files selected for processing (1)
  • src/core/jsonrpc.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/jsonrpc.rs

📝 Walkthrough

Adds post-login bootstrapping and teardown for the SubconsciousEngine, changes subconscious config loading, updates frontend hook RPC calls to use a timeout wrapper and polling cleanup, and adds an integration test verifying idempotent seeding of default tasks.

Changes

  • Frontend Hook Timing (app/src/hooks/useSubconscious.ts): Added RPC_TIMEOUT_MS and withTimeout<T>(); wrapped each concurrent RPC call with the timeout helper (normalizes timeout/rejection to null). Added polling cleanup to reset fetchingRef.current on unmount.
  • Backend Bootstrap & Lifecycle (src/core/jsonrpc.rs, src/openhuman/credentials/ops.rs, src/openhuman/subconscious/global.rs): Introduced bootstrap_after_login() and reset_engine_for_user_switch(); conditional heartbeat bootstrapping at startup based on config and existing session; post-login bootstrap called after session store; teardown aborts the heartbeat task and clears the global engine for user switches.
  • Config Loading (src/openhuman/subconscious/schemas.rs): load_config() now calls crate::openhuman::config::load_config_with_timeout().await instead of Config::load_or_init().await, changing how timeouts/errors are surfaced for subconscious RPC handlers.
  • Integration Testing (src/openhuman/subconscious/integration_test.rs): Added engine_construction_seeds_default_tasks to assert engine initialization seeds exactly 3 system tasks with pending recurrence and that seeding is idempotent across re-initialization.

Sequence Diagram(s)

sequenceDiagram
    participant Frontend as Frontend App
    participant Hook as useSubconscious Hook
    participant RPC as JSON-RPC Server
    participant Engine as SubconsciousEngine
    participant DB as Workspace DB

    Frontend->>Hook: trigger refresh()
    Hook->>RPC: subconsciousTasksList (withTimeout)
    Hook->>RPC: subconsciousEscalationsList (withTimeout)
    Hook->>RPC: subconsciousLogList (withTimeout)
    Hook->>RPC: subconsciousStatus (withTimeout)

    par concurrent RPCs
        RPC->>Engine: request data
        Engine->>DB: query store
        DB-->>Engine: results
        Engine-->>RPC: response
    end

    alt timeout or rejection
        Hook-->>Hook: withTimeout resolves to null
    else success
        RPC-->>Hook: data responses
    end

    Hook->>Frontend: update state (if results present)
sequenceDiagram
    participant User as User
    participant Client as Client/UI
    participant Session as Session Manager
    participant Core as Core Server
    participant SubEngine as SubconsciousEngine
    participant DB as Workspace DB

    User->>Client: login
    Client->>Session: store_session()
    Session->>DB: persist session
    Session->>SubEngine: bootstrap_after_login()

    alt heartbeat.enabled == true
        SubEngine->>SubEngine: load config
        SubEngine->>DB: check/seed tasks
        SubEngine->>SubEngine: spawn heartbeat (store JoinHandle)
    else heartbeat disabled
        SubEngine-->>Session: log skipped
    end

    User->>Client: logout / switch
    Client->>Session: clear_session()
    Session->>SubEngine: reset_engine_for_user_switch()
    SubEngine->>SubEngine: abort heartbeat JoinHandle
    SubEngine->>SubEngine: clear engine & BOOTSTRAPPED

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • PR #268: Introduced the subconscious engine and initial bootstrap wiring; this PR extends that lifecycle with post-login bootstrap and teardown.
  • PR #437: Prior changes to app/src/hooks/useSubconscious.ts that added the hook and polling/refresh logic; this PR modifies that hook’s RPC calls and cleanup.

Poem

🐰 I hopped in the code, heartbeats set to run,
After login I seed tasks—three, not one,
Timeouts watch RPCs so they don't stall,
On logout I tidy, abort, clear all.
A tidy burrow, ready for the next fun.

🚥 Pre-merge checks: 3 passed
  • Description Check: Passed (skipped because CodeRabbit's high-level summary is enabled).
  • Title Check: Passed. The title clearly and concisely summarizes the main changes: seeding defaults into the per-user workspace and fixing the Intelligence page stale-log issue.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, above the 80.00% threshold.



coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
app/src/hooks/useSubconscious.ts (1)

224-229: Consider adding a timeout rejection path for observability.

Currently, timeouts are silently swallowed like any other error. If you want to distinguish between RPC failures and timeouts for debugging, you could log or track timeouts separately. However, given the existing .catch(() => null) contract (per learnings, intentional for resilience), this is fine as-is for the current use case.


Inline comments:
In `@src/core/jsonrpc.rs`:
- Around line 706-744: The formatting in the async startup block around
config.heartbeat.enabled and the chained calls to
crate::openhuman::config::default_root_openhuman_dir().ok().and_then(...) plus
the await for
crate::openhuman::subconscious::global::bootstrap_after_login().await and the
log::info!/log::warn! macro calls is misformatted and failing cargo fmt; run
cargo fmt (or rustfmt) and ensure the chained method calls are line-broken
consistently and the log macros use standard spacing/parentheses so the block
compiles and passes cargo fmt --check, keeping the logic around checking
config.heartbeat.enabled, already_logged_in, default_root_openhuman_dir,
read_active_user_id, and bootstrap_after_login unchanged.


📥 Commits

Reviewing files that changed from the base of the PR and between 3a2e4b1 and 5545f8f.

📒 Files selected for processing (6)
  • app/src/hooks/useSubconscious.ts
  • src/core/jsonrpc.rs
  • src/openhuman/credentials/ops.rs
  • src/openhuman/subconscious/global.rs
  • src/openhuman/subconscious/integration_test.rs
  • src/openhuman/subconscious/schemas.rs

CI ran `cargo fmt --all -- --check` and flagged the conditional
bootstrap block in `run_server_inner` — `let already_logged_in`
should fold onto one line, the `.and_then` closure body should
inline, the `match ... .await` chain should fold, and the short
log!() calls should not break across lines. No behavior change.

Fixes three jobs on PR tinyhumansai#462 that were all failing at the same
`cargo fmt --all -- --check` step (Rust Quality, Rust Tests,
Type Check TypeScript — the last one chains cargo fmt after
its prettier check).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Al629176 merged commit 9764a87 into tinyhumansai:main Apr 9, 2026
7 of 9 checks passed