feat: VmsanContext composition root with hooks, plugins, and logger (#23)
Conversation
📝 Walkthrough

Adds a Vmsan runtime context and factory (`createVmsan`), a plugin/hook system, typed logger utilities, an in-memory state store, and refactors `VMService` and CLI commands to use the new context. Also includes minor docs/style and `package.json` script/dependency edits.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (commands)
    participant Factory as createVmsan()
    participant VM as VMService
    participant Store as VmStateStore
    participant Hooks as VmsanHooks
    participant Logger as VmsanLogger
    CLI->>Factory: await createVmsan(options)
    Factory->>Store: instantiate or resolve store
    Factory->>Logger: create or resolve logger
    Factory->>Hooks: create hookable instance
    Factory-->>CLI: return VMService (ctx wired)
    CLI->>VM: VMService.create(CreateVmOptions)
    VM->>Hooks: emit "vm:beforeCreate"
    VM->>Store: allocateNetworkSlot / save state
    VM->>Logger: info/debug about creation steps
    VM->>Hooks: emit "network:afterSetup"
    VM->>Store: update state (running/started)
    VM->>Hooks: emit "vm:afterCreate"
    VM-->>CLI: CreateVmResult (vmId, pid, state)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🧹 Nitpick comments (1)
src/services/vm.ts (1)
721-727: Tighten policy type at the service boundary

Line 723 accepts `policy: string`; this weakens compile-time validation for policy values.

✅ Small type-safety improvement

```diff
 async updateNetworkPolicy(
   vmId: string,
-  policy: string,
+  policy: NetworkPolicy,
   domains: string[],
   allowedCidrs: string[],
   deniedCidrs: string[],
 ): Promise<UpdatePolicyResult> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/vm.ts` around lines 721 - 727, The updateNetworkPolicy signature currently accepts policy: string which weakens type safety; change it to a stricter type (e.g., a Policy enum or union type such as Policy or VMPolicy) and update the function signature of updateNetworkPolicy(vmId: string, policy: Policy, ...). Import or define the policy type next to existing VM types (or reuse an existing Policy/VMPolicy type) and update any callers to pass that typed value so TypeScript enforces allowed policy values at the service boundary; keep the return type UpdatePolicyResult unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/list.ts`:
- Around line 27-31: Wrap the createVmsan() call inside async run() with a
try/catch so initialization failures are handled locally: call createVmsan() in
a try block and in the catch use the command logger (cmdLog) to record a
descriptive error including the caught error object, then exit the run early
(return or propagate a controlled error) instead of letting the rejection bypass
CLI error/reporting; locate this change around the async run function where
createVmsan() and vmsan.list() are used.
In `@src/services/vm.ts`:
- Around line 712-714: The catch blocks that currently return { vmId, success:
false, error: err instanceof VmsanError ? err : undefined } (around the vm
stop/remove code) must also emit the lifecycle error event before returning so
plugins receive failure notifications; locate the catch in the stop/remove
functions (references: vmId, VmsanError) and call the same event emitter used by
create/start (emit "vm:error" with the vmId and error payload) immediately prior
to the return, ensuring the emitted error mirrors what create/start emit for
consistency (also apply the same change to the other catch at the 844–846 area).
- Around line 304-325: The restart path must honor the same create-time
isolation flags: update the restart logic that calls jailer.spawn (where
newPidNs is currently hardcoded true and cgroup/seccomp are hardcoded) to reuse
the same computations as the creation path (reuse seccompFilter from
ensureSeccompFilter or undefined when opts.disableSeccomp, compute cgroup
exactly like the creation block using opts.disableCgroup and
vcpus/memMib/CGROUP_VMM_OVERHEAD_MIB, and set newPidNs to !opts.disablePidNs).
Refactor the duplicated spawn preparation into a small helper or shared
variables (seccompFilter, cgroup, newPidNs) so both the create path and the
restart path call jailer.spawn with identical semantics.
- Around line 461-498: The cleanup code building vmRootCandidates from
state.chrootDir and dirname(dirname(state.apiSocket)) (used in vmRootCandidates,
removeStaleDevTrees, removeStaleDeviceNodes and the unlinkSync(rmSync) calls)
must validate these resolved paths are inside an expected managed root before
performing destructive ops; implement checks to reject empty, root ("/") or
paths that escape the managed root by resolving real paths (realpathSync) and
ensuring each candidate startsWith or isPathInside the configured managedRoot
(or whitelist) and only then call unlinkSync/rmSync, otherwise skip and log a
warning; also validate state.apiSocket/state.chrootDir are non-null and
canonicalize them before use to prevent directory traversal/tampering.
- Line 731: The code currently calls fileLock.run with an async callback, which
releases the lock immediately because run is synchronous; replace that call with
fileLock.runAsync so the lock is held until the async callback finishes—i.e.,
locate the call to fileLock.run(...) and change it to fileLock.runAsync(...)
ensuring the async function passed remains unchanged so the lock semantics are
correct.
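The `fileLock.run` vs `fileLock.runAsync` distinction above can be illustrated with a minimal stand-in. `DemoLock` below is hypothetical (not the repo's actual lock class); it only shows why a synchronous `run()` releases the lock before an async callback finishes:

```typescript
// DemoLock is a hypothetical stand-in to illustrate the semantics; the real
// fileLock API in this repo may differ.
class DemoLock {
  locked = false;

  // Synchronous runner: the finally block executes as soon as fn *returns*.
  // An async fn returns a pending Promise immediately, so the lock is
  // released before the async work completes.
  run<T>(fn: () => T): T {
    this.locked = true;
    try {
      return fn();
    } finally {
      this.locked = false;
    }
  }

  // Async runner: awaiting fn keeps the lock held until the Promise settles.
  async runAsync<T>(fn: () => Promise<T>): Promise<T> {
    this.locked = true;
    try {
      return await fn();
    } finally {
      this.locked = false;
    }
  }
}

const lock = new DemoLock();
const pending = lock.run(async () => {
  await Promise.resolve(); // yield, as any real async work would
  return lock.locked;      // false by the time this runs: lock already gone
});
console.log(lock.locked); // false immediately after run() returns, work unprotected
```

With `runAsync`, the same callback observes `locked === true` for its whole duration, which is the behavior the review asks for.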
---
Nitpick comments:
In `@src/services/vm.ts`:
- Around line 721-727: The updateNetworkPolicy signature currently accepts
policy: string which weakens type safety; change it to a stricter type (e.g., a
Policy enum or union type such as Policy or VMPolicy) and update the function
signature of updateNetworkPolicy(vmId: string, policy: Policy, ...). Import or
define the policy type next to existing VM types (or reuse an existing
Policy/VMPolicy type) and update any callers to pass that typed value so
TypeScript enforces allowed policy values at the service boundary; keep the
return type UpdatePolicyResult unchanged.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- docs/app/app.config.ts
- docs/app/assets/css/main.css
- docs/nuxt.config.ts
- package.json
- src/commands/create.ts
- src/commands/list.ts
- src/commands/network.ts
- src/commands/remove.ts
- src/commands/start.ts
- src/commands/stop.ts
- src/context.ts
- src/hooks.ts
- src/index.ts
- src/lib/vm-state.ts
- src/plugin.ts
- src/services/vm.ts
- src/stores/memory.ts
- src/vmsan-logger.ts
```ts
const seccompFilter = opts.disableSeccomp ? undefined : ensureSeccompFilter(paths);
if (seccompFilter) {
  logger.debug(`Seccomp filter: ${seccompFilter}`);
}

const cgroup: CgroupConfig | undefined = opts.disableCgroup
  ? undefined
  : {
      cpuQuotaUs: vcpus * 100000,
      cpuPeriodUs: 100000,
      memoryBytes: (memMib + CGROUP_VMM_OVERHEAD_MIB) * 1024 * 1024,
    };

jailer.spawn({
  firecrackerBin,
  jailerBin,
  chrootBase: jailerPaths.chrootBase,
  seccompFilter: seccompFilter ?? undefined,
  newPidNs: !opts.disablePidNs,
  cgroup,
  netns: netnsName,
});
```
Restart path ignores create-time isolation flags
Line 304 applies disableSeccomp/disablePidNs/disableCgroup during create, but Line 554 and Line 539 hardcode restart behavior (newPidNs: true, always-on cgroup, no seccomp filter). This can change runtime guarantees and break restart on hosts needing disabled cgroups/namespaces.
♻️ Proposed direction

```diff
 // Persist these flags in VmState at create-time (requires VmState/buildInitialVmState updates)
+disableSeccomp: !!opts.disableSeccomp,
+disablePidNs: !!opts.disablePidNs,
+disableCgroup: !!opts.disableCgroup,

 // Reuse persisted flags on start
-const cgroup: CgroupConfig = {
+const cgroup: CgroupConfig | undefined = state.disableCgroup
+  ? undefined
+  : {
   cpuQuotaUs: state.vcpuCount * 100000,
   cpuPeriodUs: 100000,
   memoryBytes: (state.memSizeMib + CGROUP_VMM_OVERHEAD_MIB) * 1024 * 1024,
-};
+  };
+const seccompFilter = state.disableSeccomp ? undefined : ensureSeccompFilter(paths);
 jailer.spawn({
   firecrackerBin,
   jailerBin,
   chrootBase: jailer.paths.chrootBase,
-  newPidNs: true,
+  seccompFilter,
+  newPidNs: !state.disablePidNs,
   cgroup,
   netns: state.network.netnsName,
 });
```

Also applies to: 539-557
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/vm.ts` around lines 304 - 325, The restart path must honor the
same create-time isolation flags: update the restart logic that calls
jailer.spawn (where newPidNs is currently hardcoded true and cgroup/seccomp are
hardcoded) to reuse the same computations as the creation path (reuse
seccompFilter from ensureSeccompFilter or undefined when opts.disableSeccomp,
compute cgroup exactly like the creation block using opts.disableCgroup and
vcpus/memMib/CGROUP_VMM_OVERHEAD_MIB, and set newPidNs to !opts.disablePidNs).
Refactor the duplicated spawn preparation into a small helper or shared
variables (seccompFilter, cgroup, newPidNs) so both the create path and the
restart path call jailer.spawn with identical semantics.
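The shared-helper refactor suggested in the prompt above might look like the following minimal sketch. The helper name `computeIsolationConfig` and the `CGROUP_VMM_OVERHEAD_MIB` value are illustrative assumptions; the shape of the flags and cgroup config mirrors the reviewed snippet:

```typescript
// Hypothetical helper unifying create- and restart-path isolation settings.
// CGROUP_VMM_OVERHEAD_MIB is an assumed value for illustration only.
const CGROUP_VMM_OVERHEAD_MIB = 64;

interface CgroupConfig {
  cpuQuotaUs: number;
  cpuPeriodUs: number;
  memoryBytes: number;
}

interface IsolationFlags {
  disableSeccomp?: boolean;
  disablePidNs?: boolean;
  disableCgroup?: boolean;
}

// A single pure function both the create and restart paths could call,
// so jailer.spawn receives identical semantics in either case.
function computeIsolationConfig(
  flags: IsolationFlags,
  vcpus: number,
  memMib: number,
  resolveSeccompFilter: () => string,
): { seccompFilter?: string; newPidNs: boolean; cgroup?: CgroupConfig } {
  return {
    seccompFilter: flags.disableSeccomp ? undefined : resolveSeccompFilter(),
    newPidNs: !flags.disablePidNs,
    cgroup: flags.disableCgroup
      ? undefined
      : {
          cpuQuotaUs: vcpus * 100000,
          cpuPeriodUs: 100000,
          memoryBytes: (memMib + CGROUP_VMM_OVERHEAD_MIB) * 1024 * 1024,
        },
  };
}
```

Because the function is pure, it is trivial to unit-test that the restart path can no longer diverge from the create path: both call sites spread the same result into `jailer.spawn(...)`.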
```ts
const vmRootCandidates = Array.from(
  new Set([
    join(state.chrootDir, "root"),
    state.chrootDir,
    dirname(dirname(state.apiSocket)),
  ]),
);
for (const rootDir of vmRootCandidates) {
  const staleFirecrackerBin = join(rootDir, "firecracker");
  if (existsSync(staleFirecrackerBin)) {
    unlinkSync(staleFirecrackerBin);
  }
  rmSync(join(rootDir, "firecracker.pid"), { force: true });
}

const socketPath = state.apiSocket;
if (existsSync(socketPath)) {
  unlinkSync(socketPath);
}

const removeStaleDevTrees = (): void => {
  for (const rootDir of vmRootCandidates) {
    const devDir = join(rootDir, "dev");
    if (existsSync(devDir)) {
      rmSync(devDir, { recursive: true, force: true });
    }
  }
};
const removeStaleDeviceNodes = (): void => {
  const staleNodes = ["dev/net/tun", "dev/kvm", "dev/userfaultfd", "dev/urandom"];
  for (const rootDir of vmRootCandidates) {
    for (const rel of staleNodes) {
      const nodePath = join(rootDir, rel);
      if (existsSync(nodePath)) {
        rmSync(nodePath, { recursive: true, force: true });
      }
    }
  }
```
Guard destructive file operations with managed-path validation
Line 461 and Line 823 derive deletion targets from persisted state (state.chrootDir, state.apiSocket) and then call unlinkSync/rmSync. If state is tampered/corrupted, this can target arbitrary host paths.
🛡️ Proposed fix
-import { dirname, join } from "node:path";
+import { dirname, isAbsolute, join, relative, resolve } from "node:path";
export class VMService {
+ private assertManagedPath(targetPath: string): string {
+ const base = resolve(this.paths.jailerBaseDir);
+ const target = resolve(targetPath);
+ const rel = relative(base, target);
+ if (rel.startsWith("..") || isAbsolute(rel)) {
+ throw new VmsanError(`Refusing to operate on unmanaged path: ${targetPath}`);
+ }
+ return target;
+ }
async start(vmId: string): Promise<StartVmResult> {
@@
- const vmRootCandidates = Array.from(
+ const safeChrootDir = this.assertManagedPath(state.chrootDir);
+ const safeSocketPath = this.assertManagedPath(state.apiSocket);
+ const vmRootCandidates = Array.from(
new Set([
- join(state.chrootDir, "root"),
- state.chrootDir,
- dirname(dirname(state.apiSocket)),
+ join(safeChrootDir, "root"),
+ safeChrootDir,
+ dirname(dirname(safeSocketPath)),
]),
);
@@
- const socketPath = state.apiSocket;
+ const socketPath = safeSocketPath;
@@
async remove(vmId: string, opts?: { force?: boolean }): Promise<StopResult> {
@@
if (state.chrootDir) {
- const vmJailerDir = dirname(state.chrootDir);
+ const safeChrootDir = this.assertManagedPath(state.chrootDir);
+ const vmJailerDir = dirname(safeChrootDir);
try {
- rmSync(state.chrootDir, { recursive: true, force: true });
+ rmSync(safeChrootDir, { recursive: true, force: true });
    } catch {

Also applies to: 823-832
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/vm.ts` around lines 461 - 498, The cleanup code building
vmRootCandidates from state.chrootDir and dirname(dirname(state.apiSocket))
(used in vmRootCandidates, removeStaleDevTrees, removeStaleDeviceNodes and the
unlinkSync(rmSync) calls) must validate these resolved paths are inside an
expected managed root before performing destructive ops; implement checks to
reject empty, root ("/") or paths that escape the managed root by resolving real
paths (realpathSync) and ensuring each candidate startsWith or isPathInside the
configured managedRoot (or whitelist) and only then call unlinkSync/rmSync,
otherwise skip and log a warning; also validate state.apiSocket/state.chrootDir
are non-null and canonicalize them before use to prevent directory
traversal/tampering.
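The core containment test described above can also be sketched as a standalone function. The managed-root paths below are illustrative, not vmsan's actual layout; note that `path.relative` avoids the naive `startsWith` prefix bug, which would wrongly accept `/srv/jailer-evil` as being under `/srv/jailer`:

```typescript
import { isAbsolute, relative, resolve } from "node:path";

// Standalone sketch of the managed-root containment check; for symlinked
// state paths, the real implementation should canonicalize with realpathSync
// before this check, as the review suggests.
function isInsideManagedRoot(managedRoot: string, candidate: string): boolean {
  if (!candidate) return false; // reject empty
  const base = resolve(managedRoot);
  const target = resolve(candidate);
  if (target === "/") return false; // never operate on filesystem root
  const rel = relative(base, target);
  // Inside iff the relative path neither climbs out ("..") nor jumps roots.
  return !rel.startsWith("..") && !isAbsolute(rel);
}

console.log(isInsideManagedRoot("/srv/jailer", "/srv/jailer/vm1/root")); // true
console.log(isInsideManagedRoot("/srv/jailer", "/srv/jailer/../etc"));   // false
console.log(isInsideManagedRoot("/srv/jailer", "/srv/jailer-evil/x"));   // false
```

Cleanup code would then skip (and log a warning for) any candidate failing this check before calling `unlinkSync`/`rmSync`.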
♻️ Duplicate comments (2)
src/services/vm.ts (2)
446-464: ⚠️ Potential issue | 🔴 Critical — Guard destructive filesystem cleanup with managed-path validation.

Line 446–464 and Line 807–816 build deletion targets from persisted state and call `unlinkSync`/`rmSync` directly. If `state.chrootDir`/`state.apiSocket` is corrupted or tampered, cleanup can delete arbitrary host paths.

🛡️ Proposed fix

```diff
-import { dirname, join } from "node:path";
+import { dirname, isAbsolute, join, relative, resolve } from "node:path";

 export class VMService {
+  private assertManagedPath(targetPath: string): string {
+    const base = resolve(this.paths.jailerBaseDir);
+    const target = resolve(targetPath);
+    const rel = relative(base, target);
+    if (!targetPath || target === "/" || rel.startsWith("..") || isAbsolute(rel)) {
+      throw new VmsanError(`Refusing to operate on unmanaged path: ${targetPath}`);
+    }
+    return target;
+  }

   async start(vmId: string): Promise<StartVmResult> {
@@
-    const vmRootCandidates = Array.from(
+    const safeChrootDir = this.assertManagedPath(state.chrootDir);
+    const safeSocketPath = this.assertManagedPath(state.apiSocket);
+    const vmRootCandidates = Array.from(
       new Set([
-        join(state.chrootDir, "root"),
-        state.chrootDir,
-        dirname(dirname(state.apiSocket)),
+        join(safeChrootDir, "root"),
+        safeChrootDir,
+        dirname(dirname(safeSocketPath)),
       ]),
     );
@@
-    const socketPath = state.apiSocket;
+    const socketPath = safeSocketPath;

   async remove(vmId: string, opts?: { force?: boolean }): Promise<StopResult> {
@@
-    const vmJailerDir = dirname(state.chrootDir);
+    const safeChrootDir = this.assertManagedPath(state.chrootDir);
+    const vmJailerDir = dirname(safeChrootDir);
     try {
-      rmSync(state.chrootDir, { recursive: true, force: true });
+      rmSync(safeChrootDir, { recursive: true, force: true });
     } catch {}
```

Also applies to: 466-483, 807-816
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/vm.ts` around lines 446 - 464, The cleanup code that unlinks files (symbols: vmRootCandidates, staleFirecrackerBin, socketPath) uses persisted state (state.chrootDir, state.apiSocket) directly and can delete arbitrary host paths if those values are corrupted; before calling unlinkSync/rmSync, resolve each candidate with realpath/resolve and verify it is inside the managed VM root (e.g. ensure realpath(candidate).startsWith(realpath(managedRoot))) and reject/skip any path that fails validation (also handle symlink escapes by using fs.realpathSync). Apply the same validation for socketPath and log/throw and skip deletion when a target is outside the expected chroot/api directory to prevent destructive cleanup.
293-313: ⚠️ Potential issue | 🟠 Major — Restart path still diverges from create-time isolation flags.

Line 293–313 honors `disableSeccomp`/`disablePidNs`/`disableCgroup`, but Line 524–542 restarts with `newPidNs: true`, always-on cgroup, and no seccomp filter. This can change runtime guarantees and cause restart failures on hosts that required those create-time flags.

♻️ Proposed fix

```diff
 // During create: persist effective isolation flags in VM state
 this.store.save(buildInitialVmState({
   ...
+  disableSeccomp: !!opts.disableSeccomp,
+  disablePidNs: !!opts.disablePidNs,
+  disableCgroup: !!opts.disableCgroup,
 }));

 // During start: reuse persisted flags
-const cgroup: CgroupConfig = {
-  cpuQuotaUs: state.vcpuCount * 100000,
-  cpuPeriodUs: 100000,
-  memoryBytes: (state.memSizeMib + CGROUP_VMM_OVERHEAD_MIB) * 1024 * 1024,
-};
+const seccompFilter = state.disableSeccomp ? undefined : ensureSeccompFilter(paths);
+const cgroup: CgroupConfig | undefined = state.disableCgroup
+  ? undefined
+  : {
+      cpuQuotaUs: state.vcpuCount * 100000,
+      cpuPeriodUs: 100000,
+      memoryBytes: (state.memSizeMib + CGROUP_VMM_OVERHEAD_MIB) * 1024 * 1024,
+    };
 jailer.spawn({
   firecrackerBin,
   jailerBin,
   chrootBase: jailer.paths.chrootBase,
-  newPidNs: true,
+  seccompFilter,
+  newPidNs: !state.disablePidNs,
   cgroup,
   netns: state.network.netnsName,
 });
```

Also applies to: 524-542
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/vm.ts` around lines 293 - 313, The restart path diverges from the create-time isolation flags: replicate the same logic that computes seccompFilter (ensureSeccompFilter/opts.disableSeccomp), cgroup (opts.disableCgroup and computed cpu/memory values), and newPidNs (!opts.disablePidNs) for the restart's jailer.spawn call instead of hardcoding newPidNs: true / always-on cgroup / no seccomp; either reuse the seccompFilter and cgroup variables defined around seccompFilter/cgroup or extract that computation into a shared helper (e.g., computeIsolationConfig) and call it from both the create and restart code paths so jailer.spawn uses identical seccompFilter, cgroup, and newPidNs settings on restart.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/services/vm.ts`:
- Around line 446-464: The cleanup code that unlinks files (symbols:
vmRootCandidates, staleFirecrackerBin, socketPath) uses persisted state
(state.chrootDir, state.apiSocket) directly and can delete arbitrary host paths
if those values are corrupted; before calling unlinkSync/rmSync, resolve each
candidate with realpath/resolve and verify it is inside the managed VM root
(e.g. ensure realpath(candidate).startsWith(realpath(managedRoot))) and
reject/skip any path that fails validation (also handle symlink escapes by using
fs.realpathSync). Apply the same validation for socketPath and log/throw and
skip deletion when a target is outside the expected chroot/api directory to
prevent destructive cleanup.
- Around line 293-313: The restart path diverges from the create-time isolation
flags: replicate the same logic that computes seccompFilter
(ensureSeccompFilter/opts.disableSeccomp), cgroup (opts.disableCgroup and
computed cpu/memory values), and newPidNs (!opts.disablePidNs) for the restart's
jailer.spawn call instead of hardcoding newPidNs: true / always-on cgroup / no
seccomp; either reuse the seccompFilter and cgroup variables defined around
seccompFilter/cgroup or extract that computation into a shared helper (e.g.,
computeIsolationConfig) and call it from both the create and restart code paths
so jailer.spawn uses identical seccompFilter, cgroup, and newPidNs settings on
restart.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- package.json
- src/commands/list.ts
- src/services/vm.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/services/vm.ts (2)
427-439: ⚠️ Potential issue | 🔴 Critical — Validate managed paths before destructive cleanup operations.

Line 427–456 and Line 762 use persisted `state.chrootDir`/`state.apiSocket` to drive `rmSync`/cleanup without managed-root validation. If state is tampered, this can target arbitrary host paths.

🛡️ Suggested direction

```diff
+import { isAbsolute, relative, resolve } from "node:path";
+
+private assertManagedPath(targetPath: string): string {
+  const base = resolve(this.paths.jailerBaseDir);
+  const target = resolve(targetPath);
+  const rel = relative(base, target);
+  if (!targetPath || target === "/" || rel.startsWith("..") || isAbsolute(rel)) {
+    throw new VmsanError(`Refusing to operate on unmanaged path: ${targetPath}`);
+  }
+  return target;
+}
```

Use `assertManagedPath(...)` for `state.chrootDir`, `state.apiSocket`, and all derived cleanup candidates before `rmSync`/`cleanupChroot`.

Also applies to: 443-456, 762-763
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/vm.ts` around lines 427 - 439, The cleanup uses persisted state paths (state.chrootDir, state.apiSocket) to build vmRootCandidates and then calls rmSync in removeStaleFirecrackerFiles (and calls cleanupChroot elsewhere) without validating them; call assertManagedPath(...) on state.chrootDir, state.apiSocket and on each derived path in vmRootCandidates before performing any rmSync or cleanupChroot operations to ensure they are within the managed root, and bail/throw if validation fails so no rmSync runs on unvalidated/tainted paths.
495-509: ⚠️ Potential issue | 🟠 Major — Restart path still diverges from create-time isolation settings.

Line 506 hardcodes `newPidNs: true`, Line 495 always builds cgroup, and restart does not apply the seccomp selection used in create. This can change runtime guarantees and break restart on constrained hosts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/vm.ts` around lines 495 - 509, The restart path is overriding create-time isolation: stop hardcoding newPidNs: true and always rebuilding cgroup; instead have spawnAndWait respect the original isolation stored in the VM state (use the result of buildCgroupConfig only when creating and reuse stored cgroup config on restart), and apply the same seccomp profile selection used at create time when calling jailer.spawn; update spawnAndWait to read the VM state's saved flags (e.g., savedNewPidNs, savedCgroupConfig, savedSeccompProfile) and pass those to jailer.spawn (preserve use of buildCgroupConfig for first-boot but reuse saved values for restarts).
🧹 Nitpick comments (1)
src/stores/memory.ts (1)
20-24: Preserve clone isolation in `update()`.

Line 23 assigns `updates` directly, which can retain external nested object references. Clone `updates` before merge to keep in-memory state isolated from caller mutations.

♻️ Suggested change

```diff
 update(id: string, updates: Partial<VmState>): void {
   const state = this.states.get(id);
   if (!state) throw vmStateNotFoundError(id);
-  Object.assign(state, updates);
+  Object.assign(state, structuredClone(updates));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/stores/memory.ts` around lines 20 - 24, The update method in memory store (update(id: string, updates: Partial<VmState>)) currently merges the caller-supplied updates by reference, risking shared nested objects; create a deep clone of the updates before merging to preserve isolation (e.g., use structuredClone(updates) or a project deepClone utility) and then Object.assign(state, clonedUpdates) after verifying the state exists via this.states.get(id) and vmStateNotFoundError(id).
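The aliasing hazard behind this nitpick can be demonstrated in isolation. `DemoVmState` below is a simplified stand-in for the real `VmState` type:

```typescript
// Simplified stand-in for VmState; the real type has many more fields.
type DemoVmState = { id: string; network: { ip: string } };

const states = new Map<string, DemoVmState>();
states.set("vm1", { id: "vm1", network: { ip: "10.0.0.2" } });

// Object.assign copies the *reference* to updates.network, so later caller
// mutations leak into the store:
const updates: Partial<DemoVmState> = { network: { ip: "10.0.0.3" } };
Object.assign(states.get("vm1")!, updates);
updates.network!.ip = "mutated-by-caller";
console.log(states.get("vm1")!.network.ip); // "mutated-by-caller"

// structuredClone severs the link, keeping store state isolated:
states.set("vm2", { id: "vm2", network: { ip: "10.0.0.2" } });
const updates2: Partial<DemoVmState> = { network: { ip: "10.0.0.3" } };
Object.assign(states.get("vm2")!, structuredClone(updates2));
updates2.network!.ip = "mutated-by-caller";
console.log(states.get("vm2")!.network.ip); // "10.0.0.3"
```

`structuredClone` is a global in Node 17+, so no extra dependency is needed for the suggested fix.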
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/start.ts`:
- Around line 28-33: When a command returns result.success === false but
result.error is undefined, the code sets process.exitCode and emits cmdLog but
never routes through handleCommandError; update the failure branch in
startCommand handling (the block checking result.success) to call
handleCommandError with a meaningful Error (or the existing result.error if
present) before setting process.exitCode and calling cmdLog.emit(), so all
failed commands consistently go through handleCommandError for centralized
logging/cleanup.
---
Duplicate comments:
In `@src/services/vm.ts`:
- Around line 427-439: The cleanup uses persisted state paths (state.chrootDir,
state.apiSocket) to build vmRootCandidates and then calls rmSync in
removeStaleFirecrackerFiles (and calls cleanupChroot elsewhere) without
validating them; call assertManagedPath(...) on state.chrootDir, state.apiSocket
and on each derived path in vmRootCandidates before performing any rmSync or
cleanupChroot operations to ensure they are within the managed root, and
bail/throw if validation fails so no rmSync runs on unvalidated/tainted paths.
- Around line 495-509: The restart path is overriding create-time isolation:
stop hardcoding newPidNs: true and always rebuilding cgroup; instead have
spawnAndWait respect the original isolation stored in the VM state (use the
result of buildCgroupConfig only when creating and reuse stored cgroup config on
restart), and apply the same seccomp profile selection used at create time when
calling jailer.spawn; update spawnAndWait to read the VM state's saved flags
(e.g., savedNewPidNs, savedCgroupConfig, savedSeccompProfile) and pass those to
jailer.spawn (preserve use of buildCgroupConfig for first-boot but reuse saved
values for restarts).
---
Nitpick comments:
In `@src/stores/memory.ts`:
- Around line 20-24: The update method in memory store (update(id: string,
updates: Partial<VmState>)) currently merges the caller-supplied updates by
reference, risking shared nested objects; create a deep clone of the updates
before merging to preserve isolation (e.g., use structuredClone(updates) or a
project deepClone utility) and then Object.assign(state, clonedUpdates) after
verifying the state exists via this.states.get(id) and vmStateNotFoundError(id).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- src/commands/start.ts
- src/hooks.ts
- src/index.ts
- src/lib/utils.ts
- src/lib/vm-state.ts
- src/services/vm.ts
- src/stores/memory.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/index.ts
```ts
if (!result.success) {
  if (result.error) throw result.error;
  process.exitCode = 1;
  cmdLog.emit();
  return;
}
```
Avoid silent failure when result.success === false.
If result.error is absent, the command currently exits without routing through handleCommandError, which can hide root cause details.
🔧 Suggested change

```diff
 if (!result.success) {
-  if (result.error) throw result.error;
-  process.exitCode = 1;
-  cmdLog.emit();
-  return;
+  throw result.error ?? new Error(`Failed to start VM: ${vmId}`);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!result.success) {
  throw result.error ?? new Error(`Failed to start VM: ${vmId}`);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/start.ts` around lines 28 - 33, When a command returns
result.success === false but result.error is undefined, the code sets
process.exitCode and emits cmdLog but never routes through handleCommandError;
update the failure branch in startCommand handling (the block checking
result.success) to call handleCommandError with a meaningful Error (or the
existing result.error if present) before setting process.exitCode and calling
cmdLog.emit(), so all failed commands consistently go through handleCommandError
for centralized logging/cleanup.
Summary

- `VmsanContext` composition root (`createVmsan()`) with injectable paths, store, hooks, logger, and plugins
- `hookable`-based hooks (12 events: `vm:beforeCreate`, `vm:afterCreate`, `vm:beforeStart`, `vm:afterStart`, `vm:beforeStop`, `vm:afterStop`, `vm:beforeRemove`, `vm:afterRemove`, `vm:error`, `network:afterSetup`, `network:afterTeardown`, `network:policyChange`)
- `VmsanPlugin` interface with `definePlugin()` helper
- `VmsanLogger` abstraction with `createDefaultLogger()` (consola) and `createSilentLogger()`
- `MemoryVmStateStore` for testing/embedding
- `getActiveTapSlots()` as a public function
- `create()` and `start()` orchestration moved from CLI commands into `VMService`
- Embedding entry point: `const vmsan = await createVmsan()`
- `*.md` excluded from oxfmt in lint/fmt scripts

Test plan

- `bun run typecheck` passes
- `bun run build` passes
- `bun run lint` passes (only pre-existing warnings)
- `sudo vmsan create --vcpus 1 --mem 128` works identically
- `sudo vmsan start/stop/remove` lifecycle works
- `import { createVmsan } from "vmsan"` and call `.create()`

Summary by CodeRabbit

- New Features
- Refactor
- Chores
- Style
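As a rough sketch of the plugin/hook surface named in the Summary: the event names below come from this PR, but the hook and plugin signatures are assumptions for illustration, not the published vmsan API (which builds on `hookable`):

```typescript
// Minimal stand-ins to show the shape of a lifecycle plugin. Only a few of
// the 12 event names are modeled; MiniHooks is a toy, not `hookable` itself.
type VmsanHookName =
  | "vm:beforeCreate"
  | "vm:afterCreate"
  | "vm:error"
  | "network:afterSetup";

type HookFn = (payload: { vmId: string }) => void | Promise<void>;

class MiniHooks {
  private handlers = new Map<VmsanHookName, HookFn[]>();

  hook(name: VmsanHookName, fn: HookFn): void {
    const list = this.handlers.get(name) ?? [];
    list.push(fn);
    this.handlers.set(name, list);
  }

  async callHook(name: VmsanHookName, payload: { vmId: string }): Promise<void> {
    // Handlers run sequentially so a slow plugin cannot be skipped.
    for (const fn of this.handlers.get(name) ?? []) await fn(payload);
  }
}

// A plugin registers handlers at setup time — roughly what a definePlugin()
// helper would wrap with types and an options bag.
function auditPlugin(log: string[]) {
  return (hooks: MiniHooks) => {
    hooks.hook("vm:afterCreate", (p) => { log.push(`created ${p.vmId}`); });
    hooks.hook("vm:error", (p) => { log.push(`error ${p.vmId}`); });
  };
}

const log: string[] = [];
const hooks = new MiniHooks();
auditPlugin(log)(hooks);
hooks.callHook("vm:afterCreate", { vmId: "vm1" }).then(() => {
  console.log(log); // log now contains "created vm1"
});
```

The design point the PR makes is that `VMService` only emits events; observability, policy, and testing concerns live in plugins wired through the `createVmsan()` composition root.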