What is the problem this feature would solve?
Currently, when writing end-to-end tests for code that spawns new Bun processes (e.g., using Bun.spawn), the built-in coverage tool does not capture coverage data for code running in those child processes. This makes it difficult to track or ensure comprehensive test coverage in real-world scenarios where applications and CLI utilities run as separate processes.
Additionally, debugging breakpoints set on lines evaluated by child processes don't pause execution (e.g., in the VS Code extension's Debug File feature).
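As a minimal illustration (the file name `cli.ts` and the assertion are hypothetical), an end-to-end test like the following exercises a CLI through Bun.spawn; the lines executed inside the child process currently don't show up in the `bun test --coverage` report:

```ts
import { spawn } from 'bun';
import { expect, test } from 'bun:test';
import { resolve } from 'path';

// Hypothetical CLI entry point. Only code executed in *this* test process is
// reflected in `bun test --coverage` today — lines run inside the spawned
// child process are not.
test('CLI prints its version', async () => {
  const proc = spawn(['bun', resolve(import.meta.dir, 'cli.ts'), '--version'], {
    stdout: 'pipe',
  });
  expect(await proc.exited).toBe(0);
  expect(await new Response(proc.stdout).text()).toMatch(/\d+\.\d+\.\d+/);
});
```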
What is the feature you are proposing to solve the problem?
Add support for debugging, instrumenting, and collecting code coverage across multiple Bun processes during test runs. Ideally, this would involve Bun automatically propagating coverage instrumentation to any child processes spawned within a test file, then aggregating the resulting coverage data.
Key points:
Allow bun test --coverage to detect when a test spawns a Bun subprocess, enabling coverage instrumentation in the child process.
Merge coverage results from the child processes back into the main coverage report automatically.
Provide an environment variable or configuration setting so advanced users can customize or disable coverage propagation for child processes if needed. (In Node.js, the analogous mechanism is NODE_V8_COVERAGE; see the sketch below.)
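For reference, a rough sketch of the Node.js mechanism mentioned above (the script name and output directory are illustrative): when NODE_V8_COVERAGE is set in a child's environment, that Node process writes raw V8 coverage JSON into the given directory on exit, and tools such as c8 can turn those files into a report. The proposal is for Bun to offer something analogous for spawned Bun children.

```ts
import { spawn } from 'bun';

// Spawn a Node.js child with NODE_V8_COVERAGE set; on exit, the child writes
// raw V8 coverage JSON files into coverage/v8 for later reporting.
const proc = spawn(['node', 'script.js'], {
  env: { ...process.env, NODE_V8_COVERAGE: 'coverage/v8' },
});
await proc.exited;
```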
What alternatives have you considered?
Refactoring to a single process: Restructuring code so that test logic and CLI code run in the same Bun process can solve coverage gaps, but it isn’t always practical. CLI programs often need genuine process isolation to properly simulate various environments.
Using third-party coverage tools: Tools like NYC or c8 can instrument code in a child process, but that requires extra setup, merges, and tooling outside Bun’s built-in test/coverage system. This increases complexity and maintenance burden.
Manual instrumentation or custom scripts: Developers can manually track coverage in each subprocess and merge the results, but this is complicated and not user-friendly. Automatic support from Bun would be far more convenient.
bitjson changed the title from "Provide Coverage Instrumentation for Child Processes (Bun.spawn)" to "Debugging and Coverage Instrumentation for Child Processes (Bun.spawn)" on Mar 3, 2025.
This doesn't allow for debugging inside the child processes, but here's my current workaround for collecting test coverage while testing a CLI with various .env variables:
```ts
import { spawn, which } from 'bun';
import { describe, expect, it } from 'bun:test';
import { resolve } from 'path';

let instance = 0;
const runAgent = (env: Record<string, string | undefined>) =>
  spawn(
    [
      which('bun') as string,
      `--env-file=${resolve(import.meta.dir, '.env')}`,
      'test',
      // Only pass --coverage to the child when the outer run requested it:
      ...(process.env.npm_lifecycle_script?.includes('--coverage')
        ? ['--coverage']
        : []),
      '--coverage-reporter=lcov',
      // Give each spawned process its own coverage directory:
      `--coverage-dir=coverage/spawn-${instance++}`,
      resolve(import.meta.dir, 'index.ts'),
    ],
    { env, stderr: 'pipe', stdout: 'pipe' },
  );

// Test the CLI in various configurations, e.g.:
describe('CHAINGRAPH_GENESIS_BLOCKS', () => {
  it('Crashes on missing', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: '' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The encoded block for network "" could not be decoded. Invalid segment: ""',
    );
  });
  it('Crashes on invalid network magic', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: 'e3e1f3e' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The network magic "e3e1f3e" should be 8 hex characters.',
    );
  });
  it('Crashes on invalid encoded block', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: 'e3e1f3e8:01' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The encoded block for network "e3e1f3e8" could not be decoded. Invalid segment: "e3e1f3e8:01"',
    );
  });
});
```
I'm using this very fast lcov-merge:
```sh
cargo install lcov-util
```
And the package.json script:
"scripts": {
"cov": " bun test --coverage --coverage-dir=coverage/top-level --coverage-reporter=lcov && lcov-merge coverage/*/lcov.info > coverage/lcov.info"
When I run `bun run cov`, each spawned process writes its lcov.info to coverage/spawn-N, the top-level test process writes to coverage/top-level, and everything is merged into coverage/lcov.info.
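As an optional follow-up (not part of the workaround itself), the merged coverage/lcov.info can be consumed in CI, for example to enforce a minimum overall line-coverage threshold by summing the LCOV LF (lines found) and LH (lines hit) records. A rough sketch; the 90% threshold is arbitrary:

```ts
// Sum LF (lines found) and LH (lines hit) records in the merged LCOV file
// and fail the process if overall line coverage falls below the threshold.
const lcov = await Bun.file('coverage/lcov.info').text();
const sum = (prefix: string) =>
  [...lcov.matchAll(new RegExp(`^${prefix}:(\\d+)$`, 'gm'))].reduce(
    (total, match) => total + Number(match[1]),
    0,
  );
const found = sum('LF');
const hit = sum('LH');
const percent = found === 0 ? 100 : (hit / found) * 100;
console.log(`Line coverage: ${percent.toFixed(2)}% (${hit}/${found})`);
if (percent < 90) process.exit(1);
```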