
Debugging and Coverage Instrumentation for Child Processes (Bun.spawn) #17867

Open

bitjson opened this issue Mar 3, 2025 · 1 comment

Labels: bun:spawn, bun:test, enhancement


bitjson commented Mar 3, 2025

What is the problem this feature would solve?

Currently, when writing end-to-end tests for code that spawns new Bun processes (e.g., using Bun.spawn), the built-in coverage tool does not capture coverage data for code running in those child processes. This makes it difficult to track or ensure comprehensive test coverage in real-world scenarios where applications and CLI utilities run as separate processes.

Additionally, breakpoints set on lines executed by child processes are never hit (e.g., when using the VSCode extension's Debug File feature).

What is the feature you are proposing to solve the problem?

Add support for debugging, instrumenting, and collecting code coverage across multiple Bun processes during test runs. Ideally, this would involve Bun automatically propagating coverage instrumentation to any child processes spawned within a test file, then aggregating the resulting coverage data.

Key points:

  • Allow bun test --coverage to detect when a test spawns a Bun subprocess, enabling coverage instrumentation in the child process.
  • Merge coverage results from the child processes back into the main coverage report automatically.
  • Provide an environment variable or configuration setting so advanced users can customize or disable coverage propagation for child processes if needed. (In Node.js, this role is played by NODE_V8_COVERAGE; see the sketch below.)
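
For comparison, here's a minimal sketch of the Node.js mechanism referenced above: any Node process started with NODE_V8_COVERAGE set writes V8 coverage JSON into that directory on exit, and because the setting travels through the environment, spawned children are covered too. (Everything below is standard Node API; cli.js stands in for any script.)

import { spawnSync } from 'node:child_process';

// The child (and any grandchildren that inherit its environment) writes
// V8 coverage JSON into coverage/tmp when it exits.
const result = spawnSync(process.execPath, ['cli.js'], {
  env: { ...process.env, NODE_V8_COVERAGE: 'coverage/tmp' },
  encoding: 'utf8',
});
console.log(result.status, result.stderr);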

What alternatives have you considered?

  1. Refactoring to a single process: Restructuring code so that test logic and CLI code run in the same Bun process can close the coverage gap (a sketch follows this list), but it isn’t always practical. CLI programs often need genuine process isolation to properly simulate various environments.

  2. Using third-party coverage tools: Tools like NYC or c8 can instrument code in a child process, but that requires extra setup, merges, and tooling outside Bun’s built-in test/coverage system. This increases complexity and maintenance burden.

  3. Manual instrumentation or custom scripts: Developers can manually track coverage in each subprocess and merge the results, but this is complicated and not user-friendly. Automatic support from Bun would be far more convenient.
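
To make alternative 1 concrete, here's a minimal sketch, assuming the CLI exposes a main() entry point (hypothetical; not shown in this issue):

import { describe, expect, it } from 'bun:test';
import { main } from './index'; // hypothetical entry-point export

describe('CHAINGRAPH_GENESIS_BLOCKS (in-process)', () => {
  it('Crashes on missing', async () => {
    // Mutating process.env leaks state between tests in the same process;
    // this is one reason genuine process isolation is often required.
    process.env.CHAINGRAPH_GENESIS_BLOCKS = '';
    await expect(main()).rejects.toThrow('Improperly formatted');
  });
});

Because the code now runs inside the test process, bun test --coverage instruments it today; the cost is losing exit codes, signal handling, and environment isolation.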

bitjson added the enhancement label Mar 3, 2025
bitjson changed the title from "Provide Coverage Instrumentation for Child Processes (Bun.spawn)" to "Debugging and Coverage Instrumentation for Child Processes (Bun.spawn)" Mar 3, 2025

bitjson commented Mar 3, 2025

This doesn't allow for debugging inside the child processes, but here's my current workaround for collecting test coverage while testing a CLI under various environment variable configurations:

import { spawn, which } from 'bun';
import { describe, expect, it } from 'bun:test';
import { resolve } from 'path';

// Each spawned child writes its lcov report to a unique directory
// (coverage/spawn-0, coverage/spawn-1, ...) so reports never collide.
let instance = 0;
const runAgent = (env: Record<string, string | undefined>) =>
  spawn(
    [
      which('bun') as string,
      `--env-file=${resolve(import.meta.dir, '.env')}`,
      'test',
      // Pass --coverage to the child only when the top-level run was
      // itself started with --coverage (detected via the npm lifecycle script).
      ...(process.env.npm_lifecycle_script?.includes('--coverage')
        ? ['--coverage']
        : []),
      '--coverage-reporter=lcov',
      `--coverage-dir=coverage/spawn-${instance++}`,
      resolve(import.meta.dir, 'index.ts'),
    ],
    {
      env,
      stderr: 'pipe',
      stdout: 'pipe',
    },
  );

// Test the CLI in various configurations, e.g.:

describe('CHAINGRAPH_GENESIS_BLOCKS', () => {
  it('Crashes on missing', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: '' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The encoded block for network "" could not be decoded. Invalid segment: ""',
    );
  });
  it('Crashes on invalid network magic', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: 'e3e1f3e' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The network magic "e3e1f3e" should be 8 hex characters.',
    );
  });
  it('Crashes on invalid encoded block', async () => {
    const proc = runAgent({ CHAINGRAPH_GENESIS_BLOCKS: 'e3e1f3e8:01' });
    expect(await proc.exited).toBe(1);
    expect(await new Response(proc.stderr).text()).toContain(
      'Improperly formatted \'CHAINGRAPH_GENESIS_BLOCKS\' environment variable. The encoded block for network "e3e1f3e8" could not be decoded. Invalid segment: "e3e1f3e8:01"',
    );
  });
});

To merge the reports, I'm using the very fast lcov-merge binary from the lcov-util crate:

cargo install lcov-util

And the package.json script:

"scripts": {
    "cov": " bun test --coverage --coverage-dir=coverage/top-level --coverage-reporter=lcov && lcov-merge coverage/*/lcov.info > coverage/lcov.info"

When I run bun run cov, each spawned process writes its lcov.info to coverage/spawn-N, the top-level test process writes to coverage/top-level, and all of the reports are merged into coverage/lcov.info.
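
One caveat with this setup: the instance counter restarts at 0 on every run, so stale coverage/spawn-N directories from renamed or removed tests would still match the coverage/*/lcov.info glob. A minimal pre-run cleanup sketch (Bun.Glob is Bun's built-in globber; the directory names match the runAgent helper above):

import { rmSync } from 'node:fs';

// Delete stale per-spawn coverage directories before a run so the
// later lcov-merge only sees data produced by this run.
for (const dir of new Bun.Glob('coverage/spawn-*').scanSync({ onlyFiles: false })) {
  rmSync(dir, { recursive: true, force: true });
}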

RiskyMH added the bun:test and bun:spawn labels Mar 8, 2025