62 changes: 47 additions & 15 deletions README.md
@@ -20,11 +20,53 @@ npm install @projectwallace/css-code-coverage

## Usage

### Prerequisites
```ts
import { calculate_coverage } from '@projectwallace/css-code-coverage'

function parse_html(html) {
return new DOMParser().parseFromString(html, 'text/html')
}

let report = calculate_coverage(coverage_data, parse_html)
```

See [src/index.ts](https://github.com/projectwallace/css-code-coverage/blob/main/src/index.ts) for the data that's returned.
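
For reference, here is a rough sketch of consuming that report. The field names (`coverage_per_stylesheet`, the line counters and `chunks`) come from `src/index.ts` in this PR, but treat the exact shape as illustrative:

```ts
// Minimal sketch of reading the report produced above
for (let stylesheet of report.coverage_per_stylesheet) {
  console.log(`${stylesheet.url}: ${stylesheet.covered_lines}/${stylesheet.total_lines} lines covered`)

  // `chunks` groups consecutive lines that share the same coverage state,
  // which makes it easy to render covered/uncovered blocks
  for (let chunk of stylesheet.chunks) {
    console.log(chunk.is_covered ? 'covered' : 'uncovered', `lines ${chunk.start_line}-${chunk.end_line}`)
  }
}
```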

## Collecting CSS Coverage

There are two principal ways of collecting CSS Coverage data:

### Browser devtools

In Edge, Chrome, or any other Chromium-based browser you can collect coverage manually in the browser's DevTools and export the data to a JSON file. Note that this JSON contains both the JS coverage and the CSS coverage. Learn how it works:

- Collect coverage in Microsoft Edge: https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/coverage/
- Collect coverage in Google Chrome: https://developer.chrome.com/docs/devtools/coverage/

Additionally, DevTools Tips writes about it in their [explainer](https://devtoolstips.org/tips/en/detect-unused-code/).
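
If you go this route, one possible way to feed the exported file into this library is sketched below. The filename `coverage.json` and the `.css` URL check are assumptions for the example; since the export mixes JS and CSS entries, you'll want to keep only the CSS ones:

```ts
// Sketch: read a DevTools coverage export in Node and keep only the CSS entries.
// The `.css` extension check is a heuristic — adjust it to match your own URLs.
import { readFile } from 'node:fs/promises'
import { calculate_coverage } from '@projectwallace/css-code-coverage'

let entries = JSON.parse(await readFile('coverage.json', 'utf-8'))
let css_entries = entries.filter((entry) => entry.url.endsWith('.css'))

let report = calculate_coverage(css_entries)
```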

### Coverage API

Both Puppeteer and Playwright provide an API to collect the coverage data programmatically, so you can pass it straight into this library. Here is the gist:

```ts
import { calculate_coverage } from '@projectwallace/css-code-coverage'

// `page` is a Puppeteer or Playwright Page instance

// Start collecting coverage
await page.coverage.startCSSCoverage()
// Load the page, do all sorts of interactions to increase coverage, etc.
await page.goto('http://example.com')
// Stop the coverage and store the result in a variable to pass along
let coverage = await page.coverage.stopCSSCoverage()

// Now we can process it. Note: parse_html is only needed for inline <style>
// coverage; in Node, swap DOMParser for a DOM library like linkedom (see below)
function parse_html(html) {
  return new DOMParser().parseFromString(html, 'text/html')
}

let report = calculate_coverage(coverage, parse_html)
```


Either way you end up with one or more JSON files that contain coverage data.
@@ -50,27 +92,17 @@ for (let file of files) {
}
```

### Optional: coverage from `<style>` blocks

Coverage generators also create coverage ranges for `<style>` blocks in HTML. If this applies to your code, provide an HTML parser that we can use to 'scrape' the HTML whenever the browser gives us more than plain CSS content. Depending on where you run this analysis you can use:

1. Browser:
```ts
function parse_html(html) {
return new DOMParser().parseFromString(html, 'text/html')
}
```
1. Node (using [linkedom](https://github.com/WebReflection/linkedom) in this example, but other parsers could work, too):

```ts
// $ npm install linkedom
// Minimal sketch — linkedom's parseHTML returns a window-like object whose
// `document` we hand back; any parser that returns a DOM Document should work
import { parseHTML } from 'linkedom'

function parse_html(html) {
  return parseHTML(html).document
}
```
58 changes: 58 additions & 0 deletions src/index.test.ts
@@ -195,6 +195,64 @@ test.describe('from coverage data downloaded directly from the browser as JSON',
]),
)
})

test('calculates chunks', () => {
let result = calculate_coverage(coverage, html_parser)
expect(result.coverage_per_stylesheet.at(0)?.chunks).toEqual([
{ start_line: 1, is_covered: true, end_line: 4, total_lines: 4 },
{ start_line: 4, is_covered: false, end_line: 8, total_lines: 5 },
{ start_line: 8, is_covered: true, end_line: 10, total_lines: 3 },
{ start_line: 10, is_covered: false, end_line: 11, total_lines: 2 },
{ start_line: 11, is_covered: true, end_line: 14, total_lines: 4 },
])
})

test('calculates chunks for fully covered file', () => {
let result = calculate_coverage(
[
{
url: 'https://example.com',
ranges: [
{
start: 0,
end: 19,
},
],
text: 'h1 { color: blue; }',
},
],
html_parser,
)
expect(result.coverage_per_stylesheet.at(0)?.chunks).toEqual([
{
start_line: 1,
is_covered: true,
end_line: 3,
total_lines: 3,
},
])
})

test('calculates chunks for fully uncovered file', () => {
let result = calculate_coverage(
[
{
url: 'https://example.com',
ranges: [],
text: 'h1 { color: blue; }',
},
],
html_parser,
)
expect(result.coverage_per_stylesheet.at(0)?.chunks).toEqual([
{
start_line: 1,
is_covered: false,
end_line: 3,
total_lines: 3,
},
])
})
})

test('handles empty input', () => {
38 changes: 37 additions & 1 deletion src/index.ts
@@ -20,6 +20,12 @@ export type StylesheetCoverage = CoverageData & {
text: string
ranges: Range[]
line_coverage: Uint8Array
chunks: {
is_covered: boolean
start_line: number
end_line: number
total_lines: number
}[]
}

export type CoverageResult = CoverageData & {
@@ -117,6 +123,36 @@ export function calculate_coverage(coverage: Coverage[], parse_html?: Parser): C
offset = next_offset
}

// Create "chunks" of covered/uncovered lines for easier rendering later on
let chunks = [
{
start_line: 1,
is_covered: line_coverage[0] === 1,
end_line: 0,
total_lines: 0,
},
]

for (let index = 0; index < line_coverage.length; index++) {
let is_covered = line_coverage[index]
// The coverage state changed: close the current chunk on this line and start a new one
// (adjacent chunks share their boundary line)
if (index > 0 && is_covered !== line_coverage[index - 1]) {
let last_chunk = chunks.at(-1)!
last_chunk.end_line = index
last_chunk.total_lines = index - last_chunk.start_line + 1

chunks.push({
start_line: index,
is_covered: is_covered === 1,
end_line: index,
total_lines: 0,
})
}
}

// Close the last chunk at the end of the stylesheet
let last_chunk = chunks.at(-1)!
last_chunk.total_lines = line_coverage.length - last_chunk.start_line + 1
last_chunk.end_line = line_coverage.length

return {
url,
text,
@@ -130,7 +166,7 @@
total_lines: total_file_lines,
covered_lines: file_lines_covered,
uncovered_lines: total_file_lines - file_lines_covered,
chunks,
}
})
