
Commit 0393d43

feat: make HTML parser optional argument (#3)
1 parent e939f0d commit 0393d43

File tree

4 files changed: +68 -48 lines changed


README.md

Lines changed: 50 additions & 46 deletions
````diff
@@ -19,52 +19,33 @@ npm install @projectwallace/css-code-coverage
 
 ### Prerequisites
 
-1. You have collected browser coverage data of your CSS. There are several ways to do this:
-
-   1. in the browser devtools in [Edge](https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/coverage/)/[Chrome](https://developer.chrome.com/docs/devtools/coverage/)/chromium
-   1. Via the `coverage.startCSSCoverage()` API that headless browsers like [Playwright](https://playwright.dev/docs/api/class-coverage#coverage-start-css-coverage) or [Puppeteer](https://pptr.dev/api/puppeteer.coverage.startcsscoverage/) provide.
-
-   Either way you end up with one or more JSON files that contain coverage data.
-
-   ```ts
-   // Read a single JSON or a folder full of JSON files with coverage data
-   // Coverage data looks like this:
-   // {
-   //   url: 'https://www.projectwallace.com/style.css',
-   //   text: 'a { color: blue; text-decoration: underline; }', etc.
-   //   ranges: [
-   //     { start: 0, end: 46 }
-   //   ]
-   // }
-   import { parse_coverage } from '@projectwallace/css-code-coverage'
-
-   let files = await fs.glob('./css-coverage/**/*.json')
-   let coverage_data = []
-
-   for (let file of files) {
-     let json_content = await fs.readFile(file, 'urf-8')
-     coverage_data.push(...parse_coverage(json_content))
-   }
-   ```
-
-1. You provide a HTML parser that we use to 'scrape' the HTML in case the browser gives us not just plain CSS contents. Depending on where you run this analysis you can use:
-
-   1. Browser:
-      ```ts
-      function parse_html(html) {
-        return new DOMParser().parseFromString(html, 'text/html')
-      }
-      ```
-   1. Node (using [linkedom](https://github.com/WebReflection/linkedom) in this example):
-
-      ```ts
-      // $ npm install linkedom
-      import { DOMParser } from 'linkedom'
-
-      function parse_html(html: string) {
-        return new DOMParser().parseFromString(html, 'text/html')
-      }
-      ```
+You have collected browser coverage data of your CSS. There are several ways to do this:
+
+1. in the browser devtools in [Edge](https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/coverage/)/[Chrome](https://developer.chrome.com/docs/devtools/coverage/)/chromium
+1. Via the `coverage.startCSSCoverage()` API that headless browsers like [Playwright](https://playwright.dev/docs/api/class-coverage#coverage-start-css-coverage) or [Puppeteer](https://pptr.dev/api/puppeteer.coverage.startcsscoverage/) provide.
+
+Either way you end up with one or more JSON files that contain coverage data.
+
+```ts
+// Read a single JSON or a folder full of JSON files with coverage data
+// Coverage data looks like this:
+// {
+//   url: 'https://www.projectwallace.com/style.css',
+//   text: 'a { color: blue; text-decoration: underline; }', etc.
+//   ranges: [
+//     { start: 0, end: 46 }
+//   ]
+// }
+import { parse_coverage } from '@projectwallace/css-code-coverage'
+
+let files = await fs.glob('./css-coverage/**/*.json')
+let coverage_data = []
+
+for (let file of files) {
+  let json_content = await fs.readFile(file, 'urf-8')
+  coverage_data.push(...parse_coverage(json_content))
+}
+```
 
 ### Bringing it together
 
````

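The README section above starts from coverage JSON that has already been collected. As a rough sketch of that prerequisite step (not part of this commit; the target URL and output path are placeholder assumptions), the Playwright route could look like this:

```ts
// Sketch: collect CSS coverage with Playwright (Chromium only) and save it
// as JSON so it can later be read back with parse_coverage()
import { chromium } from 'playwright'
import { writeFile } from 'node:fs/promises'

const browser = await chromium.launch()
const page = await browser.newPage()

await page.coverage.startCSSCoverage()
await page.goto('https://www.projectwallace.com/') // placeholder URL
const coverage = await page.coverage.stopCSSCoverage()

// Each entry has the { url, text, ranges } shape shown in the README above
await writeFile('./css-coverage/homepage.json', JSON.stringify(coverage))
await browser.close()
```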
````diff
@@ -73,3 +54,26 @@ import { calculate_coverage } from '@projectwallace/css-code-coverage'
 
 let report = calculcate_coverage(coverage_data, parse_html)
 ```
+
+See [src/index.ts](https://github.com/projectwallace/css-code-coverage/blob/main/src/index.ts) for the data that's returned.
+
+### Optional: coverage from `<style>` blocks
+
+Covergae generators also create coverage ranges for `<style>` blocks in HTML. If this applies to your code you should provide a HTML parser that we use to 'scrape' the HTML in case the browser gives us not just plain CSS contents. Depending on where you run this analysis you can use:
+
+1. Browser:
+   ```ts
+   function parse_html(html) {
+     return new DOMParser().parseFromString(html, 'text/html')
+   }
+   ```
+1. Node (using [linkedom](https://github.com/WebReflection/linkedom) in this example):
+
+   ```ts
+   // $ npm install linkedom
+   import { DOMParser } from 'linkedom'
+
+   function parse_html(html: string) {
+     return new DOMParser().parseFromString(html, 'text/html')
+   }
+   ```
````

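The point of this commit is that the parser argument above becomes optional. A minimal usage sketch, assuming the browser's built-in `DOMParser` and coverage data shaped like the README example (neither snippet below is part of the commit itself):

```ts
import { calculate_coverage } from '@projectwallace/css-code-coverage'

// Coverage data as produced by the README snippet above
let coverage_data = [
	{
		url: 'https://www.projectwallace.com/style.css',
		text: 'a { color: blue; text-decoration: underline; }',
		ranges: [{ start: 0, end: 46 }],
	},
]

// Plain CSS coverage: the HTML parser argument can now be omitted entirely
let report = calculate_coverage(coverage_data)

// Coverage that also contains <style> blocks: pass a parser so the inline CSS can be scraped
let parse_html = (html: string) => new DOMParser().parseFromString(html, 'text/html')
let report_with_html = calculate_coverage(coverage_data, parse_html)
```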
src/filter-entries.test.ts

Lines changed: 11 additions & 0 deletions
````diff
@@ -56,3 +56,14 @@ test('keeps extension-less URL with CSS text (running coverage in vite dev mode)
 	]
 	expect(filter_coverage(entries, html_parser)).toEqual(entries)
 })
+
+test('skips extension-less URL with HTML text when no parser is provided', () => {
+	let entries = [
+		{
+			url: 'http://example.com',
+			text: `<html><style>a{color:red;}</style></html>`,
+			ranges: [{ start: 13, end: 26 }],
+		},
+	]
+	expect(filter_coverage(entries)).toEqual([])
+})
````

src/filter-entries.ts

Lines changed: 6 additions & 1 deletion
````diff
@@ -7,7 +7,7 @@ function is_html(text: string): boolean {
 	return /<\/?(html|body|head|div|span|script|style)/i.test(text)
 }
 
-export function filter_coverage(coverage: Coverage[], parse_html: Parser): Coverage[] {
+export function filter_coverage(coverage: Coverage[], parse_html?: Parser): Coverage[] {
 	let result = []
 
 	for (let entry of coverage) {
@@ -22,6 +22,11 @@ export function filter_coverage(coverage: Coverage[], parse_html: Parser): Cover
 		}
 
 		if (is_html(entry.text)) {
+			if (!parse_html) {
+				// No parser provided, cannot extract CSS from HTML, silently skip this entry
+				continue
+			}
+
 			let { css, ranges } = remap_html(parse_html, entry.text, entry.ranges)
 			result.push({
 				url: entry.url,
````

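A minimal sketch of the new guard in action; the relative import path and the linkedom-based parser are assumptions taken from the test file and the README, not part of this commit:

```ts
import { DOMParser } from 'linkedom'
import { filter_coverage } from './filter-entries'

let entries = [
	{
		url: 'http://example.com',
		text: '<html><style>a{color:red;}</style></html>',
		ranges: [{ start: 13, end: 26 }],
	},
]

// Without a parser the HTML entry cannot be scraped, so it is silently dropped
filter_coverage(entries) // -> []

// With a parser the inline <style> CSS is extracted and remapped as before
let parse_html = (html: string) => new DOMParser().parseFromString(html, 'text/html')
filter_coverage(entries, parse_html) // -> one entry for the extracted CSS
```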
src/index.ts

Lines changed: 1 addition & 1 deletion
````diff
@@ -44,7 +44,7 @@ function ratio(fraction: number, total: number) {
  * 4. Calculate used/unused CSS bytes (fastest path, no inspection of the actual CSS needed)
  * 5. Calculate line-coverage, byte-coverage per stylesheet
  */
-export function calculate_coverage(coverage: Coverage[], parse_html: Parser): CoverageResult {
+export function calculate_coverage(coverage: Coverage[], parse_html?: Parser): CoverageResult {
 	let total_files_found = coverage.length
 
 	if (!is_valid_coverage(coverage)) {
````

0 commit comments
