
fix(pack): inlineCss with autoCssModules cause panic#2814

Merged
fireairforce merged 1 commit into next from library-build-support-autoCssModules on Apr 20, 2026

Conversation

@fireairforce (Member) commented Apr 20, 2026

Summary

Combining styles.inlineCss with autoCssModules causes a panic; the failing case came from a smallfish widget build.


Test Plan

Updated the test cases for both library and client builds.
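For context, a hedged sketch of the configuration shape that triggered the panic. Only the option names inlineCss and autoCssModules come from this PR; the surrounding structure is an assumption, not the verified config schema:

```typescript
// Hypothetical build config reproducing the crash this PR fixes.
// Only `styles.inlineCss` and `autoCssModules` are named in the PR;
// everything else here is an illustrative assumption.
export default {
  styles: {
    inlineCss: true, // inline emitted CSS into the JS output
  },
  autoCssModules: true, // treat CSS imports as CSS modules automatically
  // Before this fix, enabling both options together panicked during a
  // library build (the reported case came from a smallfish widget build).
};
```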

@gemini-code-assist (Bot) left a comment


Code Review

This pull request introduces performance optimizations for one-shot build sessions by adding an is_short_session hint to the turbo-tasks engine. This allows the system to use ReadWriteOnShutdown storage mode and conditionally disable dependency tracking when persistent caching is not required, reducing bookkeeping overhead. Additionally, the changes include a fix for CSS module facade analysis to prevent class extraction failures when Analyze references are inlined into JavaScript. New snapshot tests for Less CSS modules have also been added to ensure correct behavior. I have no feedback to provide as there were no review comments to evaluate.

fireairforce force-pushed the library-build-support-autoCssModules branch from 866027b to 3f7aa9b on Apr 20, 2026 07:27
fireairforce changed the title from "feat(pack): inlineCss with autoCssModules cause panic" to "fix(pack): inlineCss with autoCssModules cause panic" on Apr 20, 2026
fireairforce enabled auto-merge (squash) on Apr 20, 2026 07:43
@github-actions commented

📊 Performance Benchmark Report (with-antd)

Utoopack Performance Report

Report ID: utoopack_performance_report_20260420_074426
Generated: 2026-04-20 07:44:26
Trace File: trace_antd.json (0.6GB, 1.61M spans)
Test Project: examples/with-antd


Executive Summary

| Metric | Value | Assessment |
| --- | --- | --- |
| Total Wall Time | 11,286.5 ms | Baseline |
| Total Thread Work (de-duped) | 24,430.4 ms | Non-overlapping busy time |
| Effective Parallelism | 2.2x | thread_work / wall_time |
| Working Threads | 5 | Threads with actual spans |
| Thread Utilization | 43.3% | ⚠️ Suboptimal |
| Total Spans | 1,605,835 | All B/E + X events |
| Meaningful Spans (>= 10us) | 371,054 | 23.1% of total |
| Tracing Noise (< 10us) | 1,234,781 | 76.9% of total |
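The derived metrics in this table follow directly from the raw counters; a quick sketch recomputing them (plain TypeScript, not part of the report tooling — the formulas are taken from the Assessment column above):

```typescript
// Sanity-check the derived metrics from the raw numbers in the table.
const wallTimeMs = 11_286.5;    // Total Wall Time
const threadWorkMs = 24_430.4;  // Total Thread Work (de-duped)
const workingThreads = 5;
const totalSpans = 1_605_835;
const noiseSpans = 1_234_781;   // spans shorter than 10us

// Effective Parallelism = thread_work / wall_time
const effectiveParallelism = threadWorkMs / wallTimeMs;                  // ≈ 2.2x
// Thread Utilization = parallelism spread over the working threads
const threadUtilization = (effectiveParallelism / workingThreads) * 100; // ≈ 43.3%
// Meaningful spans = total spans minus tracing noise
const meaningfulSpans = totalSpans - noiseSpans;                         // 371,054

console.log(effectiveParallelism.toFixed(1), threadUtilization.toFixed(1), meaningfulSpans);
// 2.2 43.3 371054
```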

Build Phase Timeline

Shows when each build phase is active and how much CPU it consumes.
Self-Time is the time spent exclusively in that phase (excluding children).

| Phase | Spans | Inclusive (ms) | Self-Time (ms) | Wall Range (ms) |
| --- | --- | --- | --- | --- |
| Resolve | 82,013 | 2,585.3 | 2,047.4 | 8,706.3 |
| Parse | 10,015 | 1,936.3 | 1,689.2 | 11,162.7 |
| Analyze | 220,928 | 15,817.2 | 10,850.2 | 10,922.6 |
| Chunk | 20,020 | 1,923.8 | 1,784.9 | 9,307.1 |
| Codegen | 28,674 | 3,914.2 | 2,892.1 | 9,395.2 |
| Emit | 76 | 69.6 | 36.0 | 7,631.9 |
| Other | 9,328 | 1,149.3 | 972.2 | 11,286.5 |

Workload Distribution by Diagnostic Tier

| Category | Spans | Inclusive (ms) | % Work | Self-Time (ms) | % Self |
| --- | --- | --- | --- | --- | --- |
| P0: Scheduling & Resolution | 309,751 | 19,144.6 | 78.4% | 13,475.8 | 55.2% |
| P1: I/O & Heavy Tasks | 3,078 | 147.1 | 0.6% | 113.5 | 0.5% |
| P2: Architecture (Locks/Memory) | 0 | 0.0 | 0.0% | 0.0 | 0.0% |
| P3: Asset Pipeline | 57,153 | 7,765.5 | 31.8% | 6,357.3 | 26.0% |
| P4: Bridge/Interop | 0 | 0.0 | 0.0% | 0.0 | 0.0% |
| Other | 1,072 | 338.7 | 1.4% | 325.4 | 1.3% |

Top 20 Tasks by Self-Time

Self-time is the exclusive duration: time spent in the task itself, not in sub-tasks.
This is the most accurate indicator of where CPU cycles are actually spent.

| Self (ms) | Inclusive (ms) | Count | Avg Self (us) | P95 Self (ms) | Max Self (ms) | % Work | Task Name | Top Caller |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5,628.0 | 6,643.0 | 119,221 | 47.2 | 0.1 | 18.2 | 23.0% | module | write all entrypoints to disk (1%) |
| 3,022.5 | 4,179.9 | 36,260 | 83.4 | 0.2 | 194.0 | 12.4% | analyze ecmascript module | process module (80%) |
| 1,677.6 | 2,699.8 | 17,157 | 97.8 | 0.4 | 46.2 | 6.9% | code generation | chunking (1%) |
| 1,392.8 | 4,159.3 | 56,575 | 24.6 | 0.0 | 20.3 | 5.7% | process module | module (9%) |
| 1,331.9 | 1,444.3 | 12,929 | 103.0 | 0.2 | 187.5 | 5.5% | chunking | write all entrypoints to disk (0%) |
| 1,324.5 | 1,386.5 | 7,078 | 187.1 | 0.6 | 35.9 | 5.4% | parse ecmascript | analyze ecmascript module (32%) |
| 1,162.2 | 1,277.8 | 45,211 | 25.7 | 0.0 | 9.0 | 4.8% | internal resolving | resolving (29%) |
| 887.0 | 887.0 | 9,434 | 94.0 | 0.3 | 5.9 | 3.6% | precompute code generation | code generation (59%) |
| 870.5 | 1,292.8 | 36,158 | 24.1 | 0.0 | 15.6 | 3.6% | resolving | module (14%) |
| 715.8 | 715.8 | 7,004 | 102.2 | 0.4 | 59.2 | 2.9% | compute async module info | chunking (0%) |
| 622.2 | 783.5 | 7,722 | 80.6 | 0.0 | 129.6 | 2.5% | write all entrypoints to disk | None (0%) |
| 432.8 | 433.2 | 6,984 | 62.0 | 0.0 | 41.4 | 1.8% | compute async chunks | webpack loader (0%) |
| 327.4 | 327.4 | 2,083 | 157.2 | 0.4 | 17.5 | 1.3% | generate source map | code generation (96%) |
| 301.8 | 487.0 | 579 | 521.3 | 2.3 | 47.3 | 1.2% | parse css | module (8%) |
| 297.4 | 304.4 | 964 | 308.5 | 0.8 | 20.2 | 1.2% | webpack loader | parse css (12%) |
| 62.6 | 62.6 | 2,340 | 26.7 | 0.0 | 3.5 | 0.3% | read file | parse ecmascript (91%) |
| 53.9 | 53.9 | 802 | 67.2 | 0.0 | 13.2 | 0.2% | compute binding usage info | write all entrypoints to disk (0%) |
| 32.8 | 32.8 | 37 | 885.3 | 5.9 | 10.9 | 0.1% | write file | apply effects (100%) |
| 24.6 | 27.1 | 534 | 46.1 | 0.1 | 2.4 | 0.1% | async reference | write all entrypoints to disk (1%) |
| 21.8 | 21.8 | 1,034 | 21.1 | 0.0 | 9.8 | 0.1% | collect mergeable modules | compute merged modules (0%) |
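Self-time, as used throughout these tables, is a span's inclusive duration minus the inclusive durations of its direct children. A minimal sketch of that definition (the span names and durations below are toy values for illustration, not trace data):

```typescript
// Self-time = inclusive duration minus the inclusive durations of direct
// children (grandchildren are already accounted for inside the children).
interface Span {
  name: string;
  inclusiveMs: number;
  children: Span[];
}

function selfTimeMs(span: Span): number {
  const childMs = span.children.reduce((sum, c) => sum + c.inclusiveMs, 0);
  return span.inclusiveMs - childMs;
}

const processModule: Span = {
  name: "process module",
  inclusiveMs: 100,
  children: [
    { name: "analyze ecmascript module", inclusiveMs: 60, children: [] },
    { name: "resolving", inclusiveMs: 15, children: [] },
  ],
};

console.log(selfTimeMs(processModule)); // 25: 100 - (60 + 15)
```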

Critical Path Analysis

The longest sequential dependency chains that determine wall-clock time.
Focus on reducing the depth of these chains to improve parallelism.

| Rank | Self-Time (ms) | Depth | Path |
| --- | --- | --- | --- |
| 1 | 194.1 | 2 | process module → analyze ecmascript module |
| 2 | 63.7 | 2 | code generation → generate source map |
| 3 | 59.7 | 2 | module → parse css |
| 4 | 47.3 | 2 | code generation → generate source map |
| 5 | 45.8 | 2 | process module → analyze ecmascript module |

Batching Candidates

High-volume tasks dominated by a single parent. If the parent can batch them,
it drastically reduces scheduler overhead.

| Task Name | Count | Top Caller (Attribution) | Avg Self | P95 Self | Total Self |
| --- | --- | --- | --- | --- | --- |
| analyze ecmascript module | 36,260 | process module (80%) | 83.4 us | 0.15 ms | 3,022.5 ms |

Duration Distribution

| Range | Count | Percentage |
| --- | --- | --- |
| <10us | 1,234,781 | 76.9% |
| 10us-100us | 347,785 | 21.7% |
| 100us-1ms | 18,827 | 1.2% |
| 1ms-10ms | 4,248 | 0.3% |
| 10ms-100ms | 190 | 0.0% |
| >100ms | 4 | 0.0% |

Action Items

  1. [P0] Focus on tasks with the highest Self-Time — these are where CPU cycles are actually spent.
  2. [P0] Use Batching Candidates to identify callers that should use try_join or reduce #[turbo_tasks::function] granularity.
  3. [P1] Check Build Phase Timeline for phases with disproportionate wall range vs. self-time (= serialization).
  4. [P1] Inspect P95 Self (ms) for heavy monolith tasks. Focus on long-tail outliers, not averages.
  5. [P1] Review Critical Paths — reducing the longest chain depth directly improves wall-clock time.
  6. [P2] If Thread Utilization < 60%, investigate scheduling gaps (lock contention or deep dependency chains).
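The batching idea in item 2 can be illustrated outside of turbo-tasks: the win comes from paying scheduler bookkeeping once per batch instead of once per item (36,260 times for "analyze ecmascript module" above). A toy TypeScript sketch — the scheduler and task names are invented for illustration; the real fix would use try_join on the Rust side:

```typescript
// Toy model of scheduler overhead: every dispatch through the scheduler
// costs bookkeeping, so thousands of per-item dispatches dwarf a single
// batched dispatch even though both do the same underlying work.
type Task = () => void;

class Scheduler {
  dispatches = 0; // bookkeeping events; a proxy for scheduling overhead
  run(task: Task): void {
    this.dispatches += 1;
    task();
  }
}

const modules = Array.from({ length: 1_000 }, (_, i) => `mod${i}`);
const analyzed: string[] = [];

// Unbatched: the parent schedules one child task per module.
const perItem = new Scheduler();
for (const m of modules) perItem.run(() => { analyzed.push(`analyzed ${m}`); });

// Batched: the parent schedules a single task covering the whole batch.
const batched = new Scheduler();
batched.run(() => {
  for (const m of modules) analyzed.push(`analyzed ${m}`);
});

console.log(perItem.dispatches, batched.dispatches); // 1000 1
```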

Report generated by Utoopack Performance Analysis Agent

fireairforce merged commit 2c181f9 into next on Apr 20, 2026 (40 of 58 checks passed)
fireairforce deleted the library-build-support-autoCssModules branch on Apr 20, 2026 08:00