Zero-layout-jitter streaming text renderer for LLM tokens.

A production-grade React/TypeScript library that streams LLM tokens into a jitter-free `<canvas>` surface. Text measurement and layout are offloaded to a Web Worker via `@chenglou/pretext`, keeping the main thread free for 60fps interactions.
- **Zero Layout Thrashing** – All text measurement happens in a Web Worker. The main thread never calls `measureText()`.
- **Canvas Rendering** – Bypasses DOM layout entirely. No reflows, no forced synchronous layouts, no scrollbar jitter.
- **Fully Accessible** – Parallel visually-hidden `aria-live` DOM mirror for screen readers.
- **HiDPI / Retina** – Automatic `devicePixelRatio` scaling with monitor-switching detection.
- **Viewport Culling** – O(log n) binary search paints only visible lines. Handles 10,000+ lines smoothly.
- **Font Sync** – Blocks layout until custom fonts are loaded. No flash of wrong font.
- **Tree-Shakeable** – ESM + CJS dual output, `sideEffects: false`.
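The viewport-culling feature can be illustrated with a lower-bound binary search over line offsets. This is a minimal sketch: `LineBox`, `firstVisibleLine`, and `visibleRange` are illustrative names, not the library's internals.

```typescript
interface LineBox {
  top: number;    // y offset of the line, in px
  height: number; // line height, in px
}

// Binary search for the first line whose bottom edge is below `scrollTop`.
function firstVisibleLine(lines: LineBox[], scrollTop: number): number {
  let lo = 0;
  let hi = lines.length - 1;
  let result = lines.length;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (lines[mid].top + lines[mid].height > scrollTop) {
      result = mid; // candidate: this line is at least partially visible
      hi = mid - 1; // look for an earlier visible line
    } else {
      lo = mid + 1; // line is fully above the viewport
    }
  }
  return result;
}

// Half-open index range of lines intersecting the viewport; only these
// ever get painted, so scroll cost stays O(log n + visible).
function visibleRange(
  lines: LineBox[],
  scrollTop: number,
  viewportHeight: number,
): [number, number] {
  const start = firstVisibleLine(lines, scrollTop);
  let end = start;
  while (end < lines.length && lines[end].top < scrollTop + viewportHeight) {
    end++;
  }
  return [start, end];
}
```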
```sh
npm install zero-jitter
```

Zero external runtime dependencies. The text layout engine is vendored. Only peer deps:

```json
{
  "react": ">=18.0.0",
  "react-dom": ">=18.0.0"
}
```

```tsx
import { useRef, useEffect } from 'react';
import { ZeroJitter, type ZeroJitterHandle } from 'zero-jitter';

function ChatMessage() {
  const ref = useRef<ZeroJitterHandle>(null);

  useEffect(() => {
    const es = new EventSource('/api/stream');
    es.onmessage = (e) => ref.current?.appendText(e.data);
    return () => es.close();
  }, []);

  return <ZeroJitter ref={ref} font="16px Inter" maxHeight={400} />;
}
```

```
┌─ Main Thread ──────────────────────────────────────────────┐
│                                                            │
│  SSE tokens → useZeroJitter hook → postMessage → Worker    │
│                                                            │
│  Worker response → CanvasRenderer.paint() → <canvas>       │
│                  → AccessibilityMirror → <div aria-live>   │
│                                                            │
└────────────────────────────────────────────────────────────┘

┌─ Web Worker Thread ────────────────────────────────────────┐
│                                                            │
│  Vendored pretext engine: prepareWithSegments() → layout() │
│  (Intl.Segmenter, CJK, BiDi, emoji correction)             │
│  Returns: { lines[], totalHeight, lineCount }              │
│                                                            │
└────────────────────────────────────────────────────────────┘
```
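The request/response flow in the diagram might use message shapes like these. The type names and fields below are hypothetical, for illustration only; the actual wire protocol is internal to the library.

```typescript
// Hypothetical main-thread → worker request shape.
interface LayoutRequest {
  kind: 'layout';
  text: string;     // full text to lay out
  font: string;     // CSS font shorthand
  maxWidth: number; // wrap width in px
}

// Hypothetical worker → main-thread response shape, mirroring the
// { lines[], totalHeight, lineCount } result shown in the diagram.
interface LayoutResponse {
  kind: 'layout-result';
  lines: { text: string; top: number; width: number }[];
  totalHeight: number;
  lineCount: number;
}

// Coalesce tokens that arrived within one frame into a single request,
// so the worker sees one message per frame rather than one per token.
function buildRequest(
  pendingChunks: string[],
  font: string,
  maxWidth: number,
): LayoutRequest {
  return { kind: 'layout', text: pendingChunks.join(''), font, maxWidth };
}
```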
Note: The text layout engine (pretext) is vendored into `src/vendor/pretext/` (MIT licensed) rather than kept as an npm dependency. This eliminates single-author risk and enables future streaming optimizations (incremental `prepare()` for measuring only new tokens).
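As a sketch of what an incremental `prepare()` could exploit (purely illustrative, not the vendored engine's API): with greedy wrapping, appending tokens can only change layout after the last hard line break, so only that suffix needs re-measurement.

```typescript
// Given the previously laid-out text and a newly appended chunk, return
// the suffix that actually needs re-measurement: everything after the
// last hard line break. Lines before it cannot change, because greedy
// wrapping of a fixed prefix is unaffected by appended text.
function remeasureSuffix(previousText: string, chunk: string): string {
  const combined = previousText + chunk;
  const lastBreak = combined.lastIndexOf('\n');
  return lastBreak === -1 ? combined : combined.slice(lastBreak + 1);
}
```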
| Prop | Type | Default | Description |
|---|---|---|---|
| `font` | `string` | `'16px sans-serif'` | CSS font shorthand |
| `fontSize` | `number` | `16` | Font size in px |
| `lineHeight` | `number` | `fontSize × 1.5` | Line height in px |
| `color` | `string` | `'#000'` | Text color |
| `whiteSpace` | `'normal' \| 'pre-wrap'` | `'normal'` | White-space mode |
| `height` | `number \| 'auto'` | `'auto'` | Container height |
| `maxHeight` | `number` | – | Max height before scroll |
| `autoScroll` | `boolean` | `true` | Auto-scroll on new content |
| `padding` | `number \| {top,right,bottom,left}` | `0` | Canvas padding |
| `ariaLive` | `'polite' \| 'assertive' \| 'off'` | `'polite'` | Screen reader mode |
| `className` | `string` | – | Container CSS class |
| `style` | `CSSProperties` | – | Container inline styles |
| `workerUrl` | `string \| URL` | auto | Custom worker URL |
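As an illustration of the `padding` prop's two accepted shapes, a normalizer might look like this (a hypothetical helper, not part of the public API):

```typescript
interface PaddingBox {
  top: number;
  right: number;
  bottom: number;
  left: number;
}

// Accept either a single number (applied to all four sides) or a full
// box, mirroring the `number | {top,right,bottom,left}` prop type.
function normalizePadding(padding: number | PaddingBox): PaddingBox {
  if (typeof padding === 'number') {
    return { top: padding, right: padding, bottom: padding, left: padding };
  }
  return padding;
}
```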
```ts
interface ZeroJitterHandle {
  appendText(chunk: string): void;  // Append token (no re-render)
  setText(text: string): void;      // Replace all text
  clear(): void;                    // Clear text and reset
  layout: LayoutState;              // Current layout result
  containerRef: (node: HTMLElement | null) => void;
  fontReady: boolean;               // Font loaded?
}
```

For advanced use cases where you need direct access to the layout engine:
```tsx
import { useZeroJitter } from 'zero-jitter';

function CustomRenderer() {
  const { appendText, layout, containerRef, fontReady } = useZeroJitter({
    font: '16px Inter',
    lineHeight: 24,
  });
  // Use layout.lines to render however you want
}
```

```sh
# Install dependencies
npm install

# Type check
npm run typecheck

# Build
npm run build

# Storybook
npm run storybook

# Lint
npm run lint
```

- **Token arrives** – `appendText(chunk)` appends to a `useRef` (zero React re-renders)
- **rAF batch** – Multiple tokens within a frame are coalesced into one worker message
- **Worker measures** – `@chenglou/pretext` does `prepareWithSegments()` + `layoutWithLines()`
- **Result returns** – Worker posts `{ lines[], totalHeight }` back to main thread
- **Canvas paints** – Only visible lines are `fillText()`'d (O(log n) viewport culling)
- **A11y updates** – Debounced (300ms) `aria-live` region mirrors text for screen readers
DOM text rendering triggers layout recalculation on every token append. In a streaming LLM chat UI, this means:
- Forced synchronous layouts (Layout Thrashing)
- Scrollbar position jumps (Jitter)
- Frame drops during rapid token arrival
Canvas `fillText()` bypasses the entire DOM layout pipeline. Combined with off-thread measurement, the main thread stays free for user interaction.
For streaming markdown that incrementally parses and renders only the active block, check out StreamMD – block-level memoization with built-in syntax highlighting.

- **zero-jitter** – streaming plain text (canvas, zero reflows)
- **stream-md** – streaming markdown (smart DOM, incremental parsing)

Together, they cover the streaming LLM display category.
MIT