High-quality WebGL2 background blur for video streams.
Read the full pipeline architecture docs →
Implements the full Google Meet technique stack of confidence masks, joint bilateral filtering, mask-weighted Gaussian blur, temporal EMA smoothing, masked downsampling, and foreground-biased compositing, all as a standalone, framework-agnostic library.
Most background blur libraries either give you raw segmentation masks (TensorFlow.js) or lock you into a specific video platform (Twilio, LiveKit, Agora). Gregblur sits in the gap: a complete, production-quality blur pipeline that works with any video source.
| Technique | gregblur | LiveKit OSS | Volcomix | Twilio |
|---|---|---|---|---|
| Confidence masks | Yes | Yes | Yes | Yes |
| Joint bilateral filter | Yes | No | Yes | Yes |
| Temporal smoothing | Yes | No | No | Yes |
| Mask-weighted blur | Yes | No | No | Yes |
| Masked downsample | Yes | No | No | No |
| Foreground-biased matte | Yes | No | No | No |
| Open source | Yes | Yes | Yes | No |
| Framework-agnostic | Yes | No | Yes | No |
```sh
npm install gregblur
```

With LiveKit:

```typescript
import { createLiveKitBlurProcessor } from 'gregblur/livekit'

const processor = createLiveKitBlurProcessor({
  blurRadius: 25,
  initialEnabled: true,
  segmentationModel: 'selfie-multiclass-256',
})

await track.setProcessor(processor)
```

With any WebRTC connection:

```typescript
import { createRawBlurProcessor } from 'gregblur/raw'

const processor = createRawBlurProcessor({ blurRadius: 25 })
const blurredTrack = await processor.start(cameraTrack)

// Use blurredTrack with any WebRTC connection
peerConnection.addTrack(blurredTrack)
```

Driving the core pipeline directly:

```typescript
import { createGregblurPipeline, createMediaPipeProvider } from 'gregblur'

const provider = createMediaPipeProvider({ model: 'selfie-multiclass-256' })
const pipeline = createGregblurPipeline(provider, { blurRadius: 30 })

await pipeline.init(1280, 720)
pipeline.processFrame(videoElement, performance.now())
const canvas = pipeline.getCanvas()
```

| Import | What you get |
|---|---|
| `gregblur` | Core pipeline + MediaPipe provider |
| `gregblur/livekit` | LiveKit TrackProcessor adapter |
| `gregblur/raw` | Raw MediaStreamTrack processor |
| `gregblur/detect` | Browser capability detection |
`createGregblurPipeline(provider, options)` creates the core WebGL2 blur pipeline. You manage frame timing yourself.
Options:
- `blurRadius` — Gaussian blur radius (default: `25`)
- `bilateralSigmaSpace` — Spatial sigma for the bilateral filter (default: `4.0`)
- `bilateralSigmaColor` — Color sigma for the bilateral filter (default: `0.1`)
- `initialEnabled` — Start with blur enabled (default: `true`)
- `downsampleFactor` — Background resolution divisor (default: `2`)
- `temporalBlendFactor` — EMA blend with the previous frame's mask (default: `0.24`)
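The temporal blend option reduces to a per-pixel exponential moving average over successive masks. Here is a minimal CPU reference of that formula; the real pipeline does this in a fragment shader, and whether `temporalBlendFactor` weights the previous or the current mask is an assumption here:

```typescript
// CPU reference for the temporal EMA stage; the GPU shader applies the
// same formula per pixel. Assumption: `factor` weights the previous mask.
function temporalBlend(
  prev: Float32Array,
  curr: Float32Array,
  factor = 0.24,
): Float32Array {
  const out = new Float32Array(curr.length)
  for (let i = 0; i < curr.length; i++) {
    out[i] = factor * prev[i] + (1 - factor) * curr[i]
  }
  return out
}
```

A higher factor means the mask reacts more slowly to segmentation jitter, which is what suppresses frame-to-frame flicker.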
`createMediaPipeProvider(options)` is the default segmentation provider, using MediaPipe's selfie segmentation models.
Options:
- `model` — `'selfie-multiclass-256'` or `'selfie-segmenter'` (default: `'selfie-multiclass-256'`)
- `mediapipeVersion` — CDN version (default: `'0.10.14'`)
- `visionBundleUrl` — Custom URL for `vision_bundle.mjs` if you self-host MediaPipe
- `wasmBasePath` — Custom WASM path (defaults to the jsDelivr CDN)
- `modelUrl` — Custom URL for the segmentation model file
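For a self-hosted setup, the URL options combine like this. Every path below is a placeholder, not a real asset location:

```typescript
import { createMediaPipeProvider } from 'gregblur'

// Hypothetical self-hosted configuration: serve the MediaPipe bundle,
// WASM files, and model from your own origin instead of the CDN.
const provider = createMediaPipeProvider({
  model: 'selfie-segmenter',
  visionBundleUrl: '/vendor/mediapipe/vision_bundle.mjs',
  wasmBasePath: '/vendor/mediapipe/wasm',
  modelUrl: '/models/selfie_segmenter.tflite',
})
```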
`createLiveKitBlurProcessor(options)` is a drop-in LiveKit TrackProcessor. It combines the core pipeline with a segmentation provider and track management.
`createRawBlurProcessor(options)` is a framework-agnostic processor: it takes a `MediaStreamTrack` and returns a blurred `MediaStreamTrack`.
Checks for WebGL2, WebAssembly, and Insertable Streams support, as well as the canvas `captureStream` fallback.
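The module's actual exports aren't documented here, so the sketch below is an illustrative stand-alone probe of the same browser features, not gregblur's real API:

```typescript
// Illustrative capability probe (not gregblur/detect's actual export).
// globalThis lookups keep it safe to run outside a browser too.
function detectCapabilities() {
  const g = globalThis as any
  const canvas =
    typeof g.document !== 'undefined' ? g.document.createElement('canvas') : null
  return {
    webgl2: canvas !== null && canvas.getContext('webgl2') !== null,
    wasm: typeof WebAssembly === 'object',
    // Insertable Streams: the Chrome/Edge fast path
    insertableStreams:
      typeof g.MediaStreamTrackProcessor === 'function' &&
      typeof g.MediaStreamTrackGenerator === 'function',
    // Safari/Firefox fallback path
    canvasCapture: canvas !== null && typeof canvas.captureStream === 'function',
  }
}
```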
- Chrome (desktop) — full Insertable Streams path
- Edge (desktop) — full Insertable Streams path
- Safari (desktop) — canvas captureStream fallback
- Firefox — canvas captureStream fallback
- iOS — not supported (no WebGL2 + captureStream combination)
The pipeline processes each video frame through 8 GPU stages:
- Upload — Camera frame to WebGL texture
- Segmentation — MediaPipe produces a soft confidence mask (0.0–1.0)
- Bilateral filter — Refines mask edges using frame color as guide
- Temporal blend — EMA with previous frame's mask (reduces flicker)
- Masked downsample — Half-res with foreground-weighted sampling
- Mask-weighted blur — 2-pass separable Gaussian, foreground suppressed
- Composite — Smoothstep blend with foreground-biased matte
- Output — Rendered to canvas for capture
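The blur and composite stages above reduce to simple per-pixel math, which a 1D CPU reference makes concrete. This is a sketch, not the library's shader code: the sigma derivation and smoothstep thresholds are assumptions, and `mask` follows the convention of 1.0 = background:

```typescript
// 1D reference for one pass of the mask-weighted separable Gaussian.
// Each sample is weighted by Gaussian falloff times background confidence,
// so foreground pixels barely contribute and the person does not bleed
// into the blurred background. sigma = radius / 3 is an assumption.
function maskWeightedBlur1D(
  src: Float32Array,
  mask: Float32Array,
  radius: number,
): Float32Array {
  const sigma = radius / 3
  const out = new Float32Array(src.length)
  for (let i = 0; i < src.length; i++) {
    let sum = 0
    let wsum = 0
    for (let k = -radius; k <= radius; k++) {
      const j = Math.min(Math.max(i + k, 0), src.length - 1)
      const w = Math.exp(-(k * k) / (2 * sigma * sigma)) * mask[j]
      sum += w * src[j]
      wsum += w
    }
    out[i] = wsum > 0 ? sum / wsum : src[i]
  }
  return out
}

// Foreground-biased matte: shifting the smoothstep window toward the
// foreground keeps hair and edge pixels sharp. Thresholds are illustrative.
function smoothstep(e0: number, e1: number, x: number): number {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1)
  return t * t * (3 - 2 * t)
}
function matteWeight(confidence: number): number {
  return smoothstep(0.35, 0.75, confidence) // confidence: 1.0 = background
}
```

The final composite mixes the sharp and blurred frames by `matteWeight`, so pixels below the lower threshold stay fully sharp.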
Implement the SegmentationProvider interface to use your own model:
```typescript
import type { SegmentationProvider } from 'gregblur'

const myProvider: SegmentationProvider = {
  async init(canvas) {
    // Load your model, share the GL context via canvas
  },
  segment(source, timestampMs) {
    // Return { confidenceTexture: WebGLTexture, close(): void }
    // confidenceTexture values: 1.0 = background, 0.0 = person
  },
  destroy() {
    // Cleanup
  },
}
```

Apache-2.0