A deep-zoom Mandelbrot and Julia set fractal explorer with GPU acceleration, interactive navigation, color cycling, and video rendering.
FracMann lets you explore the Mandelbrot and Julia sets interactively, then render what you find as videos:
- Zoom videos — the classic "falling into the fractal" animation, with optional color cycling
- Color cycle videos — a static fractal view with the color palette smoothly rotating, inspired by Fractint
- Live color cycling — real-time palette rotation for instant visual preview, just like Fractint
- Interactive zoom — click and drag a rectangle on the preview to zoom in, with aspect-ratio-constrained selection
- Zoom history — undo button steps back through your exploration
- GPU acceleration — PyTorch CUDA support for massively parallel rendering on NVIDIA GPUs
- Live GPU/CPU toggle — switch between GPU and CPU mid-render without restarting
- Deep zoom — perturbation theory automatically engages beyond 10^13 zoom for arbitrary-depth exploration
- 4 fractal types — Mandelbrot, Julia, Burning Ship, and Multibrot (configurable power)
- 10+ color palettes — built-in palettes plus import Fractint .map files or JSON palettes
- Save locations — bookmark interesting spots; they persist across sessions
- Save images — export the current view at full resolution as PNG or JPEG
- Video output — renders frames as JPEGs, stitches into MP4 with FFmpeg
- Python 3.8+
- FFmpeg (must be on your PATH for video stitching)
- NVIDIA GPU with PyTorch CUDA (optional, for GPU acceleration)
git clone https://github.com/YOUR_USERNAME/FracMann.git
cd FracMann
pip install -r requirements.txt

FFmpeg is required for stitching rendered frames into video files.
- Windows: Download from ffmpeg.org and add the bin folder to your PATH
- Linux: sudo apt install ffmpeg
- macOS: brew install ffmpeg
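As one example of the kind of FFmpeg invocation used for stitching, this encodes numbered JPEG frames into an MP4 (illustrative flags and frame-naming pattern, not necessarily FracMann's exact command):

```shell
# Stitch numbered JPEG frames into an MP4 at 30 fps, CRF 18 (visually lossless).
# yuv420p keeps the output playable in most players and browsers.
ffmpeg -framerate 30 -i frame_%05d.jpg -c:v libx264 -pix_fmt yuv420p -crf 18 output.mp4
```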
GPU rendering uses PyTorch with CUDA. If you already run AI tools like ComfyUI, Stable Diffusion, or LTX, you almost certainly have a working PyTorch+CUDA setup and FracMann will detect your GPU automatically.
# Install PyTorch with CUDA (pick your CUDA version at pytorch.org):
pip install torch --index-url https://download.pytorch.org/whl/cu121

FracMann uses PyTorch tensor operations for GPU computation, so it shares the same CUDA stack as your AI tools.
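GPU detection of this kind typically reduces to a CUDA availability check. A minimal sketch (not FracMann's actual code; written to degrade gracefully when torch is absent):

```python
def pick_device() -> str:
    """Return 'cuda' when PyTorch sees a CUDA-capable GPU, else 'cpu'."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # torch not installed: CPU-only mode
    return "cpu"
```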
FracMann works under WSL2 with WSLg. Note that WSLg's X11 compositor may slightly reposition the window on first interaction — this is a known WSLg behavior, not a bug in FracMann. The custom title bar with large buttons ensures usability on high-DPI displays where native window decorations are too small.
python FracMann.py

- F — Toggle full screen (borderless, controls hidden, fractal only)
- C — Toggle live color cycling (Fractint-style palette rotation)
- Select a fractal type (Mandelbrot or Julia) and a preset location from the dropdowns
- Click Generate Preview to render the view
- Click and drag on the preview image to draw a zoom rectangle
- The preview automatically re-renders at the new zoom level
- Use Undo Zoom to step back through your exploration history
- Click Save Location to bookmark an interesting spot
- Click Save Image to export at full resolution
Click the Color Cycle button (or press C) to start real-time palette rotation on the current preview. Press F to go full screen while cycling. Press C again to stop. This is the same effect as the classic Fractint color cycling mode.
- Navigate to an interesting location using the preview
- Switch to the Zoom Render tab
- Click Use Current to set the target zoom to your current preview depth
- Adjust Zoom rate (higher = faster zoom, fewer frames; lower = smoother, more frames)
- The live estimate shows frame count and duration before you commit
- Optionally set Color cycle speed for palette rotation during zoom (10-30 subtle, 50+ fast)
- Click Render Zoom Video
The zoom rate controls how much deeper each frame goes. At the default of 1.020 (2% per frame), a zoom to 10^6 produces about 697 frames (~23 seconds at 30fps).
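The frame count in that example falls out of a one-line logarithm; a quick sketch of the arithmetic (function name is illustrative):

```python
import math

def zoom_frames(target_zoom: float, rate: float = 1.020) -> int:
    """Number of frames n such that rate**n first reaches target_zoom."""
    return math.ceil(math.log(target_zoom) / math.log(rate))

frames = zoom_frames(1e6)   # 698 frames at the default 2% rate
seconds = frames / 30       # roughly 23 seconds at 30 fps
```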
- Navigate to a view you like
- Switch to the Color Cycle tab
- Set duration and number of palette rotations
- Click Render Color Cycle Video
The Use GPU checkbox is always available. When checked, FracMann attempts GPU rendering via PyTorch CUDA; if GPU rendering fails, it falls back to CPU automatically.
You can toggle this mid-render — the very next frame switches backends. The status bar shows which backend rendered each frame.
| Setting | Range | Default | Notes |
|---|---|---|---|
| Resolution | 640x480 to 7680x4320 | 3840x2160 | Higher = sharper but slower |
| Iterations | 100 to 100,000 | 5,000 | Higher = more detail at deep zoom |
| Palette | 10 options | fractint | Instantly swappable |
| FPS | 24 to 60 | 30 | Video frame rate |
| CRF | 0 to 51 | 18 | Video quality: 0 = lossless, 18 = visually lossless |
| Zoom rate | 1.001 to 1.200 | 1.020 | Per-frame zoom multiplier |
| Color cycle | 0 to 200 | 0 | Palette shift per frame during zoom (0 = off) |
- Mandelbrot — the classic z = z² + c, with perturbation theory for deep zoom
- Julia — z = z² + c where c is a constant from the selected coordinates
- Burning Ship — z = (|Re(z)| + i|Im(z)|)² + c, produces jagged asymmetric structures
- Multibrot — z = z^d + c with configurable power d (3 = triangular, 4 = square symmetry, etc.)
Click Import Palette to load a Fractint .map file (RGB triplets, one per line) or a JSON palette file. Hundreds of community-created Fractint palettes are available online. Imported palettes appear immediately in the palette dropdown.
Click Import Locations to load a JSON file containing location coordinates. Format:
[
{"name": "My Spot", "real": "-0.75", "imag": "0.1", "zoom": 500, "type": "Mandelbrot"}
]

Each pixel maps to a complex number c. The engine iterates z = z² + c and records how quickly each point escapes. A smooth iteration count feeds into the color palette for artifact-free gradients.
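Per pixel, the escape-time loop with smooth coloring looks roughly like this scalar sketch (FracMann's engine is vectorized, but the math is the same; the renormalized count uses the standard n + 1 - log₂(log|z|) formula):

```python
import math

def smooth_escape(c, max_iter=1000, radius=4.0):
    """Iterate z = z*z + c; return a fractional iteration count for smooth coloring."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            # Renormalized count: removes the visible banding of integer counts.
            return n + 1 - math.log(math.log(abs(z))) / math.log(2)
    return float(max_iter)  # never escaped: treated as inside the set
```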
On CPU, computation is parallelized via numba's parallel JIT. On GPU, PyTorch tensor operations iterate all pixels simultaneously across thousands of CUDA cores.
Standard 64-bit floating point breaks down around 10^13 zoom. Beyond that, FracMann automatically switches to perturbation theory: one reference orbit at arbitrary precision via mpmath, with all other pixels as fast double-precision perturbations.
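The per-pixel recurrence behind perturbation rendering is δₙ₊₁ = 2Zₙδₙ + δₙ² + δc, where Zₙ is the reference orbit and δc the pixel's offset from the reference point. An illustrative scalar sketch (FracMann's actual engine computes the reference with mpmath and vectorizes the deltas):

```python
def perturb_orbit(ref_orbit, delta_c, max_iter, radius=4.0):
    """Escape-time via perturbation: iterate only the offset, in fast doubles.

    ref_orbit holds Z_n values computed once at high precision and rounded
    to complex doubles; delta_c is this pixel's offset from the reference c.
    """
    delta = 0j
    for n in range(min(max_iter, len(ref_orbit))):
        z = ref_orbit[n] + delta  # reconstruct the full orbit point
        if abs(z) > radius:
            return n
        # delta_{n+1} = 2*Z_n*delta_n + delta_n**2 + delta_c
        delta = 2 * ref_orbit[n] * delta + delta * delta + delta_c
    return max_iter
```

With the reference at c₀ = 0 the delta recurrence reduces to the plain Mandelbrot iteration, which makes it easy to sanity-check.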
The palette is rotated as a whole (like Fractint), so all colors shift simultaneously. For live cycling, palette indices are precomputed once and only the palette array is rotated each frame, making it fast enough for real-time animation at full resolution.
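Rotating the lookup table instead of recoloring pixels is what makes live cycling cheap; a sketch with illustrative names:

```python
def cycle_palette(palette, shift):
    """Rotate a palette list by `shift` entries (Fractint-style color cycling).

    The precomputed per-pixel palette indices never change; each animation
    frame just re-maps them through this rotated table.
    """
    shift %= len(palette)
    return palette[shift:] + palette[:shift]
```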
python tests.py

Runs 55+ tests covering palettes, fractal computation, perturbation theory, configuration, video stitching, and GPU/CPU consistency. GPU tests are automatically skipped if no GPU is available.
Clicking Save Location writes coordinates to saved_locations.json in your project folder. These appear in the preset dropdown on next launch. The file is human-readable JSON.
FracMann.py Entry point
ui.py PyQt5 interface with interactive zoom and color cycling
frac_eng.py Fractal computation engine (CPU + GPU)
palettes.py Color palette generation and mapping
config_mgr.py Preset coordinates and saved locations
ffmpeg_stitcher.py Frame-to-video stitching via FFmpeg
tests.py Test suite
requirements.txt Python dependencies
FracMann was built with the assistance of Claude (Anthropic).
MIT License