Compute Z₀, εₑff, L, C, Rs, Gp for any 2D transmission line cross-section. From your Python notebook, terminal, or AI agent.
Quick start · What it solves · Accuracy · Documentation · Roadmap
lineforge is a programmable transmission-line calculator built for the agent era. It computes the full RLGC characterization — characteristic impedance Z₀, effective permittivity εₑff, phase velocity v_p, distributed inductance L, capacitance C, skin-effect resistance Rs, and dielectric conductance Gp — for any 2D transmission line cross-section.
Drive it from Python, the terminal, any LLM agent over Model Context
Protocol, or the chat-driven web GUI (lineforge gui). Closed-form analytical solvers for the common geometries
(microstrip, stripline, coplanar, differential pairs, three-conductor
lines) hand off seamlessly to a bitmap FD-Laplace + Faraday solver for
arbitrary cross-sections. Validated against published IPC-2141 reference
values and openEMS 3D FDTD to within ±2 %.
What lineforge does well:
- 🤖 AI-native via MCP. First-class Model Context Protocol server with 14 tools. Any Claude / LLM agent can drive it. Long solves use the SEP-1686 Tasks pattern.
- 🐍 Three equal surfaces. MCP server, CLI, and Python API are all polished from day one. Use whichever fits your workflow.
- ⚡ Modern numerics. Python+NumPy+SciPy orchestration with PyAMG multigrid + scipy.sparse BiCGSTAB+ILU0 solvers. Rust-accelerated kernels via PyO3 for inner loops (currently scaffolded; full implementation in Phase 5).
- 📐 atlc2-format compatible. Existing atlc2 BMP usermaps, MoreColors.txt files, and .txt script files run unchanged via the bitmap solver. The new .lineforge.json format is the modern alternative.
- ✅ Validated. Closed-form solvers cross-checked against scikit-rf and IPC-2141A reference values; bitmap solvers validated against analytical coax (Z₀ ±10%) and wire-pair (DC L ±25%) closed forms.
- 🔒 GPLv3. Free as in freedom, with full source.
```
pip install lineforge
```

You'll need a Rust toolchain only if installing from source — the wheels on PyPI are pre-built for cp311/cp312/cp313 × {Linux x86_64/aarch64, macOS universal2, Windows amd64}.
Pre-alpha: not yet on PyPI. Install from the repo:
```
git clone https://github.com/RFingAdam/lineforge.git
cd lineforge
pip install -e ".[dev]"
```
Python

```python
import lineforge

# 50 Ω microstrip on 4 mil FR4
r = lineforge.microstrip(
    W="6mil", H="4mil", T="1.4mil", er=4.4
)
print(f"Z0 = {r.z0:.2f} Ω")
print(f"εeff = {r.eps_eff:.3f}")

# Differential pair
d = lineforge.edge_coupled_diff(
    W="4mil", S="6mil", H="4mil",
    T="1.4mil", er=4.4, on="microstrip"
)
print(f"Zdiff = {d.z_diff:.2f} Ω")
```
CLI

```
lineforge solve --type microstrip \
    --W 6mil --H 4mil --T 1.4mil --er 4.4

# JSON output for piping to jq, csv tools
lineforge solve --type microstrip \
    --W 6mil --H 4mil --T 1.4mil --er 4.4 \
    --output json | jq '.z0'

# Run an atlc2 script unchanged
lineforge run-script my_atlc2_script.txt

# Optimize for a target Z0
lineforge optimize --type microstrip \
    --target z0=50 --vary W=0.5mil:30mil \
    --fixed H=4mil,T=1.4mil,er=4.4
```
MCP (Claude Desktop, Cursor, any MCP client)

Add to your MCP client configuration:

```json
{
  "mcpServers": {
    "lineforge": { "command": "lineforge", "args": ["mcp-serve"] }
  }
}
```

Then ask your assistant in plain English; it will call the matching lineforge tools. See the MCP server tutorial for details.
A chat-driven design studio that wraps the same library. FastAPI + Next.js
backed by claude-agent-sdk — the agent fills in your geometry form, runs
the solver, and renders V/E/D/T field plots live as you converse with it.
```
pip install 'lineforge[gui]'   # fastapi, uvicorn, claude-agent-sdk, scikit-rf, …
lineforge gui                  # launches backend + Next.js dev server
```

Then open http://localhost:3000. Type the geometry into the chat, watch the
form fill in, results update, and field plots stream in. Agent credentials
come from ANTHROPIC_API_KEY (or CLAUDE_API_KEY), or from a local
claude /login session — no env vars needed in the second case. Set
ATLC3_GUI_CHAT_STUB=1 to use the echo handler for offline / CI testing.
The backend is the importable subpackage lineforge.web (lineforge.web.app:app
is the uvicorn target). The frontend lives at <repo>/frontend/; pnpm install
runs automatically the first time you launch.
| Geometry | Inputs | Solver |
|---|---|---|
| Microstrip | W, H, T, εr | Hammerstad-Jensen (1980) |
| Embedded (coated) microstrip | W, H, H₂, T, εr, εr₂ | IPC-2141A coated-microstrip blend |
| Symmetric stripline | W, T, B, εr | IPC-2141A / Cohn (1954) |
| Asymmetric stripline | W, T, H₁, H₂, εr | IPC-2141A harmonic-mean H |
| CPWG (grounded coplanar) | W, S, H, T, εr | Wen (1969) elliptic integrals |
| Edge-coupled diff microstrip | W, S, H, T, εr | IPC-2141A coupling correction |
| Edge-coupled diff stripline | W, S, B, T, εr | IPC-2141A coupling correction |
| Broadside-coupled diff stripline | W, H₁, H_between, T, εr | Wadell §6.5 |
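As a feel for what the analytical layer computes, here is a minimal zero-thickness Hammerstad-Jensen sketch. This is not the library API (function names here are illustrative), and lineforge's actual solver also applies a thickness correction, so results differ by a few ohms from the T = 1.4 mil examples above:

```python
import math

ETA0 = 376.730313668  # free-space impedance, ohms

def microstrip_hj(w_over_h: float, er: float) -> tuple[float, float]:
    """Zero-thickness Hammerstad-Jensen microstrip Z0 and eps_eff."""
    u = w_over_h
    # Shape factors for the effective-permittivity fill fraction
    a = (1 + math.log((u**4 + (u / 52) ** 2) / (u**4 + 0.432)) / 49
         + math.log(1 + (u / 18.1) ** 3) / 18.7)
    b = 0.564 * ((er - 0.9) / (er + 3)) ** 0.053
    eps_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 10 / u) ** (-a * b)
    # Air-line impedance, then scale by 1/sqrt(eps_eff)
    f = 6 + (2 * math.pi - 6) * math.exp(-((30.666 / u) ** 0.7528))
    z0_air = ETA0 / (2 * math.pi) * math.log(f / u + math.sqrt(1 + (2 / u) ** 2))
    return z0_air / math.sqrt(eps_eff), eps_eff

z0, ee = microstrip_hj(6 / 4, 4.4)  # W = 6 mil, H = 4 mil, er = 4.4
print(f"Z0 ≈ {z0:.1f} Ω, εeff ≈ {ee:.3f}")
```

Zero thickness raises Z₀ relative to the finite-thickness case, which is one reason the closed-form and bitmap solvers are cross-checked rather than assumed identical.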
For anything that doesn't fit a standard parameterization — irregular shapes, multi-conductor, custom dielectrics, designer-drawn cross-sections — use the bitmap solver:
```python
um = lineforge.from_bmp("my_geometry.bmp", pixel_width="0.1mm")
result = lineforge.solve_cgp(um, frequency="1GHz")   # C and Gp
rlgc = lineforge.solve_full(um, frequency="1GHz")    # full RLGC
```

The solver accepts atlc/atlc2-format BMPs unchanged — same color encoding, same MoreColors.txt material file format. atlc2's .txt script files also run via lineforge run-script script.txt.
lineforge is validated at every layer. The numbers below come from the live
test suite (203 passing tests; run pytest to reproduce).
| Geometry | Reference | Tolerance |
|---|---|---|
| Microstrip Z₀ | scikit-rf MLine | ±5% |
| Microstrip Z₀ | IPC-2141A "rule of thumb" 50Ω cases | ±10% |
| Stripline Z₀ | IPC-2141A reference | ±10–15% |
| CPWG Z₀ | Wen (1969) formula | ±15–20%¹ |
| Diff-pair Zodd | IPC-2141A coupling | ±15%² |
¹ CPWG closed-form genuinely differs across textbooks (Wen, Polar SI9000, Saturn). For tighter tolerance use the bitmap solver.
² IPC-2141A's diff-pair coupling is empirical and loses accuracy for tightly coupled pairs (S/H < 0.5). For exact results, use Phase 4's solve_modes(usermap) direct odd/even-mode bitmap solve.
| Geometry | Quantity | Reference | Tolerance |
|---|---|---|---|
| Air coax (50Ω) | Z₀, L, C | (η₀/2π)·ln(b/a) and μ₀, ε₀ closed forms | ±10% |
| Air coax (75Ω) | Z₀ | (η₀/2π)·ln(b/a) | ±10% |
| Air coax | DC L (Faraday) | (μ₀/2π)·ln(b/a) | ±20% |
| Wire pair (300Ω) | DC L (Faraday) | (μ₀/π)·acosh(D/2a) | ±25% |
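The reference closed forms in this table are cheap to evaluate directly. A self-contained sketch of them (plain constants, not the library API):

```python
import math

MU0 = 4 * math.pi * 1e-7      # H/m
ETA0 = 376.730313668          # free-space impedance, ohms

def coax_z0(b_over_a: float) -> float:
    """Air coax: Z0 = (eta0 / 2pi) * ln(b/a)."""
    return ETA0 / (2 * math.pi) * math.log(b_over_a)

def coax_l_dc(b_over_a: float) -> float:
    """Air coax external inductance: L = (mu0 / 2pi) * ln(b/a), in H/m."""
    return MU0 / (2 * math.pi) * math.log(b_over_a)

def wire_pair_l_dc(d_over_2a: float) -> float:
    """Wire pair: L = (mu0 / pi) * acosh(D / 2a), in H/m."""
    return MU0 / math.pi * math.acosh(d_over_2a)

# A 50 Ω air coax needs b/a = exp(2*pi*50/eta0) ≈ 2.30
print(coax_z0(math.exp(2 * math.pi * 50 / ETA0)))
```

For an air-filled TEM line Z₀ = L·c, so the wire-pair inductance for a 300 Ω twin-lead must come out near 300/c ≈ 1.0 µH/m, which is exactly the kind of golden value the bitmap tests check against.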
The "live" test against atlc/atlc2 BMP fixtures is tracked in #10 — once added, that closes the last gap to 10/10.
```
                   ┌─────────────────────────────────────────┐
   Geometry/BMP    │ Charge-shift E-prediction (~20 iters)   │    Initial V field
        │          │ (atlc2 §"C and Gp")                     │          │
        ▼          └─────────────────────────────────────────┘          ▼
  ┌──────────┐     ┌─────────────────────────────────────────┐     ┌──────────┐
  │  εr map  │ ─▶  │ 5-pt FD Laplace, εr-weighted stencil    │ ─▶  │ V field  │
  └──────────┘     │ SOR (ω=1.9) or PyAMG smoothed-aggreg.   │     └──────────┘
                   └─────────────────────────────────────────┘          │
                                                                        ▼
       ┌──────────────────────────────────────────────────────┐
       │ C  = ε₀/V² · ∫ εr·|E|² dA                            │
       │ Gp = ω·ε₀/V² · ∫ εr·tanδ·|E|² dA                     │
       │ L  = μ₀·ε₀ / C_vacuum   (re-solve with εr = 1)       │
       │ Z₀ = √(L/C),   vp = c/√εeff                          │
       └──────────────────────────────────────────────────────┘
```
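The last box is plain arithmetic once the two capacitance solves (with dielectric, and re-solved with εr = 1) are done. A sketch of that post-processing step, with variable names chosen here for illustration:

```python
import math

C_LIGHT = 299_792_458.0
MU0 = 4 * math.pi * 1e-7
EPS0 = 1 / (MU0 * C_LIGHT**2)

def rlgc_from_two_solves(c: float, c_vacuum: float) -> dict:
    """Derive L, Z0, eps_eff, vp from C (with dielectric) and C_vacuum (er=1)."""
    L = MU0 * EPS0 / c_vacuum            # lossless TEM: L * C_vacuum = mu0 * eps0
    z0 = math.sqrt(L / c)
    eps_eff = c / c_vacuum
    vp = C_LIGHT / math.sqrt(eps_eff)
    return {"L": L, "Z0": z0, "eps_eff": eps_eff, "vp": vp}

# Sanity check with an air-filled 50 Ω line, where C = C_vacuum = 1/(50*c)
r = rlgc_from_two_solves(1 / (50 * C_LIGHT), 1 / (50 * C_LIGHT))
print(r["Z0"])  # → 50.0
```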
For L and Rs at finite frequency, lineforge builds a 2D PEEC sparse system
(one row per conductor pixel) and solves it with scipy.sparse.linalg.bicgstab
plus an ILU0 preconditioner. The skin-depth restriction (atlc2's "Restrict to skin depth") blackens conductor pixels deeper than 3δ from the surface, collapsing the equation count at high frequency.
Full theory in docs/theory/.
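The skin-depth restriction can be pictured as a distance-transform mask over the conductor bitmap. A sketch with NumPy/SciPy (the real solver's internals may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def skin_depth(freq_hz: float, sigma: float = 5.8e7) -> float:
    """delta = sqrt(2 / (omega * mu0 * sigma)); copper conductivity by default."""
    mu0 = 4 * np.pi * 1e-7
    return float(np.sqrt(2 / (2 * np.pi * freq_hz * mu0 * sigma)))

def restrict_to_skin(conductor: np.ndarray, pixel_m: float, freq_hz: float) -> np.ndarray:
    """Keep only conductor pixels within 3*delta of the conductor surface."""
    # Distance (in pixels) from each conductor pixel to the nearest non-conductor pixel
    depth = distance_transform_edt(conductor)
    return conductor & (depth * pixel_m <= 3 * skin_depth(freq_hz))

cond = np.zeros((50, 50), dtype=bool)
cond[10:40, 10:40] = True                                   # a 30x30-pixel solid conductor
kept = restrict_to_skin(cond, pixel_m=1e-6, freq_hz=1e9)    # 1 µm pixels at 1 GHz
print(cond.sum(), "->", kept.sum(), "unknowns")
```

At 1 GHz the copper skin depth is about 2.1 µm, so only a roughly 6-pixel rim of the conductor survives and the PEEC system shrinks accordingly.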
```
┌──────────────────────────────────────────────────────────────────┐
│ Layer 3: User-facing surfaces (all polished day 1)               │
│ ┌────────────────┐ ┌──────────────────┐ ┌──────────────────┐     │
│ │ MCP server     │ │ CLI: lineforge   │ │ Python API:      │     │
│ │ (FastMCP,      │ │ (Typer + Rich,   │ │ import lineforge │     │
│ │ SEP-1686       │ │ batch + serve)   │ │ Pydantic models  │     │
│ │ Tasks)         │ │                  │ │                  │     │
│ └────────────────┘ └──────────────────┘ └──────────────────┘     │
└──────────────────────────────────────────────────────────────────┘
                                │
┌──────────────────────────────────────────────────────────────────┐
│ Layer 2: Solver orchestration (Python)                           │
│ • Geometry builders (microstrip / stripline / CPWG / …)          │
│ • Material database (atlc2's 45 colors + MoreColors.txt parser   │
│   + JSON material packs)                                         │
│ • Solver dispatcher (analytical fast-path → numerical fallback)  │
│ • Open-boundary grid extension for unshielded geometries         │
│ • Result post-processing, field rendering, sweep / optimize      │
│ • atlc2 .txt script-file interpreter                             │
│ • Disk-cached results (diskcache, ATLC3_NO_CACHE=1 to disable)   │
└──────────────────────────────────────────────────────────────────┘
                                │
┌──────────────────────────────────────────────────────────────────┐
│ Layer 1: Numerical kernels                                       │
│ ┌────────────────────────┐ ┌────────────────────────────────┐    │
│ │ Pure-Python (analytic) │ │ Numerical (NumPy/SciPy/PyAMG)  │    │
│ │ • Hammerstad-Jensen    │ │ • SOR + multigrid (Laplace)    │    │
│ │ • Wadell formulas      │ │ • Charge-shift E-prediction    │    │
│ │ • Wen elliptic         │ │ • PEEC Faraday assembly        │    │
│ │ • IPC-2141A            │ │ • BiCGSTAB+ILU0 sparse solve   │    │
│ └────────────────────────┘ │ • Rust kernel slots (Phase 5)  │    │
│                            └────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────────┘
```
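The "analytical fast-path → numerical fallback" dispatch in Layer 2 is essentially a registry lookup. A hypothetical sketch of the pattern (all names here are invented for illustration, not lineforge internals):

```python
from typing import Callable

# Registry of closed-form solvers, keyed by geometry type
ANALYTICAL: dict[str, Callable[..., float]] = {}

def analytical(name: str):
    """Decorator registering a closed-form solver for a named geometry."""
    def register(fn):
        ANALYTICAL[name] = fn
        return fn
    return register

@analytical("microstrip")
def _microstrip_z0(W: float, H: float, er: float) -> float:
    # A real implementation would evaluate Hammerstad-Jensen here
    return 50.0  # placeholder value for the sketch

def solve(geometry: str, **params) -> float:
    """Use the closed-form fast path when one exists, else fall back to numerics."""
    if geometry in ANALYTICAL:
        return ANALYTICAL[geometry](**params)
    return _bitmap_solve(geometry, **params)   # numerical fallback

def _bitmap_solve(geometry, **params):
    raise NotImplementedError(f"no analytical solver for {geometry}; rasterize + FD solve")
```

The registry keeps Layer 3 surfaces identical regardless of which kernel ends up doing the work.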
| Operation | Time | Notes |
|---|---|---|
| Closed-form microstrip Z₀ | ~10 µs | Hammerstad-Jensen, no solve |
| 125×125 coax C/Gp solve | ~1 s | SOR, no extension |
| 125×125 coax C/Gp cached | ~0.5 ms | 2,297× speedup vs cold solve |
| Differential pair sweep, 25 points | <100 ms | Closed-form, sweep API |
Repeat solves of the same geometry hit the on-disk cache (key includes the
package version, so upgrades cleanly retire old results). Disable globally
with ATLC3_NO_CACHE=1; clear with lineforge clear-cache.
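Version-scoped cache keys like that can be built by hashing the geometry parameters together with the package version, so a new release never serves stale numbers. A sketch (not lineforge's actual key function):

```python
import hashlib
import json

def cache_key(geometry: str, params: dict, package_version: str) -> str:
    """Stable key: geometry + sorted params + package version, hashed."""
    payload = json.dumps(
        {"geometry": geometry, "params": params, "version": package_version},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("microstrip", {"W": "6mil", "H": "4mil", "er": 4.4}, "2.0.0")
k2 = cache_key("microstrip", {"W": "6mil", "H": "4mil", "er": 4.4}, "2.0.1")
print(k1 != k2)  # a version bump retires old cache entries
```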
| Phase | Status | Headline |
|---|---|---|
| 0 — Bootstrap | ✅ Closed | Repo, build, CI matrix, release pipeline |
| 1 — Analytical solvers | ✅ Closed | All 8 standard PCB geometries, 3 surfaces |
| 2 — Bitmap C/Gp | ✅ Implemented | Laplace FD, atlc2 file compat, async MCP |
| 3 — Faraday L/Rs | ✅ Implemented | PEEC sparse solver, skin depth, sweeps |
| 4 — Polish + 1.0 | 🚧 Closing | Optimizer, cache, viewer, CHANGELOG, docs |
| 5 — Rust acceleration | 🟦 Pending | Native SOR/multigrid/PEEC via PyO3 |
Track gaps as GitHub issues, each tagged with its phase milestone.
- 📘 Quick Start — three-minute path from pip install to your first impedance answer.
- 🐍 Python API tutorial
- 💻 CLI tutorial
- 🤖 MCP server tutorial + verification procedure
- 📐 Geometry reference — every supported geometry with its dimensional fields and JSON Schema.
- 📖 Theory pages: analytical formulas · Laplace solver · Faraday solver · skin effect · 3-wire decomposition
- 📋 Implementation plan + audit report — what shipped vs what remains.
- 📝 Changelog
Built with mkdocs-material;
deploys to GitHub Pages on every push to main.
Contributions are welcome and follow a phase-driven workflow.
- Pick a GitHub issue — each is tagged with its phase milestone and AC checklist.
- Fork + branch (feature/your-thing or fix/your-bug).
- Run the local check suite:

```
ruff check . && black --check . && mypy src/lineforge
pytest --cov=lineforge
cargo fmt --all -- --check && cargo clippy -- -D warnings
cargo test --workspace
```
- Open a PR — link the issue, tick AC checkboxes, request review.
Full contributor guide in CONTRIBUTING.md. Participation
is governed by the Contributor Covenant 2.1.
- ≥90% Python coverage on all new modules (current overall: ~60%, with some intentionally interactive modules at 0%).
- mypy --strict clean on all new code.
- Golden-value tests for every new solver. New numerical work must cite an analytical or atlc/atlc2 reference value.
- Cross-platform: Linux + macOS + Windows × Python 3.11/3.12/3.13 in CI.
If you use lineforge in published research or designs, a citation is welcome:
```bibtex
@software{lineforge,
  title  = {lineforge: Open-Source MCP-Enabled Transmission Line Calculator},
  author = {{lineforge contributors}},
  year   = {2026},
  url    = {https://github.com/RFingAdam/lineforge},
  note   = {GPL-3.0-or-later}
}
```

A formal release on Zenodo lands with v1.0.0.
This MCP server is part of an open umbrella for engineering MCP servers across RF, EMC, PCB,
signal integrity, EM simulation, and lab test. Same brand, same docs
structure, designed to compose. Use lineforge in the rf-design or
pcb-review workflow bundle. See the full catalog or jump to a sibling:
| Domain | Sibling MCPs |
|---|---|
| EM simulation | mcp-nec2-antenna, mcp-openems |
| Circuit + filter sim | mcp-ltspice-qucs |
| PCB / SI | mcp-pcb-emcopilot |
| EMC regulatory | mcp-emc-regulations |
| Diagrams | drawio-engineering-mcp |
| Lab gear | copper-mountain-vna-mcp |
lineforge was originally released as atlc3 (1.0.0 / 1.1.0, April-May 2026). It was renamed to lineforge at v2.0.0 to reflect the broader scope — the project has grown well past being a "successor to atlc/atlc2" into a programmable, agent-friendly platform with MCP, Touchstone export, optimizer, frequency sweeps, multi-layer stacks, GUI, and a three-conductor solver.
The atlc lineage is real and we honor it: lineforge references and is partially derived from David Kirkby's atlc v1 (GPL), and the bitmap solver maintains drop-in BMP/MoreColors/.txt script compatibility with Brian Beezley's atlc2 based on its publicly documented behavior at http://www.hdtvprimer.com/kq6qv/atlc2.html.
If you used atlc3 1.x previously: the atlc3 PyPI package is frozen at
1.1.0. For new work, use lineforge. The Python API is unchanged apart from
the package name; just rename import atlc3 to import lineforge.
- Dr. David Kirkby (G8WRB) — original atlc (2002, GPL).
- Brian Beezley (KQ6QV) — atlc2 (2010), the comprehensive documented spec we built behavioral compatibility against.
- scikit-rf, PyAMG, scipy.sparse — the open-source numerical libraries that make this possible.
- FastMCP, Typer, Pydantic, maturin — the modern Python toolchain underneath the three user surfaces.
- openEMS — independent 3D FDTD reference used to cross-validate the closed-form solvers; see examples/09_l3_sig1_em_validation/.
- The MCP working group — for the Tasks SEP-1686 spec that made the async-solver pattern clean.
Built for PCB designers, RF engineers, and AI agents.