Minimal, production-clean pilot project exposing simple Euler-Bernoulli beam calculations as a remote MCP service over Streamable HTTP.
- Solves 4 beam cases:
  - simply supported + one point load
  - simply supported + full-span UDL
  - cantilever + tip point load
  - cantilever + full-span UDL
- Returns:
  - support reactions
  - maximum bending moment
  - maximum deflection
  - sampled arrays: `x`, `shear`, `moment`, `deflection`
  - warnings and model assumptions
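For the simply supported case with a midspan point load, the closed-form Euler-Bernoulli results the solver is expected to reproduce are R = P/2, M_max = PL/4, and δ_max = PL³/48EI. A minimal sketch (function name is illustrative, not part of `physics_core`):

```python
def simply_supported_point_midspan(p_n: float, l_m: float, e_pa: float, i_m4: float):
    """Closed-form Euler-Bernoulli results for a midspan point load P on span L."""
    reaction_n = p_n / 2.0                            # each support carries R = P/2
    m_max_nm = p_n * l_m / 4.0                        # midspan moment M = PL/4
    defl_max_m = p_n * l_m**3 / (48.0 * e_pa * i_m4)  # midspan deflection PL^3 / 48EI
    return reaction_n, m_max_nm, defl_max_m

# Example: 12 kN at midspan of a 6 m beam, E = 210 GPa, I = 8.5e-6 m^4
r, m, d = simply_supported_point_midspan(12000.0, 6.0, 210e9, 8.5e-6)
```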
- `physics_core/`: deterministic, pure physics logic (no MCP dependency).
- `mcp_server/`: MCP tool wrapper + FastAPI app.
  - MCP endpoint mounted at `/mcp` (Streamable HTTP transport).
  - Health endpoint at `/health`.
  - Dev helper endpoint at `/dev/tools/{tool_name}` for quick local testing.
- `client/`: LangChain-based CLI tool-calling chat client with provider-agnostic model support.
- `tests/`: unit, validation, integration-like, and smoke tests.
```text
physics-mcp/
├── client/
│   └── langchain_chat.py
├── mcp_server/
│   ├── app.py
│   ├── logging_config.py
│   └── middleware.py
├── physics_core/
│   ├── assumptions.py
│   ├── models.py
│   └── solver.py
├── scripts/
│   ├── local_smoke.sh
│   └── sample_request.json
├── tests/
│   ├── test_mcp_tools.py
│   ├── test_physics_core.py
│   ├── test_smoke_local.py
│   └── test_validation.py
├── .env.example
├── docker-compose.yml
├── Dockerfile
├── pyproject.toml
└── README.md
```
```bash
python3.11 -m venv .venv
source .venv/bin/activate
# default client setup (OpenAI provider)
pip install -e '.[dev,client-openai]'
cp .env.example .env
# put LLM_API_KEY (recommended) or OPENAI_API_KEY (backward-compatible) into .env
```

For Claude/Anthropic support, install:

```bash
pip install -e '.[dev,client-anthropic]'
```

- `LLM_PROVIDER`: optional provider selector for the LangChain client (default: `openai`).
- `LLM_MODEL`: optional model name for the client (default fallback: `OPENAI_MODEL`, then `gpt-4o-mini`).
- `LLM_API_KEY`: recommended, provider-agnostic API key variable for the client.
- `OPENAI_API_KEY`: backward-compatible fallback for OpenAI usage.
- `OPENAI_MODEL`: backward-compatible fallback model variable.
- `PHYSICS_MCP_URL`: optional for the client (default: `http://127.0.0.1:8080`).
- `PHYSICS_MCP_HOST`: server bind host (default: `0.0.0.0`).
- `PHYSICS_MCP_PORT`: server port (default: `8080`).
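The fallback order above can be sketched as follows (the function name is hypothetical and not part of the package; it only illustrates the documented precedence):

```python
import os

def resolve_client_config() -> dict:
    """Resolve client settings using the documented precedence order."""
    return {
        "provider": os.getenv("LLM_PROVIDER", "openai"),
        # LLM_MODEL wins, then OPENAI_MODEL, then the hard default
        "model": os.getenv("LLM_MODEL") or os.getenv("OPENAI_MODEL") or "gpt-4o-mini",
        # LLM_API_KEY is preferred; OPENAI_API_KEY is the backward-compatible fallback
        "api_key": os.getenv("LLM_API_KEY") or os.getenv("OPENAI_API_KEY"),
        "mcp_url": os.getenv("PHYSICS_MCP_URL", "http://127.0.0.1:8080"),
    }
```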
Recommended naming scheme:

```bash
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4o-mini
export LLM_API_KEY='...'
```

Backward compatibility for existing OpenAI setups remains:

```bash
export OPENAI_API_KEY='sk-...'
export OPENAI_MODEL=gpt-4o-mini
```

For Claude/Anthropic, for example:

```bash
export LLM_PROVIDER=anthropic
export LLM_MODEL=claude-3-5-sonnet-latest
# choose one of the following:
export LLM_API_KEY='...'
# or
export ANTHROPIC_API_KEY='...'
```

Run the test suite:

```bash
pytest
```

If `physics-mcp-client` is not found, reinstall in your active venv:

```bash
python3 -m pip install -e '.[dev,client-openai]'
```

Start the server:

```bash
physics-mcp-server
# or
python3 -m mcp_server.app
```

With the server running:

```bash
./scripts/local_smoke.sh
```

Example request body for `POST /dev/tools/solve_beam_case`:
```json
{
  "case": "simply_supported_point",
  "length_m": 6.0,
  "point_load_n": 12000.0,
  "point_load_position_m": 3.0,
  "youngs_modulus_pa": 210000000000.0,
  "second_moment_m4": 0.0000085,
  "samples": 51
}
```

Equivalent curl command:
```bash
curl -sS -X POST "http://127.0.0.1:8080/dev/tools/solve_beam_case" \
  -H 'content-type: application/json' \
  -d '{
    "case": "simply_supported_point",
    "length_m": 6.0,
    "point_load_n": 12000.0,
    "point_load_position_m": 3.0,
    "youngs_modulus_pa": 210000000000.0,
    "second_moment_m4": 0.0000085,
    "samples": 51
  }'
```

Run with the MCP server available locally:
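The same request can be issued from Python using only the standard library. This is a sketch: the actual network call is left commented out because it requires the server to be running locally.

```python
import json
import urllib.request

# Same payload as the curl example for the dev helper endpoint.
payload = {
    "case": "simply_supported_point",
    "length_m": 6.0,
    "point_load_n": 12000.0,
    "point_load_position_m": 3.0,
    "youngs_modulus_pa": 210e9,
    "second_moment_m4": 8.5e-6,
    "samples": 51,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/dev/tools/solve_beam_case",
    data=json.dumps(payload).encode("utf-8"),
    headers={"content-type": "application/json"},
)
# With the server running locally:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```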
```bash
# uses LLM_* vars (recommended) or OPENAI_* fallback vars
physics-mcp-client
# fallback without console script:
python3 -m client.langchain_chat
```

Input UX in the client:
- Arrow keys navigate input history and cursor position.
- `Alt+Enter` inserts a newline; `Enter` submits the message.
- `Ctrl+T` opens Tool-Set selection while chatting; `/tools` also switches Tool-Sets; `/exit` quits the client.

The client uses LangChain chat models and supports these Tool-Sets:

- `0`: no tools
- `1`: only `physics-mcp`
If a Tool-Set includes `physics-mcp`, the client checks `PHYSICS_MCP_URL` via `/health` and prints a clear startup error if the server is offline.
If the model produces invalid tool arguments, the client returns structured tool errors to the model so it can retry with corrected parameters instead of crashing.
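That error-forwarding pattern can be sketched as follows. All names here are hypothetical stand-ins for the client internals; the key idea is that validation failures become structured tool results instead of exceptions.

```python
import json

def call_tool_safely(tool_fn, raw_args: dict) -> str:
    """Run a tool call; on bad arguments, return a structured error string for the model."""
    try:
        return json.dumps({"ok": True, "result": tool_fn(**raw_args)})
    except (TypeError, ValueError) as exc:
        # Sent back as the tool message so the model can retry with corrected parameters.
        return json.dumps({"ok": False, "error": str(exc)})

def solve_beam_case(case: str, length_m: float) -> dict:
    """Hypothetical stub standing in for the real MCP tool."""
    if length_m <= 0:
        raise ValueError("length_m must be positive")
    return {"case": case, "length_m": length_m}
```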
The dev tool endpoint also normalizes common shorthand notations from LLM tool calls (e.g. `l`/`e`/`i`, `point_load_kn`, `udl_kn_per_m`) and can derive the load case from the load parameters when `case` is missing.
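A hypothetical sketch of that normalization (the alias table and the `udl_n_per_m` target field name are assumptions, not the server's actual implementation):

```python
# Map shorthand keys from LLM tool calls onto the canonical schema fields.
ALIASES = {"l": "length_m", "e": "youngs_modulus_pa", "i": "second_moment_m4"}

def normalize_args(raw: dict) -> dict:
    """Rewrite shorthand keys and kN-based loads into canonical SI fields."""
    out = {}
    for key, value in raw.items():
        key = key.lower()
        if key in ALIASES:
            out[ALIASES[key]] = value
        elif key == "point_load_kn":
            out["point_load_n"] = float(value) * 1000.0   # kN -> N
        elif key == "udl_kn_per_m":
            out["udl_n_per_m"] = float(value) * 1000.0    # kN/m -> N/m
        else:
            out[key] = value
    return out
```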
Example prompts:
- "For a 6 m simply supported beam (E=210e9 Pa, I=8.5e-6 m^4) with a 12 kN midspan load: support reactions, maximum bending moment, and maximum deflection."
- "What assumptions does the beam model make, and where are its limits for real steel girders?"
- "Compare a simply supported beam with a point load vs. a UDL of equal total load and give the maximum deflections."
- "Generate a JSON request for `/dev/tools/solve_beam_case` for a cantilever with a UDL and 81 samples."
```bash
cp .env.example .env
# fill keys as needed
docker compose up -d --build
```

This creates a small single-service deployment suitable for home-server use. Put a reverse proxy (Caddy/Nginx/Traefik) in front later for TLS and public exposure.
- Keep `physics-mcp` on the private LAN at `:8080`.
- Add a reverse proxy with TLS certificates.
- Forward `/mcp` and `/health`.
- Add auth and real rate limiting at the proxy level.
- Keep app-level middleware hooks for future policies.
- Euler-Bernoulli assumptions only.
- No shear deformation (no Timoshenko beam).
- No variable section/material.
- Cantilever point load only at free tip (defaults to x=length if omitted).
- Simply supported point load defaults to midspan if load position is omitted.
- UDL assumed full-length only.
- No unit conversion layer (SI input/output only).
- Add partial-span loads and off-tip cantilever point loads.
- Add `solve_catenary_case` with a compatible schema.
- Add optional unit conversion at the API boundary.
- Add auth + robust rate limiting.
- Add caching for repeated parameter sets.