feat(lean): optimize all Lean tools via Pantograph and LSP incremental elaboration #4
Merged
Conversation
lean_check was spawning a full `lean --threads=4` process per expression, re-importing Mathlib each time (~5s per call). It now uses Pantograph's env.inspect first (milliseconds per expression, since Mathlib is preloaded), falling back to a single batched Lean invocation for all remaining expressions.

Changes:
- Pantograph fast path in tool_lean_check via env.inspect
- Accept an `exprs` array parameter for batch lookups in one call
- System prompt updated to guide the LLM toward batch usage
- Integration tests for single, batch, unknown, and missing-args cases
- .worktreeinclude updated with lean/.lake and vendor entries
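The fast-path-then-batch-fallback flow described above can be sketched as follows. This is a minimal stand-in, not the real tool code: `pantograph_inspect` and `batch_compile_check` are hypothetical names modeling env.inspect (a hash lookup against the preloaded environment) and the single batched `lean` invocation.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for Pantograph's env.inspect: a hash lookup
// against the preloaded Mathlib environment (~ms per expression).
fn pantograph_inspect(env: &HashMap<&str, &str>, expr: &str) -> Option<String> {
    env.get(expr).map(|ty| ty.to_string())
}

// Hypothetical stand-in for the fallback: one `lean` invocation that
// checks all remaining expressions in a single batch.
fn batch_compile_check(exprs: &[&str]) -> Vec<(String, String)> {
    exprs.iter().map(|e| (e.to_string(), format!("<type of {e}>"))).collect()
}

fn check_exprs(env: &HashMap<&str, &str>, exprs: &[&str]) -> Vec<(String, String)> {
    let mut results = Vec::new();
    let mut misses = Vec::new();
    for &e in exprs {
        match pantograph_inspect(env, e) {
            Some(ty) => results.push((e.to_string(), ty)), // fast path
            None => misses.push(e),                        // defer to batch fallback
        }
    }
    if !misses.is_empty() {
        // One slow invocation total, regardless of how many expressions missed.
        results.extend(batch_compile_check(&misses));
    }
    results
}

fn main() {
    let env = HashMap::from([("Nat.add", "Nat → Nat → Nat")]);
    let out = check_exprs(&env, &["Nat.add", "Foo.bar"]);
    assert_eq!(out.len(), 2);
    println!("{out:?}");
}
```

The batching is why the `exprs` array parameter matters: N misses still cost one compiler start-up, not N.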
Pantograph's frontend.process doesn't reliably handle full-file verification: with imports it re-imports Mathlib (~170s); without imports, the elaboration context is wrong. Revert lean_verify and lean_eval to the Lean compiler fallback. The lean_check Pantograph fast path via env.inspect remains (tested and working).
lean_verify now tries the existing LeanLspMcp.get_diagnostics() before falling back to spawning a fresh lean process. The LSP server keeps Mathlib loaded after the first elaboration, so subsequent verifications of the same file only re-elaborate changed portions (~200ms-2s vs 5-30s).

Changes:
- Add verify_scratch_via_lsp() in verify.rs: bridges DiagnosticsResult to LeanVerificationSummary with proper sorry detection
- tool_lean_verify tries the LSP fast path when ctx.lsp_mcp is available
- verify_node_at accepts an optional LSP handle (None at non-tool call sites)
- All call sites updated (pass None where LSP is not yet threaded)
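The optional-handle fallback shape can be illustrated with a small sketch. The names (`LspHandle`, `verify_scratch_via_lsp`, a `spawn_lean_fallback` helper) mirror the description above, but the bodies are placeholders, not the real verify.rs implementation.

```rust
// Stand-in for a connected LeanLspMcp client handle.
struct LspHandle;

struct Summary { ok: bool, via_lsp: bool }

// Placeholder: the real function calls get_diagnostics() and maps the
// DiagnosticsResult into a verification summary. Because the LSP server
// keeps Mathlib loaded, this path is incremental (~200ms-2s).
fn verify_scratch_via_lsp(_lsp: &LspHandle, _file: &str) -> Option<Summary> {
    Some(Summary { ok: true, via_lsp: true })
}

// Placeholder: fresh `lean` process, correct but slow (5-30s, re-imports Mathlib).
fn spawn_lean_fallback(_file: &str) -> Summary {
    Summary { ok: true, via_lsp: false }
}

// Call sites without an LSP handle pass None and get the compiler path.
fn verify(file: &str, lsp: Option<&LspHandle>) -> Summary {
    lsp.and_then(|h| verify_scratch_via_lsp(h, file))
        .unwrap_or_else(|| spawn_lean_fallback(file))
}

fn main() {
    assert!(verify("Scratch.lean", Some(&LspHandle)).via_lsp);
    assert!(!verify("Scratch.lean", None).via_lsp);
}
```

Returning `Option` from the LSP path lets any LSP-side failure fall through to the compiler rather than surfacing as a verification error.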
…otes
LSP diagnostics use backtick quotes (`sorry`) not single quotes ('sorry').
Added integration tests that confirm:
- Valid proofs verify successfully via LSP
- Sorry proofs are correctly rejected
- Type errors are detected
- Incremental verification is fast (~23ms after cold start)
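Because the LSP backtick-quotes the keyword while compiler output single-quotes it, sorry detection has to accept both forms. A hedged sketch of such a matcher (the actual check in verify.rs may differ):

```rust
// Accept both quoting styles of Lean's "declaration uses ... sorry" warning:
// LSP diagnostics emit `sorry` (backticks), compiler output emits 'sorry'.
fn mentions_sorry(message: &str) -> bool {
    message.contains("`sorry`") || message.contains("'sorry'")
}

fn main() {
    assert!(mentions_sorry("declaration uses `sorry`"));
    assert!(mentions_sorry("declaration uses 'sorry'"));
    // Requires the closing quote, so longer identifiers don't match.
    assert!(!mentions_sorry("unknown identifier `sorryAx2`"));
}
```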
Route lean_eval and lean_search_tactic through LeanLspMcp.get_diagnostics() for incremental elaboration, following the same pattern as lean_verify. Extract suggestion parsing into a shared extract_search_suggestions() function. All three slow tools now have the LSP fast path:
- lean_verify: ~6ms incremental (was 5-30s)
- lean_eval: incremental after warmup (was 5-30s)
- lean_search_tactic: incremental after warmup (was 15-30s)

Integration tests added for all paths.
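The shared extractor might look like the sketch below. Lean's `exact?`/`apply?` search tactics print suggestion lines prefixed with "Try this:", but the PR doesn't show the real parsing, so the prefix handling here is an assumption.

```rust
// Hedged sketch of extract_search_suggestions(): collect one suggested
// tactic per "Try this:" line in the search-tactic output.
fn extract_search_suggestions(output: &str) -> Vec<String> {
    output
        .lines()
        .filter_map(|line| line.trim().strip_prefix("Try this:"))
        .map(|s| s.trim().to_string())
        .collect()
}

fn main() {
    let out = "Try this: exact Nat.add_comm a b\nsome other diagnostic\n  Try this: simp";
    let suggestions = extract_search_suggestions(out);
    assert_eq!(suggestions, vec!["exact Nat.add_comm a b", "simp"]);
}
```

Sharing one parser keeps lean_eval and lean_search_tactic consistent whether the output arrived via the LSP fast path or the compiler fallback.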
Summary
Optimizes every slow Lean tool to use fast paths (Pantograph REPL or LSP incremental elaboration), with graceful fallback to the existing `lean --threads=4` compiler invocation.

- lean_check: `env.inspect` fast path (hash lookup, ~ms). Accepts a batch `exprs` array parameter so the LLM can check multiple names in one call.
- lean_verify / lean_eval / lean_search_tactic: `get_diagnostics()` fast path via lean-lsp-mcp. After first elaboration (~18s Mathlib import), subsequent verifications are incremental (~6-58ms).
- `.worktreeinclude` updated with `lean/.lake` and `vendor` for worktree compatibility.
- System prompt updated to guide batch `lean_check` usage.

Performance
Tools benchmarked: lean_check (single), lean_check (batch 3), lean_verify, lean_eval, lean_search_tactic, lean_screen_tactics, lean_goals.

First call of each session still takes ~18s (Mathlib cold start), which is unavoidable. All subsequent calls benefit from incremental elaboration.
Test plan
- cargo clippy --workspace -- -D warnings
- cargo test --workspace (all 25 tests pass)