refactor: replace grind canonicalizer with type-directed normalizer #13166
Merged
leodemoura merged 10 commits into master on Mar 28, 2026
Conversation
Add `Sym/Canon.lean` with a type-directed canonicalizer that normalizes expressions in type positions. Performs eta reduction, projection reduction, match/ite reduction, and basic Nat normalization. Instances are canonicalized via `synthInstance`. This is the foundation for replacing `isDefEq`-based type canonicalization in grind's `canonImpl` with direct normalization. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
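The reductions listed above can be sanity-checked definitionally. A hedged illustration in Lean (these examples only demonstrate the underlying definitional equalities, not the canonicalizer's implementation):

```lean
-- Each left-hand side is a shape that may appear in a type position;
-- the right-hand side is the normalized form.
example (f : Nat → Nat) : (fun x => f x) = f := rfl        -- eta reduction
example (a b : Nat) : (a, b).1 = a := rfl                  -- projection reduction
example (a b : Nat) : (if True then a else b) = a := rfl   -- ite reduction
example (n : Nat) : n + 1 + 1 = n + 2 := rfl               -- Nat normalization
```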
Expand the module docstring with sections on reductions, instance canonicalization, and caching strategy. Add docstrings to `reduceProjFn?`, `checkDefEqInst`, and the public `canon` entry point. Rename `isToNormArithDecl` to `isNatArithApp` for clarity. Fix typo in `canonInsideType'` docstring. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire `Sym.Canon.canon` into `grind`'s `canonImpl`, replacing the old `isDefEq`-based type canonicalization with direct normalization. The new canonicalizer goes inside binders and normalizes types via eta reduction, projection reduction, match/ite reduction, and Nat arithmetic normalization. Instances are re-synthesized via `synthInstance` with the type normalized first, ensuring `OfNat (Fin (2+1)) 0` and `OfNat (Fin 3) 0` produce the same canonical instance. Add `no_index` annotations to `val_addNat` and `val_castAdd` patterns in `Fin/Lemmas.lean` — arithmetic in type positions is now normalized, so patterns must not rely on the un-normalized form for indexing. Update test expected outputs for `grind_12140` (cleaner diagnostics), `grind_9572` (eta-reduced output), `grind_array_attach` (disable out-of-scope theorems), `grind_indexmap_trace` (hash changes). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
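The `OfNat` example from the message above, as a hedged Lean sanity check (these only verify the definitional equalities the instance canonicalization relies on):

```lean
-- The type indices are definitionally equal after Nat normalization...
example : Fin (2 + 1) = Fin 3 := rfl
-- ...so re-synthesizing against the normalized type yields the same
-- canonical instance term, and the two literals agree.
example : (0 : Fin (2 + 1)) = (0 : Fin 3) := rfl
```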
Delete the old canonicalizer.
Mathlib CI status (docs):
Reference manual CI status:
kim-em added a commit to kim-em/lean4 that referenced this pull request on Mar 31, 2026:
After leanprover#13166, the canonicalizer normalizes Nat arithmetic in type positions (e.g. `n + 1 + 1` → `n + 2` in `Fin`) but leaves value-level expressions unchanged. When `toPoly` processes `↑n + 1 + 1` (in Int), the middle `1` appears as a numeral in a non-tail position and `addMonomial` would fall back to treating it as a variable. Fix: use `Poly.addConst` (which already exists in `Init.Data.Int.Linear`) to fold the numeral into the polynomial's constant term, regardless of position. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
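A minimal sketch of the fix described above, assuming a simplified model of the polynomial type (the real `Poly` and `Poly.addConst` live in `Init.Data.Int.Linear`; the constructor shapes here are illustrative):

```lean
-- Simplified model: a linear polynomial is a sum of k·x terms ending in a
-- constant. `addConst` folds a numeral into that trailing constant, no
-- matter where the numeral appeared in the source expression.
inductive Poly where
  | num (k : Int)                      -- constant term
  | add (k : Int) (x : Nat) (p : Poly) -- k * x + rest

def Poly.addConst (p : Poly) (k : Int) : Poly :=
  match p with
  | .num k'     => .num (k' + k)
  | .add c x q  => .add c x (q.addConst k)
```

With this, `↑n + 1 + 1` contributes both `1`s via `addConst`, instead of `addMonomial` treating the mid-position numeral as a variable.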
kim-em added a commit to leanprover-community/mathlib4-nightly-testing that referenced this pull request on Mar 31, 2026:
After leanprover/lean4#13166, grind can no longer see through the chain of semireducible defs connecting `J.toCoverage ≤ K.toCoverage` with `∀ ⦃X⦄, ∀ S ∈ J.coverings X, Sieve.generate S ∈ K X`. The new lemma states this defeq explicitly so it can be passed as a grind hint. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
kim-em added a commit to leanprover-community/mathlib4-nightly-testing that referenced this pull request on Mar 31, 2026:
The grind calls in fix_aux were introduced in leanprover-community#35682 and broke after leanprover/lean4#13166. test/grind_mwe.lean contains a confirmed regression MWE (passes nightly-2026-03-25, fails nightly-2026-03-30). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
volodeyka pushed a commit that referenced this pull request on Apr 16, 2026:
…13166) This PR replaces the `grind` canonicalizer with a new type-directed normalizer (`Sym.canon`) that goes inside binders and applies targeted reductions in type positions, eliminating the O(n^2) `isDefEq`-based approach.

The old canonicalizer maintained a map from `(function, argument_position)` to previously seen arguments, iterating the list and calling `isDefEq` for each new argument. This caused performance problems on some goals: for a goal containing `n` numeric literals, it would perform O(n^2) `isDefEq` comparisons.

The new canonicalizer normalizes types directly:

- **Instances**: re-synthesized via `synthInstance` with the type normalized first, so `OfNat (Fin (2+1)) 0` and `OfNat (Fin 3) 0` produce the same instance.
- **Types**: normalized with targeted reductions — eta, projection, match/ite/cond, and Nat arithmetic (`n.succ + 1` → `n + 2`, `2 + 1` → `3`).
- **Values**: traversed but not reduced, preserving lambdas for grind's beta module.

The canonicalizer enters binders (the old one did not), using separate caches for type-level and value-level contexts. Propositions are not normalized to avoid interfering with grind's proposition handling.

Move `SynthInstance` from `Grind` to `Sym` since the canonicalizer now lives in `Sym` and needs instance synthesis. The `Grind` namespace re-exports the key functions.

Add `no_index` annotations to `val_addNat` and `val_castAdd` patterns in `Fin/Lemmas.lean` — arithmetic in type positions is now normalized, so patterns must not rely on the un-normalized form for e-matching indexing.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
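A small Lean illustration of the type/value split described above (hedged: these examples only check the definitional equalities the normalizer relies on, not the canonicalizer itself):

```lean
-- Type position: an index like `n + 1 + 1` is normalized to `n + 2`,
-- so the two `Fin` types below become the same canonical type.
example (n : Nat) : Fin (n + 1 + 1) = Fin (n + 2) := rfl
-- Value position: a lambda like the one below is traversed but not
-- eta-reduced by the canonicalizer, so grind's beta module still sees it.
example (f : Nat → Nat) (x : Nat) : (fun y => f y) x = f x := rfl
```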