
[lia] Add Strategy to speed-up conversion #14607

Merged: 1 commit merged into coq:master on Jul 21, 2021

Conversation

@fajb (Contributor) commented Jul 6, 2021

Add a Strategy directive for lia to speed up conversion.
(The problem seems to be lia-specific.)

Kind: bug fix / performance

Fixes #14604

  • Added / updated test-suite

@fajb added the kind: performance, needs: benchmarking, and part: micromega labels Jul 6, 2021
@fajb requested a review from a team as a code owner July 6, 2021 12:58
@@ -142,6 +142,8 @@ Fixpoint Zeval_expr (env : PolEnv Z) (e: PExpr Z) : Z :=
| PEopp e => Z.opp (Zeval_expr env e)
end.

Strategy expand [ Zeval_expr ].
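For context, Strategy assigns an unfolding level to each listed constant; during conversion, constants with lower levels are unfolded first, and expand is the most eager setting. Below is a minimal sketch of the two forms discussed in this PR, using a hypothetical constant (my_eval is not part of the patch):

```coq
(* Hypothetical constant, used only to illustrate the command forms. *)
Definition my_eval (n : nat) : nat := n + 0.

(* "expand" is the lowest (most eager) level: conversion unfolds the
   constant before anything declared at a higher level. *)
Strategy expand [ my_eval ].

(* A negative integer level also unfolds earlier than the default
   level 0, but less eagerly than "expand". A later declaration
   overrides an earlier one, so only one of these would be in effect. *)
Strategy -100 [ my_eval ].
```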
A Member commented on the added Strategy line:

Is there a reason to only pass this one? I expect that on sufficiently large goals, not passing along the full list of identifiers will eventually bite you.

@fajb (Contributor, Author) replied:

I did a first attempt with -100 and the list of references you proposed. There was a significant slowdown for coq-coqprime, more than 2%. Now the directive mimics what ring does, and the bench looks better.

Yet I don't know whether (or why) the culprit is -100 or the longer list of references.
So far, the current setting seems like a good compromise. If you find a problematic example, I'll need to dig deeper :(
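For reference, the two variants that were benchmarked can be sketched as below. The longer identifier list suggested in the review is not reproduced in this thread, so the bracketed comment is only a placeholder for it:

```coq
(* First attempt: a numeric level over the longer list of evaluator /
   checker constants (the comment stands in for that list). *)
Strategy -100 [ Zeval_expr (* ... other micromega evaluator constants ... *) ].

(* Merged version: eagerly unfold a single evaluator, mirroring the
   kind of Strategy declaration that ring uses. *)
Strategy expand [ Zeval_expr ].
```

Both forms are meant to appear in the file where Zeval_expr is defined, as in the diff above.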

The Member replied:

Was the slowdown localized to some particular tactic call, or was it generalized?

  • If the slowdown was in something using zchecker, we probably forgot a constant in the list.
  • If the slowdown was in some other reflective tactic, that tactic probably uses some of the same constants as zchecker, and its constants should be added to the list.
  • If the slowdown was in a particular tactic line that's not reflective, that's probably an indication that something interesting is happening (and I'd like to know what).
  • If the slowdown is in Qed, that's interesting too, and probably indicates that we're forgetting a cast somewhere in a proof term or something like that.
  • If the slowdown is distributed, then I have no idea what's going on, and I guess this change is the best we can hope for.

I'd love to know the root cause of the slowdown (and one easy test would be to bench all four combinations of expand/-100 and the two lists), but as long as the current change doesn't have adverse performance effects, this shouldn't be considered blocking.
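A simple way to check whether a regression is concentrated in one tactic call is to wrap the suspect call in Time. The goal below is made up and only shows the pattern:

```coq
Require Import ZArith Lia.
Open Scope Z_scope.

(* Made-up goal: Time prints how long this single lia call takes,
   which helps tell a localized regression from a diffuse one. *)
Goal forall x y : Z, x <= y -> x < y + 1.
Proof.
  intros x y H.
  Time lia.
Qed.
```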

@fajb (Contributor, Author) commented Jul 8, 2021

I never quite understand how to interpret bench results. Here they are:

┌─────────────────────────────┬─────────────────────────┬───────────────────────────────────────┬───────────────────────────────────────┬─────────────────────────┬─────────────────┐
│                             │      user time [s]      │              CPU cycles               │           CPU instructions            │  max resident mem [KB]  │   mem faults    │
│                             │                         │                                       │                                       │                         │                 │
│                package_name │     NEW      OLD  PDIFF │            NEW             OLD  PDIFF │            NEW             OLD  PDIFF │     NEW      OLD  PDIFF │ NEW  OLD  PDIFF │
├─────────────────────────────┼─────────────────────────┼───────────────────────────────────────┼───────────────────────────────────────┼─────────────────────────┼─────────────────┤
│                  coq-stdlib │  412.74   414.62  -0.45 │  1155258166616   1159201093096  -0.34 │  1473631123419   1474261785528  -0.04 │  594772   594732   0.01 │   0    0    nan │
│                 coq-bignums │   55.52    55.74  -0.39 │   153641435643    153952388131  -0.20 │   206509023844    206732660553  -0.11 │  464372   464056   0.07 │   0    0    nan │
│       coq-mathcomp-fingroup │   47.84    47.95  -0.23 │   133086068875    132686148173   0.30 │   186450228379    186460994425  -0.01 │  482432   481756   0.14 │   0    0    nan │
│               coq-fourcolor │ 2542.97  2546.78  -0.15 │  7089740541853   7098108240397  -0.12 │ 12242560263661  12240948157741   0.01 │  732148   731904   0.03 │   0    0    nan │
│                coq-coqprime │  325.77   326.18  -0.13 │   904883032607    905847185372  -0.11 │  1595910201284   1595200069285   0.04 │  919636   919452   0.02 │   0    0    nan │
│  coq-performance-tests-lite │ 1554.53  1555.65  -0.07 │  4337486548020   4339864862296  -0.05 │  7125536143273   7127231296559  -0.02 │ 2085048  2085016   0.00 │   0    0    nan │
│                coq-bedrock2 │  309.74   309.86  -0.04 │   858956202346    858645486989   0.04 │  1506953492041   1505921682964   0.07 │  799836   771352   3.69 │   0    0    nan │
│        coq-mathcomp-algebra │  141.05   141.07  -0.01 │   391777247855    391293561658   0.12 │   511135531050    511150034022  -0.00 │  554288   554132   0.03 │   0    0    nan │
│ coq-rewriter-perf-SuperFast │  737.86   737.94  -0.01 │  2054240691526   2053382655277   0.04 │  3213630561819   3214010250395  -0.01 │ 1074044  1074092  -0.00 │   0    0    nan │
│                    coq-corn │ 1471.96  1471.88   0.01 │  4097017288539   4095441761709   0.04 │  5896386047437   5896043354595   0.01 │  813140   812996   0.02 │   0    0    nan │
│      coq-mathcomp-odd-order │ 1048.68  1048.55   0.01 │  2922456456666   2921017282359   0.05 │  4665029641490   4665472184931  -0.01 │  958644   904576   5.98 │   0    0    nan │
│              coq-verdi-raft │ 1099.26  1099.10   0.01 │  3061857227946   3058954672425   0.09 │  4325348764137   4324553038361   0.02 │  902604   902860  -0.03 │   0    0    nan │
│      coq-mathcomp-character │  149.55   149.50   0.03 │   415855820914    415538108265   0.08 │   585134145915    585137191546  -0.00 │  731860   731860   0.00 │   0    0    nan │
│            coq-fiat-parsers │  583.48   583.24   0.04 │  1631816856728   1632095079448  -0.02 │  2496745277971   2496485572040   0.01 │ 3485460  3485580  -0.00 │   0    0    nan │
│                 coq-unimath │ 4292.12  4289.92   0.05 │ 11944101493846  11939158234706   0.04 │ 22925609591814  22910674544674   0.07 │ 3277808  3266924   0.33 │   0    0    nan │
│             coq-lambda-rust │ 1260.86  1260.20   0.05 │  3507854495466   3505591195826   0.06 │  5063194755298   5064649892305  -0.03 │ 1075540  1099888  -2.21 │   0    0    nan │
│                  coq-geocoq │ 1331.63  1330.88   0.06 │  3709604590573   3709919355354  -0.01 │  5469006076667   5468116670865   0.02 │  986648   980868   0.59 │   0    0    nan │
│             coq-fiat-crypto │ 4820.97  4815.85   0.11 │ 13388259387217  13373653950072   0.11 │ 23609771170800  23624925969093  -0.06 │ 2216712  2225988  -0.42 │   0    0    nan │
│      coq-mathcomp-ssreflect │   53.84    53.75   0.17 │   146622634083    146853652012  -0.16 │   190342331557    190302079058   0.02 │  525748   525936  -0.04 │   0    0    nan │
│          coq-mathcomp-field │  220.83   220.40   0.20 │   614883464023    614471484335   0.07 │   940624956711    940588897208   0.00 │  667748   667604   0.02 │   0    0    nan │
│       coq-engine-bench-lite │  312.45   311.83   0.20 │   879374127652    878431195788   0.11 │  1652031216077   1641733525304   0.63 │ 3506464  3505996   0.01 │   0    0    nan │
│       coq-mathcomp-solvable │  169.24   168.87   0.22 │   470540601337    469891289545   0.14 │   668642761956    668584878953   0.01 │  673888   695568  -3.12 │   0    0    nan │
│               coq-perennial │ 6812.46  6796.79   0.23 │ 18974362892834  18933022957144   0.22 │ 29096549601329  29095507809156   0.00 │ 3079612  3124368  -1.43 │   0    0    nan │
│                coq-rewriter │  668.00   666.15   0.28 │  1854260647361   1848860769305   0.29 │  2845620099244   2843766956911   0.07 │ 1074288  1074288   0.00 │   0    0    nan │
│                   coq-flocq │  146.80   146.39   0.28 │   406913810501    405878248590   0.26 │   504145952538    504825108805  -0.13 │  810444   810592  -0.02 │   0    0    nan │
│                   coq-color │  484.60   483.10   0.31 │  1350737805126   1347917526329   0.21 │  1699918374463   1698708388145   0.07 │ 1154384  1154264   0.01 │   0    0    nan │
│               coq-fiat-core │  109.12   108.61   0.47 │   307209355438    306150126199   0.35 │   414176357555    412742536504   0.35 │  480824   480992  -0.03 │   0    0    nan │
│            coq-math-classes │  174.52   173.70   0.47 │   486978855550    485651098172   0.27 │   644287527428    644274358152   0.00 │  481344   481328   0.00 │   0    0    nan │
│                 coq-coqutil │   58.68    58.39   0.50 │   162618159077    162411513809   0.13 │   214831202358    214448954430   0.18 │  599448   600780  -0.22 │   0    0    nan │
│                   coq-verdi │   98.40    97.90   0.51 │   270108697559    269122641326   0.37 │   380264188714    379232383780   0.27 │  544116   544052   0.01 │   0    0    nan │
│                    coq-hott │  280.28   278.77   0.54 │   772048886512    769751373696   0.30 │  1193574181829   1193581648160  -0.00 │  569424   569460  -0.01 │   0    0    nan │
│                    coq-core │  107.13   106.30   0.78 │   296438561009    296092422573   0.12 │   458703091100    458821985941  -0.03 │  289112   289024   0.03 │   0    0    nan │
│              coq-coquelicot │   68.85    67.66   1.76 │   187672360440    186317661498   0.73 │   240315691599    239437953909   0.37 │  557700   557660   0.01 │   0    0    nan │
└─────────────────────────────┴─────────────────────────┴───────────────────────────────────────┴───────────────────────────────────────┴─────────────────────────┴─────────────────┘

@JasonGross (Member) commented:

Looks like mostly noise to me (@ppedrot, can you confirm?). The main one I'm concerned about is coq-perennial (6812.46 s new vs. 6796.79 s old, +0.23 %), which gained 16 seconds, and I'd be interested to know if this is spread out or concentrated.

@fajb (Contributor, Author) commented Jul 13, 2021

@ppedrot If there is nothing suspicious in the bench, could you merge?

@ppedrot self-assigned this Jul 21, 2021
@ppedrot removed the needs: benchmarking label Jul 21, 2021
@ppedrot added this to the 8.15+rc1 milestone Jul 21, 2021
@ppedrot merged commit 1e4f6b4 into coq:master Jul 21, 2021
Labels
kind: performance (Improvements to performance and efficiency.)
part: micromega (The lia, nia, lra, nra and psatz tactics. Also the legacy omega tactic.)

Projects
None yet

Development
Successfully merging this pull request may close these issues: slow lia conversion

3 participants