Questionable runtime performance #2552

Closed
0152la opened this issue Sep 12, 2019 · 3 comments

Comments

@0152la (Contributor) commented Sep 12, 2019

We have noticed some interesting runtime results when running Z3 over three related test cases. The original test case (constraint-1015084_orig.smt2) was taken from the QF_LIA benchmark set of SMT-COMP. We produced two follow-up test cases. For the first (constraint-1015084_rewrite.smt2), we used the C++ API to read the original file via z3::context::parse_file, iterated over the returned vector of assertions, added each one to a z3::solver, and finally printed the result of z3::solver::to_smt2 to a file. For the second (constraint-1015084_rewrite_setlogic.smt2), we added (set-logic QF_LIA) to the rewritten test case.
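
For reference, a minimal sketch of that rewriting step, assuming a z3++.h version in which parse_file returns the assertions as an expr_vector:

```cpp
#include <fstream>
#include "z3++.h"

int main() {
    z3::context ctx;
    // Read the original benchmark; each element is one assertion.
    z3::expr_vector assertions = ctx.parse_file("constraint-1015084_orig.smt2");

    z3::solver s(ctx);
    for (unsigned i = 0; i < assertions.size(); ++i)
        s.add(assertions[i]);

    // Serialize the solver's assertion stack back to SMT-LIB2.
    std::ofstream out("constraint-1015084_rewrite.smt2");
    out << s.to_smt2();
}
```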

We execute all three test cases, first with default options (passing no additional arguments to the z3 binary), then with auto_config=false. We have run the experiment a number of times to account for noise and observe the same pattern each time. The numbers below are selected from one particular run.
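
The two configurations correspond to invoking `z3 <file>.smt2` versus `z3 auto_config=false <file>.smt2`. When driving Z3 from the C++ API instead, the same setting can, as far as we know, be applied as a global parameter; a minimal sketch:

```cpp
#include "z3++.h"

int main() {
    // Sketch: disable auto_config globally, mirroring the command-line
    // argument auto_config=false; set it before creating the context.
    z3::set_param("auto_config", false);
    z3::context ctx;
    // ... load the benchmark and call check() as in the sketch above ...
}
```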

| Test case        | Default (s) | auto_config=false (s) |
| ---------------- | ----------- | --------------------- |
| orig             | 126         | 53                    |
| rewrite          | 468         | 34                    |
| rewrite_setlogic | 389         | 36                    |

We are primarily interested in the large gap between orig and rewrite. Is a difference of this magnitude within what one would expect? Could rewriting via the API, as described above, prevent certain heuristics from triggering? Or should this be treated as a potential performance issue?

The difference between the default execution and the auto_config=false execution is also interesting, but we assume it is more easily explained: auto_config makes many decisions that can affect execution time. However, it is peculiar that disabling it improves performance so much. Is the expectation that a user of Z3 will experiment to find the flags most suitable for the problem at hand, or could this be considered an issue, since auto_config is presumably not expected to degrade performance by an order of magnitude?

performance_tests.zip

@NikolajBjorner (Contributor) commented:

Note that you can get different runtimes by setting smt.random_seed=NNN to different values of NNN.
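
A minimal sketch of such a seed sweep via the C++ API (the seed values below are arbitrary; setting smt.random_seed globally here matches the command-line smt.random_seed=NNN):

```cpp
#include <iostream>
#include "z3++.h"

int main() {
    // Try a few arbitrary seeds and compare solver behavior.
    for (int seed : {1, 7, 42}) {
        z3::set_param("smt.random_seed", seed);
        z3::context ctx;
        z3::expr_vector assertions = ctx.parse_file("constraint-1015084_orig.smt2");
        z3::solver s(ctx);
        for (unsigned i = 0; i < assertions.size(); ++i)
            s.add(assertions[i]);
        std::cout << "seed " << seed << ": " << s.check() << std::endl;
    }
}
```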

It would be better if auto_config always found the best settings, but it does not. The example may be useful for figuring out which decision auto_config got wrong.

NikolajBjorner added a commit that referenced this issue Sep 13, 2019
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
@0152la (Contributor, Author) commented Sep 16, 2019

We've been experimenting with different seeds and flags and have observed a few more instances of such performance differences. We wanted to check that these are useful to report, as you suggested, to help improve Z3's decision making. If so, I can update this issue with further examples as we discover and validate them.

@NikolajBjorner (Contributor) commented:

Fluctuations within 10x are actually common, especially for satisfiable problems.
One takeaway from the example you uploaded is that Z3 spent most of its time during pre-processing, solving equations for many of the variables; in particular, it spent time in an occurs check. That part has nothing to do with random seeds. So it often helps to take a second look at user examples, because they exercise different features.
