Z3 Crashes #4533
You should communicate bug reproductions as SMT-LIB2 files. It is not practical for me to install and run some other tool to reproduce bugs, even if that other tool is called Boogie. Sometimes I can work with Python scripts or Java/C++ programs using the API, but this too adds a level of friction to reproducing bugs, so SMT-LIB2 files are much preferred. If a repro requires multiple runs, make a note of it so I can see. Overall, a crash-resilient approach to dealing with timeouts is to allow them, but when a timeout happens, reset the solver state completely. I recognize this is not the first line of user experience you want, but it makes the client behavior independent of bugs in the cancellation path. Small files that fit in the edit window should just be pasted in, using triple quotes.
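One way to realize the "reset the solver state completely after a timeout" advice is to run each query in a fresh z3 process with a hard wall-clock limit. A minimal sketch follows; the helper names are mine, and it assumes a `z3` binary on `PATH` and uses z3's `-T` hard-timeout flag (seconds):

```python
import shutil
import subprocess

def build_z3_cmd(smt2_path, hard_timeout_s):
    # -T is z3's hard wall-clock limit in seconds: when exceeded,
    # the process is terminated, so no in-process solver state can
    # leak into the next query.
    return ["z3", f"-T:{hard_timeout_s}", smt2_path]

def check_file(smt2_path, hard_timeout_s=10):
    """Run z3 on one SMT-LIB2 file in its own process.

    Returns z3's stdout ("sat", "unsat", "unknown", ...) on a
    normal run, or None when z3 is missing, times out, or crashes;
    the caller simply starts the next query from scratch.
    """
    if shutil.which("z3") is None:
        return None  # no z3 binary on PATH
    proc = subprocess.run(build_z3_cmd(smt2_path, hard_timeout_s),
                          capture_output=True, text=True)
    out = proc.stdout.strip()
    if proc.returncode != 0 or "timeout" in out:
        return None  # timed out or crashed: discard everything
    return out
```

Process isolation is heavier than reusing a solver object, but it makes the client immune to bugs in the cancellation path, which is exactly the trade-off described above.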
Yes, smtlib files appear to be the best way to communicate this; I was just asking where to put them. The one in question is quite large because it involves the entire Move language encoding generated from Boogie, so I submitted it into our depot and you can find it in my clone here. AFAICT, this does not involve any timeouts. We are seeing Z3 crashes in continuous integration every now and then, as well as on our laptops (I would guess in ~5% of the runs). The likelihood of experiencing this crash seems to have increased since we recently switched from a few-months-older version of Z3 to 4.8.8, in case this is helpful. Thanks for helping with this, and I hope you can reproduce it in debug mode!
It crashes immediately for me in debug mode on current master.
Awesome! I'm wondering what your release model is. Are you creating releases frequently? Otherwise we likely need to start versioning Z3 via a commit hash. We have CI tests running, and small changes in behavior can make some baseline/golden tests fail for many folks, so we need to ensure that everyone and the CI bot run the exact same version.
Releases are not too frequent. |
We are actually interested in the new arithmetic core (@junkil-park is looking at it) and hope that it solves some problems we have. On the other hand, we have high stability requirements, since our plan is to run the Move Prover/Boogie/Z3 in continuous integration for most Move smart contracts by late July. We also have some problems with QI, which we are currently trying to nail down. So taking up other changes from you frequently is a good thing. Let me talk with the team at the beginning of next week to figure out our best strategy. We could (a) work from your nightly or some weekly build (i.e., we tag one of your automatic daily builds as "green", based on our tests), or (b) use your next official release and, if there is some urgent issue, kindly ask for an accelerated release.
I've built from head and cannot reproduce the crash anymore (the test driver has now been running for 15 minutes, which is approximately 100,000 runs of the repro smtlib file). It was quite clearly not reliably reproducible on our side. Perhaps that is because we did not use the debug build, which may have extra sanity checks (valgrind and so on). It looks like we should have a way to run a debug version of Z3 to help diagnose such problems faster.
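For intermittent faults like this, a small stress loop that re-runs the repro file until a process dies abnormally is often enough. A minimal sketch, assuming a `z3` binary on `PATH` (the function name and file path are hypothetical):

```python
import shutil
import subprocess
import sys

def stress(smt2_path, runs=1000):
    """Re-run z3 on the same SMT-LIB2 file many times.

    On POSIX, a crash such as a segfault shows up as a negative
    returncode (the process was killed by a signal).  Returns the
    index of the failing run, or None if every run exits normally
    or z3 is not installed.
    """
    if shutil.which("z3") is None:
        return None  # nothing to stress without a z3 binary
    for i in range(runs):
        proc = subprocess.run(["z3", smt2_path], capture_output=True)
        if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV
            print(f"run {i}: z3 died with signal {-proc.returncode}",
                  file=sys.stderr)
            return i
    return None
```

If the crash rate really is around 5% per run, a few hundred iterations should surface it with high probability; a debug build of z3 in place of the release binary would catch assertion failures earlier.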
In the Libra Move Prover project, we are observing segmentation faults in Z3 (4.8.8) on smtlib files generated by Boogie (2.6.17.0). We were able to nail this down to specific smtlib verification problems that Boogie generates, which every now and then produce the crash.
What is the best way to pass this info to the Z3 team? I could attach the smtlib file to this issue, or create a PR which gives you a test to repro this. The repro requires the test to run many times, and I'm not sure whether there is already some test framework support for this kind of problem.
@shazqadeer @DavidLDill @emmazzz