
No theory of floating point, bitvectors and arrays and mixed performance? #577

Closed
delcypher opened this issue Apr 26, 2016 · 8 comments


@delcypher
Contributor

commented Apr 26, 2016

Consider the following queries (I believe they are technically equisatisfiable; for the purposes of my work I can treat them as equivalent).

Using floating point variables (execution time < 1 second)

; start Z3 query
; Emitted by klee::Z3SolverImpl::getConstraintLog()
; Rewritten by hand to not use arrays.

(set-logic QF_FPBV)
(declare-fun x() (_ FloatingPoint 11 53))
(assert (not (fp.isNaN x)))
(assert (not (fp.isInfinite x)))

(assert
  (fp.eq
    (fp.sub roundNearestTiesToEven
      (fp.mul roundNearestTiesToEven
        x
        ((_ to_fp 11 53) (_ bv4617315517961601024 64))
      )
      ((_ to_fp 11 53) (_ bv4621819117588971520 64))
    )
    ((_ to_fp 11 53) (_ bv0 64))
  )
)


(assert
  (not
    (fp.eq
      x
      ((_ to_fp 11 53) (_ bv4611686018427387904 64))
    )
  )
)
(check-sat)
; end Z3 query

Using arrays of bitvectors instead (execution time: ~5 seconds)

; start Z3 query
; Emitted by klee::Z3SolverImpl::getConstraintLog()
(set-info :status unknown)
(declare-fun x_0x2080d70 () (Array (_ BitVec 32) (_ BitVec 8)))
(assert
 (let ((?x9 (concat (select x_0x2080d70 (_ bv2 32)) (concat (select x_0x2080d70 (_ bv1 32)) (select x_0x2080d70 (_ bv0 32))))))
 (let ((?x29 (concat (select x_0x2080d70 (_ bv4 32)) (concat (select x_0x2080d70 (_ bv3 32)) ?x9))))
 (let ((?x12 (concat (select x_0x2080d70 (_ bv6 32)) (concat (select x_0x2080d70 (_ bv5 32)) ?x29))))
 (let ((?x30 ((_ to_fp 11 53) (concat (select x_0x2080d70 (_ bv7 32)) ?x12))))
 (not (fp.isNaN ?x30)))))))
(assert
 (let ((?x9 (concat (select x_0x2080d70 (_ bv2 32)) (concat (select x_0x2080d70 (_ bv1 32)) (select x_0x2080d70 (_ bv0 32))))))
 (let ((?x29 (concat (select x_0x2080d70 (_ bv4 32)) (concat (select x_0x2080d70 (_ bv3 32)) ?x9))))
 (let ((?x12 (concat (select x_0x2080d70 (_ bv6 32)) (concat (select x_0x2080d70 (_ bv5 32)) ?x29))))
 (let ((?x30 ((_ to_fp 11 53) (concat (select x_0x2080d70 (_ bv7 32)) ?x12))))
 (not (fp.isInfinite ?x30)))))))
(assert
 (let ((?x9 (concat (select x_0x2080d70 (_ bv2 32)) (concat (select x_0x2080d70 (_ bv1 32)) (select x_0x2080d70 (_ bv0 32))))))
 (let ((?x29 (concat (select x_0x2080d70 (_ bv4 32)) (concat (select x_0x2080d70 (_ bv3 32)) ?x9))))
 (let ((?x12 (concat (select x_0x2080d70 (_ bv6 32)) (concat (select x_0x2080d70 (_ bv5 32)) ?x29))))
 (let ((?x30 ((_ to_fp 11 53) (concat (select x_0x2080d70 (_ bv7 32)) ?x12))))
 (let ((?x143 (fp.sub roundNearestTiesToEven (fp.mul roundNearestTiesToEven ?x30 ((_ to_fp 11 53) (_ bv4617315517961601024 64))) ((_ to_fp 11 53) (_ bv4621819117588971520 64)))))
 (fp.eq ?x143 ((_ to_fp 11 53) (_ bv0 64)))))))))
(assert
 (let ((?x9 (concat (select x_0x2080d70 (_ bv2 32)) (concat (select x_0x2080d70 (_ bv1 32)) (select x_0x2080d70 (_ bv0 32))))))
(let ((?x29 (concat (select x_0x2080d70 (_ bv4 32)) (concat (select x_0x2080d70 (_ bv3 32)) ?x9))))
(let ((?x12 (concat (select x_0x2080d70 (_ bv6 32)) (concat (select x_0x2080d70 (_ bv5 32)) ?x29))))
(let ((?x30 ((_ to_fp 11 53) (concat (select x_0x2080d70 (_ bv7 32)) ?x12))))
(not (fp.eq ?x30 ((_ to_fp 11 53) (_ bv4611686018427387904 64)))))))))
(check-sat)
; end Z3 query

There is a noticeable performance difference between the two; does anyone know why?

I tried looking for an "arrays of bitvectors and floating-point arithmetic" logic in Z3's code so I could set it, but I couldn't find one. Is there one?

@NikolajBjorner

Contributor

commented Apr 26, 2016

Good example: Z3 uses the more efficient SAT core in the first case. In the second case, it does not realize that your uses of "select" are benign and can be ackermannized (@MikolasJanota), so it falls back to the general SMT core.

@delcypher

Contributor Author

commented Apr 26, 2016

@NikolajBjorner Thanks for the informative answer. I'm not familiar with ackermannization. Is that "Ackermann’s reduction" in http://research.microsoft.com/en-us/um/redmond/projects/z3/smt07-slides.pdf ?

Also I take it that there is no QF_FPABV logic (or something with the same meaning) then?

@NikolajBjorner

Contributor

commented Apr 26, 2016

Yes, I mean Ackermann's reduction: every occurrence of (select x0x23234 (_ bv2 32)) can be replaced by a fresh constant x2, (select x0x23234 (_ bv1 32)) by x1, and so on. This preserves satisfiability and, more usefully, turns the formula into the syntactic fragment of QF_FP that Z3 can detect.
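To illustrate (my own sketch, not part of the thread): applying this reduction to the array query above, with hypothetical fresh constants b0..b7 standing in for the eight selects, gives a formula in the fragment Z3 can detect. Since all the select indices here are distinct bitvector constants, no congruence constraints between the fresh constants are needed, which is what makes the selects "benign":

```smt2
; Sketch of the Ackermannized second query. The names b0..b7 are
; hypothetical fresh constants, one per distinct (select x_0x2080d70 (_ bvK 32)).
(declare-fun b0 () (_ BitVec 8))
(declare-fun b1 () (_ BitVec 8))
(declare-fun b2 () (_ BitVec 8))
(declare-fun b3 () (_ BitVec 8))
(declare-fun b4 () (_ BitVec 8))
(declare-fun b5 () (_ BitVec 8))
(declare-fun b6 () (_ BitVec 8))
(declare-fun b7 () (_ BitVec 8))
; Reassemble the 64-bit pattern exactly as the original concat chain does
; (b7 is the most significant byte) and view it as a double.
(define-fun x () (_ FloatingPoint 11 53)
  ((_ to_fp 11 53)
    (concat b7 (concat b6 (concat b5 (concat b4
      (concat b3 (concat b2 (concat b1 b0)))))))))
(assert (not (fp.isNaN x)))
(assert (not (fp.isInfinite x)))
; ... remaining assertions as in the first query ...
(check-sat)
```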

@delcypher

Contributor Author

commented Apr 26, 2016

@NikolajBjorner Okay. Thanks for clarifying.

@wintersteiger

Member

commented Apr 26, 2016

There is no dedicated tactic for QF_FPABV yet because that combination hasn't appeared in benchmarks. This means Z3 falls back to the most general solver, which supports everything (the 'smt' tactic). When you enable verbose output, e.g., -v:10, you'll see that it says

(simplifier ... )
(smt ...)

While for the first query it says

...
(fpa2bv ...
...
(ackermannize ...
...
(sat-....

If it's the floats that break things, you can rewrite them into bit-vectors up front by rolling your own custom tactic, e.g., by replacing (check-sat) with

(check-sat-using (then simplify fpa2bv qfaufbv))

(where qfaufbv is whatever tactic you were using before the addition of floats).

We also have recently added tactics that rewrite bit-vector arrays into bit-vector functions which then can be ackermannized, so a first draft QF_FPABV tactic for your type of problem could be as follows:

(check-sat-using (then simplify
                 fpa2bv
                 bvarray2uf
                 ackermannize_bv
                 (using-params simplify :elim_and true)
                 bit-blast sat))

which solves the problem even faster than the original FP query was solved.

@delcypher

Contributor Author

commented Apr 26, 2016

@wintersteiger That's cool! Clearly I need to invest some time in learning how to use the different tactics. Your custom tactic does indeed solve the problem faster :)

A few questions about it...

  • What does (then ...) mean here? Looking at Z3's output when given (help-tactic) I see (and-then <tactic>+) and (par-then <tactic1> <tactic2>). Would I be right in presuming that (then ...) means (and-then ...)?
  • Is there anything significant about where you syntactically placed (using-params simplify :elim_and true)?
  • Why did you use simplify here with :elim_and, which is false by default?
  • The output of (help-tactic) has (or-else). What does it mean for a tactic to fail? For example, for simplify, is making no simplifications a failure? Similarly, do the other tactics have different notions of failure?
@wintersteiger

Member

commented Apr 28, 2016

  • Yes, (then ...) is the same as (and-then ...).
  • Yes, (using-params ...) takes one tactic followed by one or more parameter name/value pairs; in other words, it runs simplify with the parameter elim_and set to true.
  • The bit-blast tactic does not support everything; it relies on the simplifier to get rid of some functions first, which is why we run simplify just before bit-blast. The additional option elim_and is enabled to make sure the simplifier also rewrites and-expressions.
  • A "failure" is a proper error. The "fail-if-undecided" tactic, for example, will report an error if a goal is not trivially sat/unsat. Note that in general tactics do not "solve" a problem or "determine satisfiability"; instead, they produce a set of goals from one input goal. Some tactics are guaranteed to always produce trivial result goals (or errors), but this is not true for all of them.
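As an illustration of that failure semantics (a sketch of my own, not from the thread): (or-else ...) tries its arguments in order, moving to the next one when the current one reports an error, so fail-if-undecided can be used to trigger the fallback:

```smt2
; Hypothetical example: try a cheap pipeline first; if simplify alone
; does not reduce the goal to trivially sat/unsat, fail-if-undecided
; reports an error and or-else falls back to the general smt tactic.
(check-sat-using
  (or-else (then simplify fail-if-undecided)
           smt))
```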
@wintersteiger

Member

commented Jun 2, 2016

I'm closing this as all questions have been answered. If you hit more performance problems involving floats, please do let us know!
