preprocessing loses precision #65
Comments
Yes, this is item 7 that I mentioned in the chat. It also affects the ... I will work on this today.
I can't believe you are working on Christmas.
Christmas holidays are known to be the most productive time for geeks (e.g., Guido van Rossum created Python) :-)
Free parking!
Apart from precise representation, the deeper problem is that when there isn't enough precision (which can always happen), the solver should use the right rounding mode to ensure overapproximation, not round to nearest.
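As a rough sketch of the directed-rounding idea (illustrative only, not dReal's actual parsing code): when a decimal literal is not exactly representable, the parser can hand the solver a one-ulp-wide double interval around it instead of a single nearest-rounded value, so the enclosure is never violated.

#include <cmath>
#include <cstdio>
#include <cstdlib>

// Sketch only: build a [lo, hi] double interval that is guaranteed to
// contain the exact decimal value of a literal.  strtod rounds to nearest,
// so we conservatively widen by one ulp on each side; a real implementation
// would widen only when the conversion is inexact.
struct Enclosure { double lo, hi; };

Enclosure enclose_decimal(const char* s) {
  const double d = std::strtod(s, nullptr);   // nearest-rounded
  return { std::nextafter(d, -HUGE_VAL),      // rounded down by one ulp
           std::nextafter(d, +HUGE_VAL) };    // rounded up by one ulp
}

int main() {
  const Enclosure e = enclose_decimal("0.49999952316284185");
  std::printf("lo = %.17g\nhi = %.17g\n", e.lo, e.hi);
  // Both bounds stay strictly below 0.5, so (>= omega0 0.49999952316284185)
  // is never weakened into (>= omega0 0.5).
}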
I can't believe your wife lets you.
@scungao, can you check the version that you're using?
In the latest version, I get SAT for the example, with the following log:
The latest released version is still 2.14.8, which is what people are still using and the version they reported this bug against. Do make a formal release of 2.14.12. However, the problem of failing to ensure overapproximation is not solved: I simply added a couple of 9s to make the following formula, and it's unsat in the newest version.
In fact, even with the original formula, which seems to get the correct answer, something wrong is still going on. I changed 0.5 to 0.6 to make sure we don't confuse the constants, i.e.:
and the trace becomes:
You can see how the constants with multiple digits are simply treated as 0.5, which violates overapproximation.
I think there are two approaches to solving the problem.
To me, (plan A) is the way to go. What do you think?
Yes, A is the way to go. We only need to do this when, at parsing time, we realize that it's possible to lose digits from a particular constant.
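A minimal sketch of what that parsing-time check could look like (hypothetical helper, not the actual dReal parser): parse the literal, print the resulting double back with full precision, and fall back to the careful treatment whenever the round trip does not reproduce the literal.

#include <cstdio>
#include <cstdlib>
#include <string>

// Heuristic sketch (not dReal's code): returns true when converting the
// literal to a double and printing it back does not reproduce the original
// spelling, i.e. when digits could be lost by the nearest-rounded conversion.
bool maybe_loses_digits(const std::string& literal) {
  const double d = std::strtod(literal.c_str(), nullptr);
  char buf[64];
  std::snprintf(buf, sizeof(buf), "%.17g", d);   // 17 digits round-trip a double
  // Equivalent spellings such as "0.50" also fail this comparison; that only
  // sends them through the careful path, which is sound, just slower.
  return literal != buf;
}

int main() {
  std::printf("%d\n", maybe_loses_digits("0.5"));                  // 0: exact
  std::printf("%d\n", maybe_loses_digits("0.49999952316284185"));  // 1: inexact
}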
Original issue
dReal returns unsat on the following formula:
(set-logic QF_NRA)
(declare-fun omega0 () Real)
(assert (>= omega0 0.49999952316284185))
(assert (<= omega0 0.4999997615814209))
(assert (not (>= omega0 0.5)))
(check-sat)
(exit)
and it's clearly wrong.
What happens is that when constants like 0.49999952316284185 are passed from preprocessing to solving, they get blurred into 0.5, which is horrible. What's funny is that version 2.14.5 handles this formula correctly -- I found that out because the webpage is still using that version. So this is a second problem: we are not updating the version used on the webpage.
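For what it's worth, one way a constant can get blurred into 0.5 like this (just a guess at the mechanism, not a confirmed diagnosis of the preprocessing code) is a print-and-reparse step that uses the default six significant digits:

#include <iostream>
#include <sstream>

int main() {
  const double c = 0.49999952316284185;

  // Default stream precision is 6 significant digits, so the constant
  // comes back as the string "0.5" -- exactly the blurring described above.
  std::ostringstream lossy;
  lossy << c;
  std::cout << "default precision: " << lossy.str() << "\n";  // prints 0.5

  // Requesting 17 significant digits round-trips the double faithfully.
  std::ostringstream faithful;
  faithful.precision(17);
  faithful << c;
  std::cout << "17 digits: " << faithful.str() << "\n";  // 0.4999995231628418
}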