We've been using Z3Py to solve an instance of a vertex cover problem. The vertex cover instance is moderately sized and it can be solved by Z3 in under 2 minutes on a single core. However, Z3Py takes a massive amount of time to load the instance when we specify it dynamically.
At a high level, our approach looks something like this:
from z3 import Bool, Or, Optimize

# Assume edges is a list of (x, y) tuples and size is the number of vertices
optimize = Optimize()
nodes = [Bool(i) for i in range(size)]
for x, y in edges:
    optimize.add(Or(nodes[x], nodes[y]))
...
Note that we actually add multiple Or clauses at once in each call to optimize.add (i.e. the code above is just for illustration).
A profile for this is given below. You'll see that we spend a great deal of time just reading in the clauses:
If we instead serialize the model to an SMT-LIB string and then read it back in, this overhead almost entirely disappears:
This leads to around a factor-of-6 reduction in wall-clock time.
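One way to exploit this observation is to skip z3py object construction entirely on the loading side: build the SMT-LIB 2 text with plain string operations and hand it to the solver in a single call (e.g. via Optimize.from_string, which wraps the same parser). The helper below is a hypothetical sketch of that idea for the vertex cover encoding described above; the names vertex_cover_smt2 and the n-prefixed variable names are my own, not from the original report.

```python
# Hypothetical sketch: emit a vertex-cover instance as SMT-LIB 2 text
# using plain string operations, bypassing per-object z3py overhead.
# The resulting string can then be loaded in one call, e.g. with
# z3.Optimize().from_string(text).

def vertex_cover_smt2(size, edges):
    # One Boolean per vertex: n0 ... n{size-1}.
    lines = [f"(declare-const n{i} Bool)" for i in range(size)]
    # Each edge (x, y) must be covered: at least one endpoint is chosen.
    lines += [f"(assert (or n{x} n{y}))" for x, y in edges]
    # Minimize the number of chosen vertices.
    terms = " ".join(f"(ite n{i} 1 0)" for i in range(size))
    lines.append(f"(minimize (+ {terms}))")
    return "\n".join(lines)

smt2 = vertex_cover_smt2(3, [(0, 1), (1, 2)])
print(smt2.splitlines()[0])  # → (declare-const n0 Bool)
```

The string construction here stays in fast CPython string code, so the per-clause cost of creating z3py wrapper objects is paid only once, inside the native parser.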
My question is: is there a better way to use the Python bindings that (a) lets us use the API without serialising to an SMT-LIB string and (b) avoids this slowdown? Looking at z3.py it doesn't look like it, but I'd be curious to know if there is.
The Python overhead comprises both debug asserts, which can be mitigated by running Python with assertions disabled, and Python's object creation, which is quite a bit of overhead by itself. There is a little cottage industry around tuning Python scripts for the latter.
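If I read z3.py correctly, its internal sanity checks are ordinary assert statements gated on __debug__, so running the interpreter with -O compiles them out. The snippet below demonstrates the underlying Python mechanism (it is not z3-specific):

```python
import subprocess
import sys

# Python's -O flag compiles out `assert` statements (and sets __debug__
# to False), which is what disables assert-based debug checks such as
# the ones in z3.py.
code = "assert False, 'never runs under -O'; print('asserts disabled')"
result = subprocess.run(
    [sys.executable, "-O", "-c", code],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # → asserts disabled
```

Without -O, the same one-liner would raise AssertionError before reaching the print.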