MiniZinc 2.5.3 flattening performance issue #448

Closed
informarte opened this issue Jan 12, 2021 · 1 comment

Comments

@informarte

I am experiencing a flattening performance issue with a model for vehicle routing.

cvrp.tar.gz contains everything needed to reproduce the issue: a model and three instances of different sizes. The model is split into three parts: cvrp_cp, cvrp and vrp. These parts can be thought of as a class hierarchy, with cvrp_cp a subclass of cvrp and cvrp a subclass of vrp, where the superclasses declare predicates and constants that the subclasses define. In particular, vrp declares and uses the predicate delivery, which is implemented by cvrp_cp.
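For orientation, the layout is roughly the following sketch; apart from the predicate name delivery and the TravelTimes matrix, the signature and file comments are assumptions rather than excerpts from the archive:

```minizinc
% vrp.mzn -- the "superclass": declares the interface and uses it
% (the exact signature of delivery is assumed here)
predicate delivery(array[int, int] of int: travelTimes);

% cvrp.mzn -- includes vrp.mzn and adds capacity-related data

% cvrp_cp.mzn -- the "subclass": supplies the actual implementation
% include "cvrp.mzn";
% predicate delivery(array[int, int] of int: travelTimes) = ... ;
```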

Reproduction steps:

```sh
tar xzf cvrp.tar.gz
cd cvrp
time minizinc -c cvrp_cp.mzn X-n469-k138.dzn
```

On my old i5 laptop, the runtime for this instance is about 40 seconds.

I managed to track the issue down to the TravelTimes argument (a matrix) of delivery. When I remove this argument (and let delivery take TravelTimes from its environment instead), the runtime drops by an order of magnitude:

| Instance | delivery with TravelTimes | delivery without TravelTimes |
| --- | --- | --- |
| X-n101-k25 | 0.5s | 0.3s |
| X-n469-k138 | 40s | 5s |
| X-n1001-k43 | 103s | 9s |

You'll find the relevant code at the end of vrp.mzn; look out for the comments SLOW and FAST.
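For reference, the two variants boil down to the following pattern. This is a simplified sketch: the coordinates, the successor variables and the predicate body are placeholders, and only the delivery/TravelTimes naming comes from the model.

```minizinc
int: n;
set of int: Nodes = 1..n;
array[Nodes] of int: X;   % placeholder node coordinates
array[Nodes] of int: Y;

% Quadratic comprehension: expensive to evaluate for large instances.
array[Nodes, Nodes] of int: TravelTimes =
    array2d(Nodes, Nodes,
            [abs(X[i] - X[j]) + abs(Y[i] - Y[j]) | i, j in Nodes]);

array[Nodes] of var Nodes: successor;

% SLOW: the matrix is passed into the predicate as an argument.
predicate delivery(array[int, int] of int: travelTimes,
                   array[int] of var int: succ) =
    forall (i in index_set(succ)) (travelTimes[i, succ[i]] >= 0);  % placeholder body

constraint delivery(TravelTimes, successor);

% FAST: the predicate reads TravelTimes directly from the model scope.
% predicate delivery(array[int] of var int: succ) =
%     forall (i in index_set(succ)) (TravelTimes[i, succ[i]] >= 0);
% constraint delivery(successor);

solve satisfy;
```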

(Btw, the two variants of the model yield identical FlatZinc output.)

@Dekker1
Member

Dekker1 commented Jan 12, 2021

Just to elaborate on the problem: when a computationally expensive comprehension (TravelTimes) is used as an argument to a custom predicate (delivery), there seems to be a big overhead in flattening time compared to when the comprehension is only stored globally in the environment.

It sounds as if the comprehension is evaluated multiple times when passed in as an argument.
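If so, a stripped-down model along these lines might be enough to check that hypothesis in isolation by timing minizinc -c for growing n (all identifiers here are hypothetical and unrelated to the CVRP model):

```minizinc
% repro.mzn -- e.g.: minizinc -c repro.mzn -D "n=500;"
int: n;

% Expensive comprehension building an n x n matrix.
array[1..n, 1..n] of int: M =
    array2d(1..n, 1..n, [(i * j) mod 97 | i, j in 1..n]);

array[1..n] of var 1..n: x;

% Variant A: the matrix is passed as a predicate argument.
predicate p_arg(array[int, int] of int: m, array[int] of var int: y) =
    forall (i in index_set(y)) (m[i, y[i]] >= 0);

% Variant B: the predicate refers to M in the global scope.
predicate p_global(array[int] of var int: y) =
    forall (i in index_set(y)) (M[i, y[i]] >= 0);

constraint p_arg(M, x);      % compare against: constraint p_global(x);

solve satisfy;
```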
