183 revisit linearize transformation #221
Conversation
cpmpy/transformations/linearize.py
Outdated
elif lhs.name == "sum":
    weights = [-1 if isinstance(a, NegBoolView) else 1 for a in lhs.args]
    lhs = sum(w * a._bv if isinstance(a, NegBoolView) else w * a for w, a in zip(weights, lhs.args))
    rhs -= sum(w < 0 for w in weights)
a bit obscure... isn't a generic function possible that takes w_lhs, w_exp, rhs as input and outputs the same but without NegBoolViews?
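Such a generic helper could be based on the identity ~b == 1 - b, so that w*~b == w - w*b: each negated literal flips the sign of its weight and moves a constant to the right-hand side. A minimal plain-Python sketch (literals modeled as hypothetical (name, negated) pairs, not CPMpy's NegBoolView objects):

```python
# Hypothetical sketch, NOT the CPMpy API: eliminate negated boolean
# literals from a weighted sum using the identity ~b == 1 - b, so that
#   w*~b  ==  w - w*b.
# Literals are modeled as (name, negated) pairs for illustration only.

def remove_negations(weights, literals, rhs):
    """Rewrite sum(w_i * lit_i) <op> rhs so that no literal is negated.

    Each negated literal ~b contributes w*(1 - b) = w - w*b, so its
    weight flips sign and the constant w moves to the right-hand side.
    """
    new_weights, new_literals = [], []
    for w, (name, negated) in zip(weights, literals):
        if negated:
            new_weights.append(-w)
            new_literals.append((name, False))
            rhs -= w  # the constant part w moves across the comparison
        else:
            new_weights.append(w)
            new_literals.append((name, negated))
    return new_weights, new_literals, rhs

# Example: 2*x + 3*~y <= 4  becomes  2*x - 3*y <= 1
print(remove_negations([2, 3], [("x", False), ("y", True)], 4))
```

With unit weights this reduces to exactly the rewrite in the quoted snippet: each NegBoolView gets weight -1 and the right-hand side drops by the number of negated literals.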
cpmpy/transformations/linearize.py
Outdated
# other global constraints
elif isinstance(lhs, GlobalConstraint) and hasattr(lhs, "decompose_comparison"):
    decomp = lhs.decompose_comparison(cpm_expr.name, rhs)
    return linearize_constraint(flatten_constraint(decomp), supported=supported)
Do we not assume everything unsupported is already flattened? Otherwise, could, e.g., a sum expression also have nested terms? In essence, what is the input expected by the transformation?
We expect constraints to be in flat normal form or a subset thereof (see other comment).
cpmpy/transformations/linearize.py
Outdated
new_rhs, cons = get_or_make_var(rhs + 1)
return [lhs >= new_rhs] + cons
if cpm_expr.name == "!=":
    # Big-M implementation, ensure no new reifications are added
- a new (smart) reification is added, see z=boolvar().
- we count on solvers to do their own Big-M by allowing the BoolVar -> LinExpr output. We do this because we hope they can do it more efficiently. The same assumption should hold here, so we should just translate to that type of constraint.
- so just add z -> linearize(lhs > rhs) and ~z -> linearize(lhs < rhs)?
Also, this linearization will not work when the LinExpr is nested, e.g., p -> x != y. Here we need a separate variable for both reifications, I think.
It does work when nested, but then we really need the Big-M transformation to avoid recursive reification.
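The claim that the Big-M variant still works when nested under an implication can also be checked by enumeration (plain Python, small domains; M is chosen here as an upper bound on |x - y| + 1):

```python
# Brute-force check (plain Python) that the Big-M encoding of x != y
# still works when nested under an implication p -> (x != y): for every
# p, x, y some z in {0, 1} satisfies
#     p -> (x - M*z <= y - 1  and  x - M*z >= y - M + 1)
# exactly when p -> (x != y) holds, so no recursive reification is needed.

LB, UB = -3, 3
M = (UB - LB) + 1                     # upper bound of |x - y| + 1

def nested_ok(p, x, y):
    return any((not p) or (x - M*z <= y - 1 and x - M*z >= y - M + 1)
               for z in (0, 1))

for p in (False, True):
    for x in range(LB, UB + 1):
        for y in range(LB, UB + 1):
            assert nested_ok(p, x, y) == ((not p) or x != y)
print("nested Big-M encoding checked on [-3, 3]^2")
```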
This is a very tricky case indeed...
Either we use the z -> linearize(lhs > rhs) & ~z -> linearize(lhs < rhs) approach, but it will indeed mess up when the LinExpr is already an implication. So then we need the Big-M implementation anyway, I think.
I'm okay with adding an argument to indicate we are linearizing the rhs of an implication and switching the linearization of != accordingly.
Conclusion after discussion: the current approach seems fine, except that we want to keep the newly introduced reification when not nested (as there is no risk of infinite recursive reifications).
Looks like it will be a nice improvement! If we want to be really thorough, we could add a check that tests whether the output adheres to the predefined grammar, and run it during testing. We can use that same method to check the input of other transformations too.
cpmpy/transformations/linearize.py
Outdated
# Calculate bounds of M = |lhs - rhs| + 1
_, M = (abs(lhs - rhs) + 1).get_bounds()

cons = [lhs - M*z <= rhs-1, lhs - M*z >= rhs-M+1]
Took a while to understand, but I think it is correct. Please add the following: M is chosen so that
lhs - rhs + 1 <= M*z
rhs - lhs + 1 <= M*~z
holds.
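The correctness of the quoted pair of constraints can be verified numerically in plain Python: with M an upper bound on |lhs - rhs| + 1, some z in {0, 1} satisfies both inequalities exactly when lhs != rhs (z = 1 selects lhs > rhs, z = 0 selects lhs < rhs).

```python
# Numeric sanity check of the Big-M encoding quoted above:
#     cons = [lhs - M*z <= rhs - 1,  lhs - M*z >= rhs - M + 1]
# Some z in {0, 1} satisfies both inequalities iff lhs != rhs.

LB, UB = -5, 5
M = (UB - LB) + 1                     # upper bound of |lhs - rhs| + 1

def satisfiable(lhs, rhs):
    return any(lhs - M*z <= rhs - 1 and lhs - M*z >= rhs - M + 1
               for z in (0, 1))

for lhs in range(LB, UB + 1):
    for rhs in range(LB, UB + 1):
        assert satisfiable(lhs, rhs) == (lhs != rhs)
print("Big-M encoding of != verified on [-5, 5]^2")
```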
Also, maybe rewrite as wsum to avoid introducing the subtractions? Or are these also rewritten as wsum already?
Implemented the suggestions and also added the optimization of using two different big-M constants.
Ok, let me know what you think :)
…ze-transformation
…ze-transformation # Conflicts: # cpmpy/transformations/linearize.py
First version of the new linearization transformation.
Work in progress and some tests are still failing! Mainly due to the != linearization. For this operator, we can either do the traditional big-M implementation or use indicator constraints.
For the latter, we risk introducing unsupported constraints depending on the lhs and rhs of the original != constraint... To be discussed.