183 revisit linearize transformation #221

Merged: 48 commits into master, May 15, 2023

Conversation

IgnaceBleukx (Collaborator):
First version of the new linearization transformation.

Work in progress, and some tests are still failing, mainly due to the linearization of !=.
For this operator, we can either do the traditional big-M implementation or use indicator constraints.
For the latter, we risk introducing unsupported constraints depending on the lhs and rhs of the original != constraint...
To be discussed.
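For reference, a minimal sketch of the two options on a toy x != y (variable names and the constant M = 11 are illustrative, not the PR's code):

from cpmpy import intvar, boolvar, Model

x, y = intvar(0, 10, shape=2)   # so |x - y| + 1 <= 11
z = boolvar()
M = 11

# (1) traditional big-M: z selects which strict inequality is enforced
big_m = [x - M*z <= y - 1,       # z = 0 forces x < y
         x - M*z >= y - M + 1]   # z = 1 forces x > y

# (2) indicator constraints: natively supported by e.g. Gurobi,
#     but only if the consequence stays a supported linear constraint
indicator = [z.implies(x > y), (~z).implies(x < y)]

# either encoding makes x == y infeasible
assert not Model(big_m + [x == 3, y == 3]).solve()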

@IgnaceBleukx IgnaceBleukx requested review from tias and JoD February 3, 2023 11:37
@IgnaceBleukx IgnaceBleukx linked an issue Feb 3, 2023 that may be closed by this pull request
elif lhs.name == "sum":
    # replace every negated boolvar ~b by (1 - b): weight -1 on the positive
    # view b, and move the constant +1 to the right-hand side
    weights = [-1 if isinstance(a, NegBoolView) else 1 for a in lhs.args]
    lhs = sum(w * a._bv if isinstance(a, NegBoolView) else w * a for w, a in zip(weights, lhs.args))
    rhs -= sum(w < 0 for w in weights)
Collaborator:
A bit obscure... isn't a generic function possible that takes w_lhs, w_exp, rhs as input and outputs the same but without NegBoolViews?
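Something along these lines, perhaps (name and signature hypothetical, just to sketch the idea): substitute ~b = 1 - b, flip the weight, and fold the constant into the rhs.

from cpmpy.expressions.variables import NegBoolView

def strip_negboolviews(weights, exprs, rhs):
    # hypothetical helper: return an equivalent weighted sum without NegBoolViews
    new_w, new_e = [], []
    for w, e in zip(weights, exprs):
        if isinstance(e, NegBoolView):
            new_w.append(-w)       # w * ~b  ==  w - w*b
            new_e.append(e._bv)
            rhs -= w               # move the constant w to the rhs
        else:
            new_w.append(w)
            new_e.append(e)
    return new_w, new_e, rhs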

# other global constraints: decompose the comparison over the global,
# then flatten and linearize the decomposition
elif isinstance(lhs, GlobalConstraint) and hasattr(lhs, "decompose_comparison"):
    decomp = lhs.decompose_comparison(cpm_expr.name, rhs)
    return linearize_constraint(flatten_constraint(decomp), supported=supported)
@JoD (Collaborator), Feb 7, 2023:
Do we not assume everything unsupported is already flattened? Otherwise, could, e.g., a sum expression also have nested terms?

In essence, what is the input expected by the transformation?

IgnaceBleukx (Author):

We expect constraints to be in flat normal form or a subset thereof (see the other comment).
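For example (illustrative; the exact flat output may differ in detail):

from cpmpy import intvar, boolvar
from cpmpy.transformations.flatten_model import flatten_constraint

x, y, z = intvar(0, 10, shape=3)
b = boolvar()

# a nested expression like (x + 2*y > z) | b is flattened into
# flat normal form: (w)sum comparisons against a single var/constant,
# plus (half-)reifications with a boolvar on one side
for c in flatten_constraint((x + 2*y > z) | b):
    print(c)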

new_rhs, cons = get_or_make_var(rhs + 1)
return [lhs >= new_rhs] + cons
if cpm_expr.name == "!=":
# Big-M implementation, ensure no new reifications are added
Collaborator:
  • A new (smart) reification is added, see z = boolvar().
  • We count on solvers to do their own big-M by allowing the BoolVar -> LinExpr output; we do this because we hope they can do it more efficiently. The same assumption should hold here, so we should just translate to that type of constraint.
  • So just add z -> linearize(lhs > rhs) and ~z -> linearize(lhs < rhs)?

@JoD (Collaborator), Feb 7, 2023:

Also, this linearization will not work when the LinExpr is nested, e.g., p -> x != y; here we need a separate variable for both reifications, I think.
(It does work when nested, but then we really need the big-M transformation to avoid recursive reification.)

IgnaceBleukx (Author):
This is a very tricky case indeed...
Either we use the z -> linearize(lhs > rhs) & ~z -> linearize(lhs < rhs) approach, but that will indeed break when the LinExpr is already the rhs of an implication, so then we need the big-M implementation anyway, I think.
I'm okay with adding an argument to indicate that we are linearizing the rhs of an implication and switching the linearization of != accordingly.

Collaborator:
Conclusion after discussion: current approach seems fine, except that we want to keep the newly introduced reification when not nested (as there is no risk of infinite recursive reifications).

@JoD (Collaborator) commented, Feb 7, 2023:
Looks like it will be a nice improvement!

If we want to be really thorough, we could add a check that tests whether the output adheres to the predefined grammar, and run it during testing. We can use that same method to check the input of other transformations, e.g., only_positive_bv, and be more confident that the code does what it is supposed to do.
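A first cut of such a check could look like this (helper name and the exact grammar rules are hypothetical; the real output grammar of linearize_constraint may allow more forms):

from cpmpy.expressions.core import Comparison, Operator
from cpmpy.expressions.variables import _BoolVarImpl
from cpmpy.expressions.utils import is_num

def assert_linear_form(constraints):
    # hypothetical grammar check: var, (w)sum <op> const, or boolvar -> (w)sum <op> const
    def is_lin_comp(e):
        return (isinstance(e, Comparison) and e.name in ("<=", ">=", "==")
                and isinstance(e.args[0], Operator) and e.args[0].name in ("sum", "wsum")
                and is_num(e.args[1]))
    for c in constraints:
        if isinstance(c, _BoolVarImpl):      # bare boolean literal
            continue
        if is_lin_comp(c):                   # linear comparison
            continue
        if (isinstance(c, Operator) and c.name == "->"
                and isinstance(c.args[0], _BoolVarImpl) and is_lin_comp(c.args[1])):
            continue                         # half-reified linear comparison
        raise AssertionError(f"not in linear normal form: {c}")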

# M = upper bound of |lhs - rhs| + 1
_, M = (abs(lhs - rhs) + 1).get_bounds()

# z = 0 makes the first constraint enforce lhs <= rhs - 1 (lhs < rhs),
# z = 1 makes the second enforce lhs >= rhs + 1 (lhs > rhs);
# the other constraint is vacuous in each case because M is large enough
cons = [lhs - M*z <= rhs-1, lhs - M*z >= rhs-M+1]
Collaborator:
Took a while to understand, but I think it is correct. Please add the following:

M is chosen so that
lhs - rhs + 1 <= M*z
rhs - lhs + 1 <= M*~z
holds.
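Spelling out the case split (a sketch, with M an upper bound on |lhs - rhs| + 1):

$$
\begin{aligned}
z = 1:&\quad lhs - M \le rhs - 1 \;\text{(vacuous)}, & lhs - M \ge rhs - M + 1 &\iff lhs > rhs,\\
z = 0:&\quad lhs \le rhs - 1 \iff lhs < rhs, & lhs \ge rhs - M + 1 &\;\text{(vacuous)},
\end{aligned}
$$

since M - 1 is an upper bound on both lhs - rhs and rhs - lhs. So z = 1 forces lhs > rhs and z = 0 forces lhs < rhs, i.e. lhs != rhs either way.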

Collaborator:

Also, maybe rewrite as wsum to avoid introducing the subtractions? Or are these also rewritten as wsum already?

IgnaceBleukx (Author):

Implemented the suggestions and also added the optimization of using two different big-M constants.
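Possibly something like this (a sketch of the idea, not necessarily the committed code): each constraint only needs to be vacuous in one direction, so a tighter constant can be derived per constraint.

# M1 only needs to make the first constraint vacuous when z = 1,
# M2 only the second one when z = 0
_, M1 = (lhs - rhs + 1).get_bounds()
_, M2 = (rhs - lhs + 1).get_bounds()
cons = [lhs - M1*z <= rhs - 1,        # z = 0  ->  lhs < rhs
        lhs - M2*z >= rhs - M2 + 1]   # z = 1  ->  lhs > rhs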

@JoD (Collaborator) left a comment:

Ok, let me know what you think :)

@IgnaceBleukx IgnaceBleukx merged commit d465692 into master May 15, 2023
Successfully merging this pull request may close these issues: Revisit linearize transformation.
4 participants