Make the Largest-First algorithm pay for outputs collectively. #73
Conversation
This change fixes a subtle problem with our implementation of Largest-First.

== Old implementation (before applying this change)

Rather than considering the total collective value of all outputs when attempting to find a selection of inputs, the old implementation considered each output independently, and for each output, attempted to find a selection of inputs to cover just that output.

This created an unnecessary cardinality restriction, namely that a single unique input could only be used to pay for at most one unique output, which meant that any set of n outputs could only be paid for if there were at least n inputs available. Under this restriction, the algorithm could fail even if the total value of inputs was sufficient to cover the required outputs.

== New implementation (after applying this change)

The updated algorithm now considers the total value of all outputs collectively, rather than considering them individually. Under this relaxed scheme, it will always be possible to pay for a given set of outputs of total value v if the total value u of available inputs satisfies u >= v.
Previously, Largest-First would fail to produce a result if the number of available inputs was lower than the number of requested outputs. However, under the new relaxed scheme, Largest-First will always succeed in paying for a given set of outputs of total value v as long as the total value u of the UTxO satisfies u >= v, regardless of how many inputs there are. This change updates several unit tests that previously expected failure to now expect a successful result.
This test is no longer required, as Largest-First no longer imposes the cardinality restriction that the number of inputs must be greater than or equal to the number of outputs.
This function indicates whether or not a given error value is one that can be returned by the Largest-First algorithm.
Random-Improve and Largest-First no longer fail in exactly the same cases, so we need to be a bit more careful when comparing failures.
Update documentation for `largestFirst` to reflect that the cardinality restriction has been removed.
- Group similar tests together.
- Give each test a unique number that allows the test to be easily identified on failure.
looks fine to me
Make the field names self-documenting.
To avoid too many instances of discarded data in properties that make use of the CoinSelectionData type, incorporate a bias towards generating sets of inputs that can completely pay for all the outputs. Additionally, generate coin sizes that have a reasonably high chance of collision with one another. This makes it more likely that coin selection algorithms are able to select just enough inputs to pay for the requested outputs (with zero excess value), which is an extremely useful edge case.
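The generator bias described above can be sketched as follows. This is an illustrative Python sketch only: the actual generators are QuickCheck-based Haskell code, and the names, ranges, and weights here are hypothetical.

```python
import random

# Hypothetical sketch of the biased generators described above.
# The real generators are QuickCheck-based; nothing here is the project's API.

def gen_coin(rng):
    # Draw coin values from a deliberately small range so that values
    # frequently collide with one another, making zero-excess selections
    # (inputs exactly covering outputs) reasonably likely.
    return rng.randint(1, 10)

def gen_selection_data(rng, n_outputs=3):
    outputs = [gen_coin(rng) for _ in range(n_outputs)]
    # Bias: keep adding inputs until their total can pay for all outputs,
    # so that properties over this data rarely need to discard it.
    inputs = []
    while sum(inputs) < sum(outputs):
        inputs.append(gen_coin(rng))
    return inputs, outputs

rng = random.Random(42)
inputs, outputs = gen_selection_data(rng)
assert sum(inputs) >= sum(outputs)
```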
We confirm that a selection is minimal by removing the smallest entry from the inputs and verifying that the reduced input total is no longer enough to pay for the total value of all outputs.
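The minimality check described above can be sketched like this (illustrative Python with hypothetical names, not the actual test code):

```python
# Hypothetical sketch of the minimality check described above: a selection
# is minimal if removing its smallest input leaves a total that can no
# longer pay for the total value of all outputs.

def is_minimal(selected_inputs, outputs):
    if not selected_inputs:
        return False
    reduced_total = sum(selected_inputs) - min(selected_inputs)
    return reduced_total < sum(outputs)

# A selection of total 5 for outputs totalling 5: dropping any input
# makes it insufficient, so the selection is minimal.
print(is_minimal([4, 1], [2, 1, 2]))  # → True
```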
added tests and properties look good.
Q: So after collectively addressing all outputs to be covered (instead of one-to-one), we have a more robust algorithm. Let's assume we want to cover outputs `[2, 1, 2]` and have UTxO entries `[4, 1, 1]`. In the old implementation, `2` could be attributed `4`, but the next `1` would not be enough to cover the next `2`, and we would end up with an error. Now, `4` will cover `[2, 2]` and `[1]` will cover the rest?
So now we can have a smaller number of UTxO entries than outputs to be covered and still find a covering coin selection?
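This scenario can be checked numerically. The sketch below is illustrative Python mirroring the values in the question above, not the project's code:

```python
# Hypothetical numeric check of the scenario above:
# outputs [2, 1, 2], UTxO [4, 1, 1].
outputs = [2, 1, 2]
utxo = sorted([4, 1, 1], reverse=True)

# Old (per-output) scheme: each input pays for at most one output.
# Output 2 takes input 4, output 1 takes input 1, and the remaining
# input 1 cannot cover the last output 2 on its own.
assert utxo[2] < outputs[2]  # 1 < 2: the old scheme fails here

# New (collective) scheme: only the totals matter.
assert sum(utxo) >= sum(outputs)  # 6 >= 5: selection succeeds
print("collective total", sum(utxo), "covers", sum(outputs))
```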
If we have inputs of
These specifically cover cases where the number of available inputs is smaller than the number of required outputs.
Related Issue
#39
Summary
This PR changes the reference implementation of the Largest-First coin selection algorithm to match the specification.

After applying this PR, the algorithm should always be successful at paying for a given set of outputs of total value v, provided that the total value u of available inputs satisfies u ≥ v (unless the maximum input count has been exceeded).
Old implementation (before applying this PR)
Rather than considering the total collective value of all outputs when attempting to find a selection of inputs, the old implementation considered each output independently, and for each output, attempted to find a selection of inputs to cover just that output.
This created an unnecessary cardinality restriction, namely that a single unique input could only be used to pay for at most one unique output, which meant that any set of n outputs could only be paid for if there were at least n inputs available. Under this restriction, the algorithm could fail even if the total value of inputs was sufficient to cover the required outputs.
New implementation (after applying this PR)
The updated algorithm now considers the total value of all outputs collectively, rather than considering them individually.
Under this relaxed scheme, it will always be possible to pay for a given set of outputs of total value v if the total value u of available inputs satisfies u ≥ v. (Unless the maximum input count has been exceeded.)
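The relaxed scheme can be sketched as follows. This is an illustrative Python sketch of the logic described above, not the Haskell reference implementation; `largest_first` and `max_input_count` are hypothetical names.

```python
# Illustrative sketch of the relaxed Largest-First scheme: inputs are
# selected largest-first until their total covers the total value of all
# outputs, considered collectively. Not the actual Haskell implementation.

def largest_first(utxo, outputs, max_input_count=None):
    target = sum(outputs)
    selected = []
    total = 0
    for coin in sorted(utxo, reverse=True):
        if total >= target:
            break
        if max_input_count is not None and len(selected) >= max_input_count:
            raise ValueError("maximum input count exceeded")
        selected.append(coin)
        total += coin
    if total < target:
        raise ValueError("UTxO balance insufficient")
    return selected

# Fewer inputs than outputs is now fine: one large input pays collectively.
print(largest_first([4, 1, 1], [2, 1, 2]))  # → [4, 1]
```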