Initial barycentric Lagrange interpolation #52
Closed
Implements barycentric Lagrange interpolation. Uses algorithm (3.1) from the
paper "Polynomial Interpolation: Lagrange vs Newton" by Wilhelm Werner to find
the barycentric weights, and then evaluates at `Gf256::zero()` using the second
or "true" form of the barycentric interpolation formula.
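A sketch of the approach, using plain `f64` arithmetic as a stand-in for the crate's `Gf256` type (the weight update shown is the standard incremental formulation, not necessarily Werner's exact operation-saving variant, and all names are illustrative):

```rust
/// Barycentric weights w_i = 1 / prod_{j != i} (x_i - x_j), built up
/// incrementally: each new point updates all previously computed weights.
fn barycentric_weights(xs: &[f64]) -> Vec<f64> {
    let mut w: Vec<f64> = Vec::with_capacity(xs.len());
    for j in 0..xs.len() {
        let mut wj = 1.0;
        for k in 0..j {
            let diff = xs[k] - xs[j];
            w[k] /= diff; // fold the new point into each old weight
            wj /= -diff;  // accumulate the new weight's denominator
        }
        w.push(wj);
    }
    w
}

/// Second ("true") form of the barycentric formula:
/// p(x) = sum_i (w_i / (x - x_i)) y_i  /  sum_i (w_i / (x - x_i)).
/// If x equals one of the nodes, (x - x_i) is zero; in the crate's Gf256
/// arithmetic that division panics, which is why x = 0 nodes are rejected.
fn barycentric_eval(xs: &[f64], ys: &[f64], w: &[f64], x: f64) -> f64 {
    let mut num = 0.0;
    let mut den = 0.0;
    for i in 0..xs.len() {
        let t = w[i] / (x - xs[i]);
        num += t * ys[i];
        den += t;
    }
    num / den
}

fn main() {
    // p(x) = 3 + 2x + x^2 sampled at x = 1, 2, 3
    let xs = [1.0, 2.0, 3.0];
    let ys: Vec<f64> = xs.iter().map(|&x| 3.0 + 2.0 * x + x * x).collect();
    let w = barycentric_weights(&xs);
    // Recover the constant term (the "secret") by evaluating at x = 0.
    println!("{}", barycentric_eval(&xs, &ys, &w, 0.0)); // ≈ 3.0
}
```
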
I also earlier implemented a variant of this algorithm, Algorithm 2 from "A new
efficient algorithm for polynomial interpolation," which uses fewer total
operations than Werner's version. However, because it uses many more
multiplications or divisions (depending on how you choose to write it), it runs
slower given the relative costs of subtraction/addition (equal) vs
multiplication, and especially division, in the Gf256 module.
The new algorithm takes n^2 / 2 divisions and n^2 subtractions to calculate the
barycentric weights, and another n divisions, n multiplications, and 2n
additions to evaluate the polynomial*. The old algorithm runs in n^2 - n
divisions, n^2 multiplications, and n^2 subtractions. Without knowing the exact
running time of each of these operations we can't say for sure, but a good
guess is that the new algorithm trends toward about 1/3 of the old running time
as n -> infinity. It's also easy to see theoretically that for small n the
original Lagrange algorithm is faster. This is backed up by benchmarks, which
showed that the new algorithm is faster for n >= 5, more or less what we should
expect given the running times in n of these algorithms.
To ensure we always run the faster algorithm, I've kept both versions and only
use the new one when 5 or more points are given.
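For reference, the small-n path, classic Lagrange evaluation at x = 0, can be sketched like this (again with `f64` standing in for `Gf256`; the function name is illustrative, not the crate's):

```rust
/// Classic Lagrange evaluation at x = 0:
/// p(0) = sum_i y_i * prod_{j != i} x_j / (x_j - x_i).
/// O(n^2) multiplications and divisions, but with less up-front work than
/// computing barycentric weights, so it wins for small n.
fn lagrange_at_zero(points: &[(f64, f64)]) -> f64 {
    let mut acc = 0.0;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        let mut li = 1.0; // Lagrange basis polynomial l_i evaluated at 0
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                li *= xj / (xj - xi);
            }
        }
        acc += yi * li;
    }
    acc
}

fn main() {
    // Two shares of p(x) = 7 + 4x, so the secret p(0) is 7.
    let shares = [(1.0, 11.0), (2.0, 15.0)];
    println!("{}", lagrange_at_zero(&shares)); // prints 7
}
```
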
Previously the tests in the lagrange module were allowed to pass nodes with
x = 0 to the interpolation algorithms. Genuine shares will never have x = 0,
since such a share would just be the secret, so `scheme::secret_share` never
deals them out; division by 0 now panics instead. This meant getting rid of the
`evaluate_at_works` test, but `interpolate_evaluate_at_0_eq_evaluate_at`
provides a similar test.

Further work will include the use of barycentric weights in the `interpolate`
function.
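To recap why nodes at x = 0 are excluded: evaluating the polynomial at 0 yields exactly its constant term, which is the secret. A minimal illustration (plain `f64` coefficients rather than `Gf256`; names are illustrative):

```rust
/// Evaluate p(x) = a_0 + a_1 x + a_2 x^2 + ... with Horner's rule.
fn eval(coeffs: &[f64], x: f64) -> f64 {
    coeffs.iter().rev().fold(0.0, |acc, &c| acc * x + c)
}

fn main() {
    let secret = 42.0;
    let coeffs = [secret, 7.0, 3.0]; // p(x) = 42 + 7x + 3x^2
    // A "share" at x = 0 would be the secret itself, so no dealer
    // hands one out, and interpolation can safely reject x = 0 nodes.
    println!("{}", eval(&coeffs, 0.0)); // prints 42
}
```
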
A couple more interesting things to note about barycentric weights:

- They can be calculated incrementally before all shares are present. When
additional shares come in, computation can resume with no penalty to the total
runtime.
- They do not depend on the y values of the interpolation points, nor on the x
value we want to evaluate for. We only need to know the x values of our
interpolation points.
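The incremental property can be sketched as follows (a hypothetical `Weights` helper over `f64`, not the crate's API): folding in one more share touches each existing weight once, so the work already done is reused rather than redone, and only x values ever enter.

```rust
/// Barycentric weights for the points seen so far:
/// w_i = 1 / prod_{j != i} (x_i - x_j).
struct Weights {
    xs: Vec<f64>,
    w: Vec<f64>,
}

impl Weights {
    fn new() -> Self {
        Weights { xs: Vec::new(), w: Vec::new() }
    }

    /// Incorporate one more interpolation point in O(n): divide every
    /// existing weight by its distance to the new x, and build the new
    /// weight from the same differences. No y values are needed.
    fn push(&mut self, x: f64) {
        let mut wn = 1.0;
        for (xi, wi) in self.xs.iter().zip(self.w.iter_mut()) {
            let diff = *xi - x;
            *wi /= diff;
            wn /= -diff;
        }
        self.xs.push(x);
        self.w.push(wn);
    }
}

fn main() {
    let mut ws = Weights::new();
    for x in [1.0, 2.0, 3.0] {
        ws.push(x);
    }
    // A fourth share arrives later; previous work is reused, not redone.
    ws.push(4.0);
    println!("{:?}", ws.w);
}
```
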