
restructure optimize_coefficients_full() and _evec()

shi-bai committed Jun 30, 2018
1 parent af2f336 commit 30a42016e945323193ad6b536928036073a68cae
Showing with 140 additions and 34 deletions.
  1. +84 −17 fplll/pruner/pruner.h
  2. +8 −4 fplll/pruner/pruner_cost.cpp
  3. +9 −5 fplll/pruner/pruner_optimize.cpp
  4. +39 −8 fplll/pruner/pruner_optimize_tc.cpp
@@ -469,13 +469,14 @@ template <class FT> class Pruner
Hierarchy of calls if PRUNER_HALF is not set (default):
-  - This function first invokes optimize_coefficients_evec() which optimizes using only
+  - This function first invokes optimize_coefficients_evec_core() which optimizes using only
     even-position vectors for speed.
   - Then it calls several local tuning functions optimize_coefficients_local_adjust_*() to
     adjust the parameters on a small scale.
-  - Finally it does an optimization using full vectors using optimize_coefficients_full(). This
+  - Finally it does an optimization using full vectors using optimize_coefficients_full_core(). This
     procedure is repeated for several rounds until no further improvements can be achieved.
@@ -493,8 +494,8 @@ template <class FT> class Pruner
Hierarchy of calls if PRUNER_HALF is not set (default):
-  - This function first invokes optimize_coefficients_evec() and then
-    optimize_coefficients_full() to optimize the overall enumeration cost.
+  - This function first invokes optimize_coefficients_evec_core() and then
+    optimize_coefficients_full_core() to optimize the overall enumeration cost.
- Then it tries to adjust the pruning parameters to achieve the target succ. probability (or
expected number of solutions) by calling either optimize_coefficients_incr_prob() or
@@ -512,14 +513,18 @@ template <class FT> class Pruner
@brief Run the optimization process using 'even' coefficients.
Run the optimization process, successively using the algorithm activated using half
-  coefficients: the input pr has length n; but only the even indices in the vector will be used
-  in the optimization. In the end, we have pr_i = pr_{i+1}.
+  coefficients: the input pr has length n, but only the even indices in the vector will be
+  used in the optimization. In the end, we have pr_i = pr_{i+1}.

-  Note that this function only optimizes the overall enumeration time where the target function
-  is: `single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0)`
+  Note that this function calls `optimize_coefficients_evec_core()`. The difference is that
+  this function, `optimize_coefficients_evec()`, does not assume that pr already contains
+  valid coefficients. If the input pr is empty, it starts with the `greedy()` method and
+  does some preparation work before it invokes `optimize_coefficients_evec_core()`.
+
+  This may be used in both `optimize_coefficients_cost_fixed_prob()` and
+  `optimize_coefficients_cost_vary_prob()`.
+
+  Note that this function (and `optimize_coefficients_evec_core()`) only optimizes the
+  overall enumeration time, where the target function is:
+  `single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0)`
*/
void optimize_coefficients_evec(/*io*/ vector<double> &pr);
@@ -529,11 +534,15 @@ template <class FT> class Pruner
Run the optimization process, successively using the algorithm activated using full
coefficients. That is, we do not have the constraint pr_i = pr_{i+1} in this function.

-  Note that this function only optimizes the overall enumeration time where the target function
-  is: single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0);
+  Note that this function calls `optimize_coefficients_full_core()`. The difference is that
+  this function, `optimize_coefficients_full()`, does not assume that pr already contains
+  valid coefficients. If the input pr is empty, it starts with the `greedy()` method and
+  does some preparation work before it invokes `optimize_coefficients_full_core()`.
+
+  Note that this function (and `optimize_coefficients_full_core()`) only optimizes
+  the overall enumeration time, where the target function is:
+  `single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0)`

This is used in both optimize_coefficients_cost_fixed_prob() (as a first step) and
optimize_coefficients_cost_vary_prob().
*/
void optimize_coefficients_full(/*io*/ vector<double> &pr);
@@ -644,7 +653,7 @@ template <class FT> class Pruner
void load_basis_shapes(const vector<vector<double>> &gso_rs);
/**
-  @brief convert pruning coefficient from external to internal format.
+  @brief convert pruning coefficients from external to internal format.

   Convert type, reverse the ordering, and only select coefficients at even positions
@@ -664,7 +673,7 @@ template <class FT> class Pruner
void print_coefficients(/*i*/ const evec &pr);
/**
-  @brief Convert pruning coefficient from internal to external.
+  @brief Convert pruning coefficients from internal to external.

   Convert type, reverse the ordering, and repeat each coefficient twice
@@ -822,6 +831,64 @@ template <class FT> class Pruner
*/
FT repeated_enum_cost(/*i*/ const evec &b);
/**
@brief Do some preparation work for the optimization process.
If the flag PRUNER_START_FROM_INPUT is not enabled, this function
tries to find some raw pruning coefficients using the `greedy()`
method (as the input pr may be empty).
If the flag PRUNER_START_FROM_INPUT is enabled, this function will
start with the input pruning parameters without calling `greedy()`.
This function is used in the beginning stage for both
`optimize_coefficients_evec()` and `optimize_coefficients_full()`.
*/
void optimize_coefficients_preparation(/*io*/ vector<double> &pr);
/**
@brief Run the optimization process using 'even' coefficients.
Run the optimization process, successively using the algorithm activated
using half coefficients: the input pr has length n; but only the even
indices in the vector will be used in the optimization. In the end, we
have pr_i = pr_{i+1}.
Note that this function only optimizes the overall enumeration time where
the target function is:
`single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0)`
This may be used in both `optimize_coefficients_cost_fixed_prob()` and
`optimize_coefficients_cost_vary_prob()`.
Note that to use this function, the input pr must be non-empty.
It should already contain valid pruning coefficients. E.g. they
could be derived from `optimize_coefficients_preparation()`.
*/
void optimize_coefficients_evec_core(/*io*/ vector<double> &pr);
/**
@brief Run the optimization process using all the coefficients.
Run the optimization process, successively using the algorithm activated
using full coefficients. That is, we do not have the constraint
pr_i = pr_{i+1} in this function.
Note that this function only optimizes the overall enumeration time
where the target function is:
`single_enum_cost(pr) * trials + preproc_cost * (trials - 1.0)`
This is used in both `optimize_coefficients_cost_fixed_prob()` and
`optimize_coefficients_cost_vary_prob()`.
Note that to use this function, the input pr must be non-empty.
It should already contain valid pruning coefficients. E.g. they
could be derived from `optimize_coefficients_preparation()`.
*/
void optimize_coefficients_full_core(/*io*/ vector<double> &pr);
/**
@brief gradient of the cost of repeating enumeration and preprocessing
@param b reference for output
@@ -923,7 +990,7 @@ template <class FT> class Pruner
Heuristic adjust procedure which seems to be useful to achieve the
target succ. probability (or expected number of solutions).
This function is used in the end to make sure the ratio between
-  the succ. prob form current pruning coefficient and the target
+  the succ. prob from the current pruning coefficients and the target
succ. prob are sufficiently close. Depending on whether the
succ. prob is larger (or smaller), it will try to reduce the
pruning coefficients (or increase) in small scale to make succ. prob
@@ -141,7 +141,8 @@ template <class FT> inline FT Pruner<FT>::target_function(/*i*/ const vec &b)
FT trials = log(1.0 - target) / log(1.0 - probability);
if (!trials.is_finite())
{
-    throw std::range_error("NaN or inf in target_function (METRIC_PROBABILITY_OF_SHORTEST)");
+    throw std::range_error("NaN or inf in target_function (METRIC_PROBABILITY_OF_SHORTEST). "
+                           "Hint: using a higher precision sometimes helps.");
}
trials = trials < 1.0 ? 1.0 : trials;
return single_enum_cost(b) * trials + preproc_cost * (trials - 1.0);
@@ -152,7 +153,8 @@ template <class FT> inline FT Pruner<FT>::target_function(/*i*/ const vec &b)
FT trials = target / expected;
if (!trials.is_finite())
{
-    throw std::range_error("NaN or inf in target_function (METRIC_EXPECTED_SOLUTION)");
+    throw std::range_error("NaN or inf in target_function (METRIC_EXPECTED_SOLUTION). Hint: "
+                           "using a higher precision sometimes helps.");
}
// if expected solutions > 1, set trial = 1
trials = trials < 1.0 ? 1.0 : trials;
@@ -172,7 +174,8 @@ template <class FT> inline FT Pruner<FT>::repeated_enum_cost(/*i*/ const vec &b)
FT trials = log(1.0 - target) / log(1.0 - probability);
if (!trials.is_finite())
{
-    throw std::range_error("NaN or inf in repeated_enum_cost (METRIC_PROBABILITY_OF_SHORTEST)");
+    throw std::range_error("NaN or inf in repeated_enum_cost (METRIC_PROBABILITY_OF_SHORTEST). "
+                           "Hint: using a higher precision sometimes helps.");
}
trials = trials < 1.0 ? 1.0 : trials;
return single_enum_cost(b) * trials + preproc_cost * (trials - 1.0);
@@ -183,7 +186,8 @@ template <class FT> inline FT Pruner<FT>::repeated_enum_cost(/*i*/ const vec &b)
FT trials = 1.0 / expected;
if (!trials.is_finite())
{
-    throw std::range_error("NaN or inf in repeated_enum_cost (METRIC_EXPECTED_SOLUTION)");
+    throw std::range_error("NaN or inf in repeated_enum_cost (METRIC_EXPECTED_SOLUTION). Hint: "
+                           "using a higher precision sometimes helps.");
}
// if expected solutions > 1, set trial = 1
trials = trials < 1.0 ? 1.0 : trials;
@@ -11,8 +11,11 @@ template <class FT> void Pruner<FT>::optimize_coefficients_cost_vary_prob(/*io*/
FT old_c0, old_c1, new_c, min_c;
vec b(n), best_b(n);
-  // step 1 use half coefficients only
-  optimize_coefficients_evec(pr);
+  // step 1 preparation
+  optimize_coefficients_preparation(pr);
+  // step 2 optimization using half coefficients only
+  optimize_coefficients_evec_core(pr);
load_coefficients(b, pr);
best_b = b;
@@ -64,7 +67,7 @@ template <class FT> void Pruner<FT>::optimize_coefficients_cost_vary_prob(/*io*/
#endif
// step 2.2 full optimization
-  optimize_coefficients_full(pr);
+  optimize_coefficients_full_core(pr);
load_coefficients(b, pr);
new_c = target_function(b);
@@ -102,9 +105,10 @@ void Pruner<FT>::optimize_coefficients_cost_fixed_prob(/*io*/ vector<double> &pr
FT prob;
// step 1 call global optimization (without fixing succ. prob)
-  optimize_coefficients_evec(pr);
+  optimize_coefficients_preparation(pr);
+  optimize_coefficients_evec_core(pr);
optimize_coefficients_local_adjust_smooth(pr);
-  optimize_coefficients_full(pr);
+  optimize_coefficients_full_core(pr);
#ifdef DEBUG_PRUNER_OPTIMIZE
load_coefficients(b, pr);
@@ -6,9 +6,9 @@ FPLLL_BEGIN_NAMESPACE
//#define DEBUG_PRUNER_OPTIMIZE_TC
/**
-  * optimize with constrains that b_i = b_{i+1} for even i.
+  * preparation work to obtain some raw pruning coefficients
*/
-template <class FT> void Pruner<FT>::optimize_coefficients_evec(/*io*/ vector<double> &pr)
+template <class FT> void Pruner<FT>::optimize_coefficients_preparation(/*io*/ vector<double> &pr)
{
evec b(d);
@@ -45,19 +45,38 @@ template <class FT> void Pruner<FT>::optimize_coefficients_evec(/*io*/ vector<do
// achieve the target probability.
if (!opt_single)
{
-    vector<double> pr(n);
-    save_coefficients(pr, min_pruning_coefficients);
+    vector<double> pr_min(n);
+    save_coefficients(pr_min, min_pruning_coefficients);
if (measure_metric(min_pruning_coefficients) > target)
{
fill(min_pruning_coefficients.begin(), min_pruning_coefficients.end(), 0.);
-      optimize_coefficients_decr_prob(pr);
+      optimize_coefficients_decr_prob(pr_min);
}
-    load_coefficients(min_pruning_coefficients, pr);
+    load_coefficients(min_pruning_coefficients, pr_min);
}
preproc_cost *= 10;
}
save_coefficients(pr, b);
}
-// 2. gradient method // modify this to becomes an independent method!!!!
/**
 * preparation, then optimization with the constraint that b_i = b_{i+1} for even i.
*/
template <class FT> void Pruner<FT>::optimize_coefficients_evec(/*io*/ vector<double> &pr)
{
optimize_coefficients_preparation(pr);
optimize_coefficients_evec_core(pr);
}
/**
 * optimize with the constraint that b_i = b_{i+1} for even i.
*/
template <class FT> void Pruner<FT>::optimize_coefficients_evec_core(/*io*/ vector<double> &pr)
{
evec b(d);
load_coefficients(b, pr);
// gradient method (default flag enables PRUNER_GRADIENT)
if (flags & PRUNER_GRADIENT)
{
if (verbosity)
@@ -74,6 +93,7 @@ template <class FT> void Pruner<FT>::optimize_coefficients_evec(/*io*/ vector<do
#endif
};
// Nelder-Mead method
if (flags & PRUNER_NELDER_MEAD)
{
if (verbosity)
@@ -95,9 +115,20 @@ template <class FT> void Pruner<FT>::optimize_coefficients_evec(/*io*/ vector<do
}
/**
-  * optimize without constrains b_i = b_{i+1} for even i.
+  * Preparation, then optimization without the constraint b_i = b_{i+1} for even i.
 */
template <class FT> void Pruner<FT>::optimize_coefficients_full(/*io*/ vector<double> &pr)
{
  optimize_coefficients_preparation(pr);
  optimize_coefficients_full_core(pr);
}

/**
 * Optimize without the constraint b_i = b_{i+1} for even i.
 * Note that this function assumes pr already contains some valid
 * pruning coefficients.
 */
template <class FT> void Pruner<FT>::optimize_coefficients_full_core(/*io*/ vector<double> &pr)
{
vec b(n);
