Improved autotuning scoring and parameter search process #813
Conversation
tgaddair commented on Feb 6, 2019
- Use the time between samples, rather than the aggregated time spent in the background thread, to compute the sample score
- Normalize scores after each sample to focus the Bayesian optimization on certain ranges of parameters
- Increase the noise parameter from 0.2 to 0.8 to reflect a priori entropy expectations in the samples
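The per-sample normalization described in the second bullet can be sketched roughly as follows (a minimal illustration; `NormalizeScores` is a hypothetical name, not Horovod's actual API):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Standardize raw scores to zero mean and unit variance, so the Gaussian
// process behind the Bayesian optimizer always sees scores on a consistent
// scale after each new sample. Raw scores are left untouched by the caller,
// so re-running this over the full history is idempotent.
std::vector<double> NormalizeScores(const std::vector<double>& raw) {
  double mu = 0.0;
  for (double x : raw) mu += x;
  mu /= raw.size();

  double var = 0.0;
  for (double x : raw) var += (x - mu) * (x - mu);
  double sigma = std::sqrt(var / raw.size());
  if (sigma == 0.0) sigma = 1.0;  // guard against identical scores

  std::vector<double> norm(raw.size());
  for (size_t i = 0; i < raw.size(); ++i) {
    norm[i] = (raw[i] - mu) / sigma;
  }
  return norm;
}
```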
A couple of comments.
*mu = sum / v.size();
std::vector<double> diff(v.size());
std::transform(v.begin(), v.end(), diff.begin(), std::bind2nd(std::minus<double>(), *mu));
Could this take a lambda instead of std::bind2nd? Seems more reader-friendly :-)
Good idea. Done!
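For reference, the suggested change would look something like this (a sketch of the lambda replacement, with an illustrative helper name; `std::bind2nd` was deprecated in C++11 and removed in C++17, so the lambda is also more portable):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Subtract the mean from each element. The lambda captures mu by value
// and replaces std::bind2nd(std::minus<double>(), mu).
void CenterValues(const std::vector<double>& v, double mu,
                  std::vector<double>* diff) {
  diff->resize(v.size());
  std::transform(v.begin(), v.end(), diff->begin(),
                 [mu](double x) { return x - mu; });
}
```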
VectorXd y_i(1);
y_i(0) = norm_score;
y_sample.row(i) = y_i;
If I read this correctly, all observations are re-normalized every time, right?
Yes, we only call BayesianOptimization::NextSample() once after each call to AddSample(), so it should be performant. This is also idempotent, because we store the raw value in y_samples_, while the matrix y_sample is normalized.
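The raw-vs-normalized bookkeeping described here can be sketched without the Eigen types (class and member names are hypothetical, not Horovod's actual members): raw scores are stored untouched, and the normalized view is rebuilt from them on demand, which is what makes repeated normalization idempotent.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

class ScoreHistory {
 public:
  void AddSample(double raw) { raw_scores_.push_back(raw); }

  // Rebuild the normalized view from scratch; always starting from the
  // unmodified raw scores makes repeated calls return the same result.
  std::vector<double> Normalized() const {
    double mu = 0.0;
    for (double x : raw_scores_) mu += x;
    mu /= raw_scores_.size();
    double var = 0.0;
    for (double x : raw_scores_) var += (x - mu) * (x - mu);
    double sigma = std::sqrt(var / raw_scores_.size());
    if (sigma == 0.0) sigma = 1.0;  // guard against identical scores
    std::vector<double> out(raw_scores_.size());
    for (size_t i = 0; i < raw_scores_.size(); ++i) {
      out[i] = (raw_scores_[i] - mu) / sigma;
    }
    return out;
  }

 private:
  std::vector<double> raw_scores_;  // never mutated by normalization
};
```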
LGTM, thanks!
Signed-off-by: Lin Yuan <apeforest@gmail.com>