After using @ja-thomas's snippet in #395 and carefully reading the literature that comes with mlrMBO, especially "Model-Based Multi-objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark", I made an example to understand how things work. Here are my questions:
1. How do you determine or even calculate lambda for LCB/DIB? In the mixed-space optimization section of the mlrMBO tutorials, lambda was set to 5, but the package vignette says 1 for a purely numeric space and 2 otherwise (mixed space). So where does the 5 come from in that example, and is it correct to pass lambda = 2 in our example above?
2. Is there a way to tune with nested resampling and multiple objectives? I do care about the final performance on the test set in the outer resampling instance. However, I find it clunky to do so: tuning on the inner resampling, as in the example above, yields two or three "best" hyperparameter combinations, so obtaining performance on the outer test set would require a shady tune-grid step.
3. In the paper "Model-Based Multi-objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark", epsilon-EGO was found to deteriorate less under multi-point proposal than sms-EGO, so why is sms-EGO the default for DIB?
4. Any advice on how to improve the example would be great.
There is no true best lambda value. In some papers certain values work better, but that is just due to the specific problems they looked at; if you have totally different functions, another value might work better. If you knew that your function really was a realization of a certain Gaussian process, then there certainly would be an optimal lambda, but this assumption usually does not hold. My gut feeling tells me that a value between 2 and 5 seems fair. After all, the difference in how fast you get close to the true optimum (Pareto front) should be small.
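If you want to set lambda explicitly rather than rely on the defaults, a minimal sketch for a bi-objective DIB setup might look like this (assuming the current mlrMBO API; `cb.lambda = 2` mirrors the example above):

```r
library(mlrMBO)

# Bi-objective control object; pass cb.lambda explicitly instead of
# relying on the default (1 for purely numeric spaces, 2 otherwise).
ctrl <- makeMBOControl(n.objectives = 2L)
ctrl <- setMBOControlMultiObj(ctrl, method = "dib")
ctrl <- setMBOControlInfill(ctrl, crit = makeMBOInfillCritDIB(cb.lambda = 2))
```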
You do nested resampling to estimate the performance you can get if you tune your hyperparameters. Therefore, we need one optimal result. Multi-objective optimization gives you a Pareto set instead of a single best result. You could come up with a selection strategy, or always select one setting yourself; afterwards you could do the nested resampling to validate the tuning + selection process. Multi-objective optimization is not for tuning hyperparameters, it's for exploring the possible trade-off between two or more objectives.
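One such selection strategy, sketched below: scalarize the Pareto front with a weight vector and carry the winning setting into nested resampling. This assumes a finished `MBOMultiObjResult` named `res` with both objectives minimized; the equal weights are a hypothetical preference, not a recommendation.

```r
library(mlrMBO)

# res: an MBOMultiObjResult returned by mbo() on a bi-objective problem.
front <- res$pareto.front  # matrix, one row per Pareto-optimal point
# Rescale each objective to [0, 1] so the weights act on comparable scales
# (assumes each objective has some spread across the front).
front.scaled <- apply(front, 2, function(y) (y - min(y)) / (max(y) - min(y)))
weights <- c(0.5, 0.5)  # hypothetical trade-off; adjust to your preference
best <- which.min(front.scaled %*% weights)
chosen <- res$pareto.set[[best]]  # the single setting to validate downstream
```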
I guess the choice was made before the benchmark showed that eps-EGO is better. (Also @danielhorn ?)
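For what it's worth, switching away from the default is a one-liner. A sketch, assuming the `dib.indicator` argument of `setMBOControlMultiObj`:

```r
library(mlrMBO)

# Opt into the epsilon-EGO-style indicator instead of the "sms" default.
ctrl <- makeMBOControl(n.objectives = 2L)
ctrl <- setMBOControlMultiObj(ctrl, method = "dib", dib.indicator = "eps")
```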
Sorry, no (but it's probably too late anyhow)
Closing, because there is no bug (just questionable defaults)