MOEA/D-DE vs NSGA-II : Battle of the Optimizers #294
Speed
Using the configurations mentioned above, I've run the test setting the random seed. My specs:
Processor: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (1.80 GHz boost)
@zoq @coatless @say4n (or anyone else for that matter) I'd be grateful if you could confirm these results on your local PC by cloning the repository above, along with your specs.

Somehow native Linux is slower than WSL2? My setup:
armadillo-10.5.2
g++-10.2.1
I didn't realize that I didn't have OpenBLAS, so I installed that and tried again.
I noticed that this still only ran singlethreaded, so I added the
Yet, that still ran primarily singlethreaded, so this optimizer must not be using many parallelized OpenBLAS functions. Why not one more? This time with
Okay I'm having fun here, what if we reduce OpenBLAS's overhead by setting
And hey why not let's switch back to reference BLAS... (now
Interesting, I guess the OpenMP overhead accounts for the runtime differences between OpenBLAS and reference BLAS. Now I'm curious about what
Looks like It would be interesting to see what clang does, but I probably should move on to other things... Anyway, I have absolutely no idea if all this information is useful, but I had fun playing with compiler flags. Results will not generalize past my particular machine. :)
@rcurtin I can't thank you enough for the detailed analysis you've provided. I have a request: can we keep this thread open indefinitely, pin it, or add it to the documentation? What are your thoughts?
Personally, I would not keep this open or pin it; what has been shown here is that MOEA/D is faster than NSGA-II on the ZDT test suite, which aligns with the results in the paper. Looking at the results posted, they are all very similar, so I don't think we will see huge differences between the two methods on different setups. One thing that currently isn't covered in your benchmark script is starting from different initial coordinates; maybe the ones we are testing against right now are more beneficial for MOEA/D, or maybe NSGA-II found a close solution before the max generation was reached. What we can see here is that one MOEA/D step is faster than one NSGA-II step. That said, what we should do is compile a suite of problems and benchmark different optimizers with different settings, similar to https://arxiv.org/abs/1910.05446 and https://arxiv.org/abs/2007.01547, publish it in the form of a paper, and link it on the website.
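The "different initial coordinates" idea above could be sketched as drawing each trial's starting population uniformly within the problem bounds under a distinct seed. A minimal sketch (the function name and shapes are illustrative helpers for the benchmark script, not ensmallen API):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw `populationSize` starting points of dimension `numVariables`,
// each coordinate uniform in [lowerBound, upperBound]. A distinct
// `seed` per benchmark trial gives an independent initial population.
std::vector<std::vector<double>> RandomInitialPopulation(
    std::size_t populationSize, std::size_t numVariables,
    double lowerBound, double upperBound, unsigned seed)
{
  std::mt19937 gen(seed);
  std::uniform_real_distribution<double> dist(lowerBound, upperBound);
  std::vector<std::vector<double>> population(
      populationSize, std::vector<double>(numVariables));
  for (auto& individual : population)
    for (auto& coordinate : individual)
      coordinate = dist(gen);
  return population;
}
```

Running the same ZDT benchmark over, say, 30 such seeds and averaging timings and indicator values would remove the dependence on a single starting point.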
Acknowledged. To reiterate, we agree that the implemented algorithm displays the results promised in the paper (perhaps more than promised): MOEA/D has not only outdone classic NSGA-II on the quality metric but also on speed. In conclusion, we have sufficient proof to say the algorithm is implemented correctly. Regarding the initial point, if we dive into that, would you suggest we start randomly, or should we make educated guesses? If the latter, what should the criterion be? Moving on to "finding a solution before max generations", we need a way to track this. I guess a callback would do? We can maybe add something like:

```cpp
template<typename IndicatorType, typename TestSuiteType>
void FunctionName(const IndicatorType& indicator, const TestSuiteType& suite)
{
  // For example, suite = ZDT.
  CubeType bestFront = suite.GetReferenceFront();
  // For example, indicator = IGD+.
  currentQuality = indicator.Apply(currentParetoFront, bestFront);
  if (currentQuality < bestQuality)
  {
    // Record the improvement...
  }
  steps++;
  // ...
}
```

Finally, do you have other suites in mind? We could try a combination of single-objective functions, but I'm not sure how well they would test the optimizer's ability. We could port DTLZ and the test suite mentioned in the MOEA/D paper as well. Let me know your thoughts.
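The "stop when quality plateaus" idea sketched in the snippet above could be made concrete as a small tracker: keep the best indicator value seen so far and count generations without meaningful improvement. This is a hedged sketch only; the class and member names are illustrative, not ensmallen's callback API, and any indicator (IGD, IGD+) could feed it.

```cpp
#include <cstddef>
#include <limits>

// Tracks the best (lowest) indicator value across generations and
// reports convergence once `patience` generations pass without an
// improvement larger than `tolerance`. Names are illustrative only.
class QualityTracker
{
 public:
  QualityTracker(double tolerance, std::size_t patience)
      : tolerance(tolerance), patience(patience) {}

  // Feed the indicator value of the current generation's front.
  // Returns true once quality has stalled for `patience` steps.
  bool Update(double currentQuality)
  {
    if (currentQuality < bestQuality - tolerance)
    {
      bestQuality = currentQuality;
      stalled = 0;
    }
    else
    {
      ++stalled;
    }
    return stalled >= patience;
  }

  double BestQuality() const { return bestQuality; }

 private:
  double tolerance;
  std::size_t patience;
  double bestQuality = std::numeric_limits<double>::infinity();
  std::size_t stalled = 0;
};
```

Hooked into a per-generation callback, this would record the generation at which NSGA-II or MOEA/D effectively converged, rather than always charging the full generation budget.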
I really like the real-world problems: http://www-personal.umich.edu/~fioretto/cfp/OPTMAS18/papers/paper_7.pdf. But if we move forward with them, we should make sure the interface is there to easily add new problems.
This issue has been automatically marked as stale because it has not had any recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions! 👍
Should we re-open this?
+1. @zoq?
I. Introduction
This issue documents the differences between NSGA-II and MOEA/D-DE. The results presented have been tested rigorously; links will be shared. The optimizers are compared on the accuracy of their results, their speed, and the quality of the Pareto front. The parameters have been taken directly from [1] for comparison.
II. Methodology
i) Parameters
Common
- numGenerations: 500
- populationSize: 300
- crossoverRate: 1.0
- mutationRate: 1 / num_variables
- mutationStrength: 1 / (distributionIndex + 1.0); distributionIndex = 20.0
- upperBound: 1.0
- lowerBound: -1.0

NSGA-II specific
- epsilon: 1e-6

MOEA/D specific
- neighborProb: 0.9
- neighborSize: 20
- differentialWeight: 0.5
- maxReplace: 2
- epsilon: 1e-10

ii) Testing agents
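The testing agents are the ZDT problems. As an illustration of their two-objective structure, ZDT1 in its standard form (each variable in [0, 1], convex true Pareto front attained at g = 1):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// ZDT1: f1(x) = x1; g(x) = 1 + 9 * sum(x2..xn) / (n - 1);
// f2(x) = g * (1 - sqrt(f1 / g)). The true Pareto front is
// attained at g = 1, i.e. x2 = ... = xn = 0.
std::pair<double, double> ZDT1(const std::vector<double>& x)
{
  const std::size_t n = x.size();
  const double f1 = x[0];
  double sum = 0.0;
  for (std::size_t i = 1; i < n; ++i)
    sum += x[i];
  const double g = 1.0 + 9.0 * sum / (n - 1);
  const double f2 = g * (1.0 - std::sqrt(f1 / g));
  return {f1, f2};
}
```

The other ZDT problems vary the shape of g and of the front (non-convex, discontinuous, multimodal), which is what makes the suite a reasonable stress test for both optimizers.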
a) Speed
Using the parameters and testing agents mentioned above, measure the runtime in milliseconds and calculate how many times faster MOEA/D is than NSGA-II.
b) Quality
With the above arguments, run indicators against the true Pareto front of each ZDT problem and determine accuracy.
c) Plot
The notebook for portfolio optimization provides a good real-life application; using callbacks to track the optimization process, see how these algorithms fare against each other.
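The Speed and Quality measurements above can be sketched generically: wall-clock timing of each optimizer run, and an inverted generational distance (IGD) indicator, which averages each reference-front point's distance to the nearest obtained point (lower is better). All helper names here are illustrative, not ensmallen API; the `runOptimizer` callable stands in for whichever optimizer's Optimize() call is being timed.

```cpp
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

using Point = std::vector<double>;

// a) Speed: wall-clock time of one optimizer run, in milliseconds.
double TimeRunMs(const std::function<void()>& runOptimizer)
{
  const auto start = std::chrono::steady_clock::now();
  runOptimizer();
  const auto stop = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(stop - start).count();
}

// "How many times faster" MOEA/D is than NSGA-II on the same problem.
double Speedup(double nsga2Ms, double moeadMs) { return nsga2Ms / moeadMs; }

double EuclideanDistance(const Point& a, const Point& b)
{
  double sq = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    sq += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(sq);
}

// b) Quality: IGD -- the mean, over the true (reference) front, of each
// point's distance to the nearest point of the obtained front.
double IGD(const std::vector<Point>& obtainedFront,
           const std::vector<Point>& referenceFront)
{
  double total = 0.0;
  for (const Point& ref : referenceFront)
  {
    double nearest = std::numeric_limits<double>::infinity();
    for (const Point& obt : obtainedFront)
      nearest = std::min(nearest, EuclideanDistance(ref, obt));
    total += nearest;
  }
  return total / referenceFront.size();
}
```

Note the choice of steady_clock rather than system_clock: it is monotonic, so the measured interval cannot be distorted by clock adjustments mid-run.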
Related: #282 #269 #176 #149
III. Reference Repository
https://github.com/jonpsy/Battle-of-Optimizers
IV. Bibliography