Benefit of the gradients on the 2-dimensional Rosenbrock function #388
Hi Benoit. Maybe Rosenbrock is not the best function to make GEK shine. The surface changes smoothly, plain kriging already fits it pretty well, and the derivatives do not bring much additional information (at least they do not worsen the prediction 😅). I would say the benefits should show up on a function with stronger variations between training points.
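For reference, the $d$-dimensional Rosenbrock function, in its usual form, is

$$f(\mathbf{x}) = \sum_{i=1}^{d-1}\left[100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right],$$

a smooth polynomial, which is consistent with plain kriging already capturing the surface well from function values alone.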
We have detected some problems in GEKPLS in SMT GitHub master, not present in SMT 1.3. Do you use SMT 1.3?
Yes, I use SMT 1.3.
Hello Benoit,
What you are comparing is therefore a prediction error driven by the sample size, plus some noise from an ill-conditioned PLS matrix. If you chose a reduction of the model size from Rosenbrock(ndim=5) to n_comp=2, and a sample size of 50 points, you would obtain the result below. So GEKPLS helps, but it needs a certain number of points and an effective reduction of the model's dimension.
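A minimal sketch of that setting, assuming SMT 1.3's GEKPLS API; only ndim=5, n_comp=2, and the 50-point sample size come from the comment above, while theta0, the LHS setup, and extra_points are illustrative assumptions:

```python
# Sketch only: Rosenbrock(ndim=5) reduced to n_comp=2, 50 training points.
from smt.problems import Rosenbrock
from smt.sampling_methods import LHS
from smt.surrogate_models import GEKPLS

ndim = 5
fun = Rosenbrock(ndim=ndim)
xt = LHS(xlimits=fun.xlimits)(50)  # 50 training points
yt = fun(xt)

# theta0 is an illustrative guess; extra_points follows the SMT examples.
sm = GEKPLS(n_comp=2, theta0=[1e-2] * 2, xlimits=fun.xlimits, extra_points=1)
sm.set_training_values(xt, yt)
for i in range(ndim):
    # Analytic partial derivatives provided by the SMT test problem.
    sm.set_training_derivatives(xt, fun(xt, kx=i), i)
sm.train()
```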
Hi, I totally understand your point. What you want to compare is Kriging vs Gradient-Enhanced Kriging (GEK). In that case we should indeed obtain better performance with GEK, you are totally right! Unfortunately, GEK is not implemented in SMT (it has a lot of limitations for a large number of points or a high-dimensional problem). You are also right about the poor accuracy: as we add more badly approximated directions with GEKPLS, the numerical errors add up and the accuracy decreases. In fact, GEKPLS is based on the derivatives along the PLS-reduced principal components. In this case, not only are the PLS directions ill-conditioned, but the model does not correspond to GEK, since GEK does not use dimension reduction, at the price of being more expensive.
Thank you Paul for the explanation. |
Thanks Paul! |
Hello,
I'm experimenting with KPLS and GEKPLS on the 2-dimensional Rosenbrock function by measuring their prediction accuracy (in terms of relative L2-distance).
In the results I get, GEKPLS does not seem to benefit from the gradients.
I was expecting GEKPLS to be significantly more accurate than KPLS.
You'll find my script and the results I get below. Am I doing something wrong?
I've tried changing the number of starting points and switching the optimizer to TNC but the results were similar.
Thank you for your time,
Benoît
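
For reference, a minimal sketch of this kind of KPLS-vs-GEKPLS comparison, assuming SMT 1.3's API; the sample sizes, theta0, and the ad hoc rel_l2 helper below are illustrative assumptions, not the values from the original script:

```python
# Sketch only: compare KPLS and GEKPLS on the 2D Rosenbrock function by
# the relative L2 distance on an independent validation set.
import numpy as np
from smt.problems import Rosenbrock
from smt.sampling_methods import LHS
from smt.surrogate_models import KPLS, GEKPLS

ndim = 2
fun = Rosenbrock(ndim=ndim)

xt = LHS(xlimits=fun.xlimits)(20)    # training sample (size is an assumption)
yt = fun(xt)
xv = LHS(xlimits=fun.xlimits)(1000)  # validation sample for the error metric
yv = fun(xv)

def rel_l2(sm):
    # Ad hoc helper: relative L2 distance between predictions and truth.
    return np.linalg.norm(sm.predict_values(xv) - yv) / np.linalg.norm(yv)

# Plain KPLS trained on function values only.
kpls = KPLS(theta0=[1e-2])
kpls.set_training_values(xt, yt)
kpls.train()

# GEKPLS additionally trained on the analytic gradients of the test problem.
gekpls = GEKPLS(theta0=[1e-2], xlimits=fun.xlimits, extra_points=1)
gekpls.set_training_values(xt, yt)
for i in range(ndim):
    gekpls.set_training_derivatives(xt, fun(xt, kx=i), i)
gekpls.train()

print("KPLS   relative L2 error:", rel_l2(kpls))
print("GEKPLS relative L2 error:", rel_l2(gekpls))
```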