Refactor laplacian #2212
Conversation
To answer your questions:
@@ -1 +1 @@ | |||
Subproject commit 06d38c0751ec6450c33a85fbe15ceb8543c6cc65 | |||
Subproject commit 3165600ed1d43ad630b367311e648716125ab686 |
what is this again for?
The matrix operations class is a great idea. We are working on this. It should be a bit more general than your ideas here, but you are totally right about pulling things like log-determinants out. @lambday it would be great if you could also think about adding the things that @yorkerlin needs for the GPs. We would then cover many things at once. @lambday @yorkerlin this is a great example of the synergy effects of GSoC and is perfect for the pre-GSoC time. Having those problems solved in a general way will massively benefit the rest of Shogun.
@yorkerlin I'm open for discussion :) We're aiming at separating Shogun's linear algebra frontend from any particular backend dependency. So, in linalg/internal we can provide implementations of the most commonly used linear algebra operations in Shogun with different backends, and through a common interface Shogun classes can choose any of them for those tasks. We'll always have some global setting (a default backend, say, Eigen3), and if we want we can also have module-specific settings. All of this can be done via cmake options. If a user wants to use a particular backend for his algorithm, that's also possible. I have made a prototype implementation here. Please check the README. Also, could you please let me know whether your requirements fall under the modules I mentioned in that README? What exactly are your requirements (what's the input/output of the operation that you're trying to do using Eigen3)? This will help a lot in further polishing and in discovering faults in the plan. @karlnapf yeah, I am quite excited about this :D Let's hope that we get the basics integrated within the next week (I'll add Eigen3 sum and dot first). I've got to check some cmake stuff as well.
@lambday

```cpp
MatrixXd eigen_V = eigen_L.triangularView<Lower>().adjoint().solve(
    MatrixXd::Identity(eigen_L.rows(), eigen_L.cols()));
eigen_v.block(0, 1, n-1, n-1).diagonal() = (0.5*ArrayXd::LinSpaced(n-1, 1, n-1)).sqrt();
EigenSolver<MatrixXd> eig(eigen_v);
```
@karlnapf
@karlnapf
@karlnapf
Sorry for closing the PR accidentally.
@karlnapf it seems Travis fails due to the python module.
@yorkerlin alright, it fits nicely in the linalg internal library I was planning. So according to me, it would be best to have it like:

```cpp
template <class Scalar, class Vector, class Matrix, Backend backend>
struct get_cholesky
{
    // maybe use better naming for the variables here? like W and sW?
    static Matrix compute(Vector W, Vector sW, Matrix Kernel, Scalar scale)
    {
        // something default
    }
};
```

and then a partial specialization for your Eigen3 implementation like:

```cpp
template <class Scalar>
struct get_cholesky<Scalar, Matrix<Scalar, Dynamic, 1>, Matrix<Scalar, Dynamic, Dynamic>, Backend::Eigen3>
{
    typedef Matrix<Scalar, Dynamic, 1> VectorXt;
    typedef Matrix<Scalar, Dynamic, Dynamic> MatrixXt;
    static MatrixXt compute(VectorXt W, VectorXt sW, MatrixXt Kernel, Scalar scale)
    {
        // add the implementation that you have in MatrixOperations.h
    }
};
```

please check out this and this. This way you can directly work with Eigen3 vectors and matrices as per your need, as it's all internal. And also, we can have a backend-independent implementation like (see this):

```cpp
template <class Scalar, class Vector, class Matrix>
Matrix get_cholesky(Vector W, Vector sW, Matrix Kernel, Scalar scale)
{
    return impl::get_cholesky<Scalar, Vector, Matrix, linalg_traits<Factorization>::backend>::compute(W, sW, Kernel, scale);
}
```

Then the use case would be as simple as (check this), with your default Eigen3 backend:

```cpp
// W, sW, kernel are Eigen3 objects
linalg::get_cholesky<float64_t, VectorXd, MatrixXd>(W, sW, kernel, scale);
```

In a similar way you can add the other methods. I'll add the basic stuff to shogun as soon as the design gets approved :)
Please ping me on irc if you have any questions or doubts regarding this :)
@lambday could you push this hard this week? Then @yorkerlin can use at least the Cholesky solver and log-determinants. It's looking good, guys! :)
@karlnapf absolutely. So the latest design is finalized, right? I was checking some cmake things regarding how to use this; I just figured it out. So it would work like
that sets the USE_EIGEN3/USE_VIENNACL flags. For module-specific settings I am not finding better variable names than
That sounds good to me, but it's out of my expertise. I guess @vigsterkr has a comment on this too.
@lambday do we really want to do this at compile time?
@karlnapf @lambday and @vigsterkr
@lambday
@yorkerlin no, please do not add stuff to the matrix operations class. This class might be used for very GP-specific operations (which I don't think exist). However, methods like the ones you mentioned are supposed to go into the linear algebra framework.
@yorkerlin yeah, as @karlnapf said, we should aim at doing these things in a better way. I think your methods already fit nicely in the linalg framework that we have planned (thanks to @lisitsyn for his further suggestions, we're trying to make the API super simple). As soon as I add the basics, you can add these methods in shogun/mathematics/linalg/internal/ (which doesn't exist right now).
@vigsterkr as per our discussion on irc, the runtime alternative is far more painful to maintain, in my opinion. We can, however, choose to use any backend irrespective of what global backend was set, even as shogun users. This compile-time option leads to much smaller and more manageable code for these tasks, I believe.
@karlnapf @lambday
@yorkerlin are you sure that these methods you want to add are too specific and won't be used anywhere else but GP? I think methods like
I'll surely let you know when the basics are added. Trying to finish within this week.
@lambday For
For
Let me know your thoughts.
@yorkerlin that sounds good. You do your GP-specific transformations inside the helper class, and then call the linalg framework from within once you have reduced your tasks to standard problems. BTW, have a look into the googletest documentation for how to select certain tests.
Working on extending the Laplace method for multi-class classification.
I think it should be the same; a user doesn't care, he just wants to use one class.
@karlnapf
However, the variational method can work for
@yorkerlin I agree with you, things are different under the hood. However, a user should have the possibility to just say "Laplace" and then the corresponding Laplace method is used internally. I think this can be solved by introducing a wrapper class CLaplacianInference that checks the likelihood and then instantiates the corresponding inference method object internally. Then we could even hide the other classes from the modular interfaces, and things might be cleaner. This is particularly interesting for users who are not familiar with too many details about these things.
Yeah, the Girolami thing would be neat to have. He is my former supervisor and we have in fact already talked about having this in Shogun. Though @emtiyaz had some not-so-promising results with this, I believe.
Girolami's method works reasonably well for prediction accuracy, but not for marginal likelihood approximation. I have the results in Fig. 2 of the following paper.
Hi Wu, thanks.

Emtiyaz
@emtiyaz
@emtiyaz
We should focus on the writeup now. Could you please send a pull request with the notebook only?
@karlnapf
I have a few suggestions on that, but I am busy with a paper submission until

Emtiyaz
August 11, but the last week is reserved for other things. We want to finish implementing/writing things within the next few days.
@yorkerlin what's the state of this one?
@lambday, @karlnapf, @yorkerlin, what should be done about this one?
The code is too tightly coupled with Eigen3. Even if cholesky is there in linalg, we'd have to use the specific Eigen3 backend for this, so I think it's okay for now to keep it this way. Many of these operations are specific to GP, and I'm afraid there is no better way to manage all of them with generic linalg apart from being dependent on Eigen3. Even in the future, linalg won't be (and is not intended to be) able to generalize all of what Eigen3 does! Just a few things that I'd do differently for this PR
@yorkerlin could you please give it a couple of minutes to review this once? If you think it's ready, please let us know :)
@lambday I second your thoughts on the generality of linalg, actually. However, it would still be cool to have expensive and simple operations in linalg, like Cholesky, linear solves, etc. These are also used everywhere in Shogun, so we get a lot from generalising them. @yorkerlin could you address the points that @lambday mentioned? I think they are really good.
I will work on it at the beginning of next week.

best, wu lin
Cool, @yorkerlin. However, before adding new stuff, it is more relevant to take care of the existing features. At least, that is my opinion ;-)
Working on it.

best, wu lin
I will first clean up the existing GP code in order to use the
@karlnapf @lambday

best, wu lin
@yorkerlin @lambday first step: Cholesky factorisation and linear solves. Maybe matrix-matrix product, but only if the same matrix has to be multiplied many times (otherwise it makes no sense to use the GPU).
@karlnapf, @yorkerlin, ping :-)
Further clean-up work will be done once I complete the FITC stuff.
ok!
@karlnapf take a look at this.
I will send the link for the notebook tomorrow.
Note that the original implementation of LaplacianInferenceMethod in Shogun used log(lu.determinant()) to compute the log determinant, which is not numerically stable. (In fact, this implementation does not follow the GPML code.)
Maybe MatrixOperations.h will be merged into Math.h.
However, I think in that case the Math.h file would need to include the Eigen3 header.
Another issue is that currently I use MatrixXd and VectorXd to pass variables in MatrixOperations.h.
Maybe SGVector and SGMatrix would be better. (Should I use "SGVector&" or "SGVector"?)
I do not know whether passing an SGVector to a function copies the elements of the SGVector.