
add plain and cholesky rank updates #4387

Merged
merged 1 commit into from Jul 31, 2018

Conversation

karlnapf
Member

No description provided.

@@ -591,8 +591,8 @@ void CGMM::max_likelihood(SGMatrix<float64_t> alpha, float64_t min_cov)
switch (cov_type)
{
case FULL:
linalg::dger(
alpha.matrix[j * alpha.num_cols + i], v, v, cov_sum);
linalg::rank_update(
Member Author

@iglesias the old dger routine was a disaster, I added a new one, please double check.

auto y_martix =
SGVector<T>::convert_to_matrix(y, A.num_cols, 1, false);

auto temp_martix = SGMatrix<T>::matrix_multiply(
Member Author

@iglesias it created a temp matrix, called a multiply method, and then added the result.
The new one doesn't create a temp matrix. It only works for x=y, though it can easily be extended, and we didn't need x!=y in any of the dger calls anyway. See the sketch below.
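For illustration, a minimal sketch of the in-place symmetric rank-one update A <- A + alpha * x x^T being described here, assuming column-major storage as in SGMatrix; the function and parameter names are hypothetical and not the actual Shogun signature:

#include <vector>

// In-place A <- A + alpha * x x^T for a dense n x n column-major matrix,
// avoiding the temporary matrix the old dger-based code allocated.
template <typename T>
void rank_update_sketch(std::vector<T>& A, int n, const std::vector<T>& x, T alpha)
{
    for (int j = 0; j < n; ++j)
    {
        const T update = alpha * x[j];
        for (int i = 0; i < n; ++i)
            A[j * n + i] += update * x[i];
    }
}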

@karlnapf
Member Author

Fixes #4385 and prepares the ground for a linear regression with bias update.

@karlnapf
Member Author

Eigen3 can also do this, but it only updates the lower or upper part of the matrix. We might want to think about introducing types like symmetric matrices, where only half of the memory is effectively used and the rest can serve as temp storage for algorithms, like Eigen does (see the example below).
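For reference, this is the Eigen facility being referred to: rankUpdate on a selfadjointView writes only the chosen triangle. A small standalone example, not code from this PR:

#include <Eigen/Dense>

void eigen_rank_update_example()
{
    Eigen::MatrixXd A = Eigen::MatrixXd::Identity(3, 3);
    Eigen::VectorXd v(3);
    v << 1.0, 2.0, 3.0;

    // Only the lower triangle of A is updated with 2.0 * v * v^T;
    // the upper triangle is left untouched and could in principle
    // serve as scratch space.
    A.selfadjointView<Eigen::Lower>().rankUpdate(v, 2.0);
}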

@karlnapf karlnapf requested review from iglesias and OXPHOS July 31, 2018 11:27
@karlnapf
Member Author

@OXPHOS btw, is there a reason why we don't define the methods in linalg using a type trait, e.g.

template <typename T, typename T2 = typename std::enable_if_t<std::is_arithmetic<T>::value>>
template <typename T, typename T2 = typename std::enable_if_t<std::is_floating_point<T>::value>>
template <typename T, typename T2 = typename std::enable_if_t<std::is_integral<T>::value>>

etc.

Could this replace the macros to define things for certain ptypes?
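A rough sketch of what that could look like; the names are illustrative only, not an actual linalg method:

#include <type_traits>

// Only instantiable for floating-point element types; other types are
// rejected at compile time instead of being enumerated via per-ptype macros.
template <typename T,
          typename = std::enable_if_t<std::is_floating_point<T>::value>>
T scale_element(T value, T factor)
{
    return value * factor;
}

// scale_element(2.0, 3.0) compiles; scale_element(2, 3) does not.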

@karlnapf karlnapf merged commit 58b914a into shogun-toolbox:develop Jul 31, 2018
@OXPHOS
Member

OXPHOS commented Aug 7, 2018 via email

@karlnapf
Member Author

karlnapf commented Aug 8, 2018

wow congrats! :)

@iglesias (Collaborator) left a comment


Nice stuff!

Sorry about the long delay. Better late than never. :-)

T update;
for (auto j : range(A.num_cols))
{
x = b[j];
Collaborator

Out of curiosity, is having the local x better than just using b[j] throughout the method? Same question for update.

}

/**
* Updates a matrix \f$A\f$ with a rank one term in-place, if A = LL^T
Collaborator

Minor: I think for this method the A = LL^T can be left out.

* Updates the Cholesky factorization \f$A = L L^{*}\f$ with a rank one
* term in-place.
* If A = LL^T before the update, then after it
* LL^{*} = A + alpha * b b^{*}
Collaborator

Quite nitpicking: perhaps use a different name for the L, as they are not the same as the ones above. On Wikipedia they use a tilde, analogously for A. It is completely subjective, I think, whether we want the doc to be more mathematically correct or just to reflect what happens in the code. I am happy either way :-)
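For readers of this thread, the textbook rank-one Cholesky update that the doc describes (alpha = 1 case) looks roughly like this. It is a sketch of the standard algorithm with hypothetical names, not necessarily the exact code added in this PR:

#include <cmath>
#include <vector>

// Given the lower factor L of A (n x n, column-major) and a vector b,
// overwrite L so that the result factors A + b b^T.
void cholesky_rank_one_update_sketch(
    std::vector<double>& L, int n, std::vector<double> b)
{
    for (int k = 0; k < n; ++k)
    {
        const double Lkk = L[k * n + k];
        const double r = std::sqrt(Lkk * Lkk + b[k] * b[k]);
        const double c = r / Lkk;
        const double s = b[k] / Lkk;
        L[k * n + k] = r;
        for (int i = k + 1; i < n; ++i)
        {
            L[k * n + i] = (L[k * n + i] + s * b[i]) / c;
            b[i] = c * b[i] - s * L[k * n + i];
        }
    }
}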
