
Moving EVD and SVD heavy lifting into separate methods #3775

Merged
merged 2 commits into from Apr 23, 2017

Conversation

Contributor

@izcram izcram commented Apr 22, 2017

Small refactoring PR to get to know the project and its processes

@@ -102,168 +102,176 @@ bool CPCA::init(CFeatures* features)

// center data
Map<MatrixXd> fmatrix(feature_matrix.matrix, num_features, num_vectors);

Member

You can add your name to the authors at the beginning of the file, as well as change the license to BSD (grep for "BSD license" in the source tree).


@karlnapf karlnapf left a comment


Minimal changes required.

Let's see what Travis says.


if (m_whitening) {
for (int32_t i = 0; i < num_dim; i++) {
if (CMath::fequals_abs<float64_t>(0.0, eigenValues[i], m_eigenvalue_zero_tolerance)) {
Member

Minor style: we put opening braces ({) on new lines.


// covariance matrix
MatrixXd cov_mat(num_features, num_features);
cov_mat = fmatrix*fmatrix.transpose();
Member

Next (small) PR: transpose() actually changes the matrix memory, which is horrible; adjoint() is the method we want to use here.

@@ -225,6 +225,10 @@ class CPCA: public CDimensionReductionPreprocessor
* whitening to tackle numerical issues
*/
float64_t m_eigenvalue_zero_tolerance;

private:
void init_with_evd(const SGMatrix<float64_t>& feature_matrix, int32_t max_dim_allowed);
Member

These need minimal Doxygen documentation saying that they alter the state (the transformation matrix). It could also mention that the EVD variant is based on an eigendecomposition of the covariance matrix, while the SVD variant performs an SVD of the data matrix directly.

fixed style (opening braces on new line)
@karlnapf
Member

One final request: could you squash the commits? (Or use git commit --amend and force-push to your fork.)

@lisitsyn
Member

@karlnapf you can squash commits with a click now. "Squash and merge"

@karlnapf karlnapf merged commit 22a41a8 into shogun-toolbox:develop Apr 23, 2017
karlnapf pushed a commit that referenced this pull request Apr 25, 2017
* Moving EVD and SVD heavy lifting into separate methods