
Generalize gradient check for multiple weight matrices #818

Merged
merged 5 commits on Aug 4, 2017

Conversation

@prlz77 (Contributor) commented Jul 23, 2017

No description provided.
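
Since the PR carries no description, here is a minimal sketch of what its title describes: a finite-difference gradient check that iterates over all of a layer's weight matrices (e.g. kernel and bias) instead of assuming a single one. The names below (`gradient_check`, the `loss` callback, the tolerances) are illustrative assumptions, not tiny-dnn's actual test code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using vec_t = std::vector<float>;  // stand-in for tiny-dnn's weight buffer

// Central-difference gradient check generalized over multiple weight
// matrices: every element of every non-empty matrix is perturbed, and the
// numeric gradient is compared against the analytically computed one.
template <typename LossFn>
bool gradient_check(std::vector<vec_t*>& weights,
                    const std::vector<vec_t*>& grads,
                    LossFn loss, float eps = 1e-3f, float tol = 1e-2f) {
  for (std::size_t i = 0; i < weights.size(); ++i) {
    if (weights[i]->empty()) continue;  // e.g. a layer without a bias term
    for (std::size_t j = 0; j < weights[i]->size(); ++j) {
      float& w = (*weights[i])[j];
      const float saved = w;
      w = saved + eps;
      const float e_plus = loss();   // forward pass with w + eps
      w = saved - eps;
      const float e_minus = loss();  // forward pass with w - eps
      w = saved;                     // restore the original weight
      const float numeric = (e_plus - e_minus) / (2.0f * eps);
      if (std::fabs(numeric - (*grads[i])[j]) > tol) return false;
    }
  }
  return true;
}
```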

@mention-bot:

@prlz77, thanks for your PR! By analyzing the history of the files in this pull request, we identified @Randl, @nyanp and @edgarriba to be potential reviewers.


Review comment on the diff:

```diff
- if (w.empty()) continue;
+ if ((*weights[0]).empty()) continue;
```

Member:

`if (weights.empty() || (*weights[0]).empty()) continue;` is preferred, in case `current->weights()` returns an empty vector.
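
The point of the combined condition is that `||` short-circuits: when `weights` is empty, `weights[0]` is never evaluated, so indexing an empty vector cannot occur for layers that own no parameters. A sketch of the guard in a hypothetical layer-iteration loop (the traversal is an assumption, not the PR's exact code):

```cpp
for (auto* current : layers) {  // hypothetical traversal of the network
  std::vector<vec_t*> weights = current->weights();
  // Short-circuit evaluation: if weights is empty, (*weights[0]) is
  // never reached, avoiding an out-of-bounds access.
  if (weights.empty() || (*weights[0]).empty()) continue;
  // ... perturb the weights and compare numeric vs. analytic gradients ...
}
```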

@bhack (Contributor) commented Jul 23, 2017

You need to run clang-format


Review comment on the updated diff:

```diff
- if (w.empty()) continue;
+ if (weights.empty() || (*weights[0]).empty()) continue;
```

Member:

@nyanp in what case would weights be non-empty but its first element empty?

Member:

@edgarriba Ah, probably `if (weights.empty())` is enough in this case. Checking the emptiness of each element can be done in `calc_delta`.
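
In other words, the call site would only skip layers that expose no weight matrices at all, while matrix-level emptiness is handled where the gradients are computed. A sketch of that split, with an assumed (not actual) `calc_delta` signature:

```cpp
// At the call site: skip only layers with no weight matrices at all.
// if (weights.empty()) continue;

// Inside the gradient computation: an empty matrix (e.g. a disabled
// bias) is skipped per element instead.
void calc_delta(const std::vector<vec_t*>& weights,
                std::vector<vec_t*>& grads) {
  for (std::size_t i = 0; i < weights.size(); ++i) {
    if (weights[i]->empty()) continue;  // element-wise emptiness check
    // ... accumulate the gradient for weight matrix i ...
  }
}
```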

Contributor Author:

Should I change it, then? I can do it in a moment, but I think it is correct either way.

@edgarriba (Member):

@prlz77 use clang-format so that we can merge

@Randl (Contributor) commented Jul 30, 2017

Problems with training? Looks strange.

@prlz77 (Contributor, Author) commented Jul 31, 2017

@Randl It seems to be a precision problem; I think it can be solved just by training for more iterations.
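
For context on why such checks are precision-sensitive: in float32, a finite-difference estimate carries both truncation and round-off error, so comparisons are usually made with a relative tolerance, and marginal cases can flip between pass and fail. An illustrative helper (an assumption, not the PR's test code):

```cpp
#include <algorithm>
#include <cmath>

// Relative error between numeric and analytic gradients. With float32,
// errors below roughly 1e-2 are often the tightest one can assert
// reliably, which may be why a borderline test passes only after
// training for more iterations.
float relative_error(float numeric, float analytic) {
  const float denom =
      std::max({std::fabs(numeric), std::fabs(analytic), 1e-8f});
  return std::fabs(numeric - analytic) / denom;
}
```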

@edgarriba (Member):

@prlz77 does #846 fix this? Otherwise we can merge it.

@prlz77 (Contributor, Author) commented Aug 4, 2017

@edgarriba this is complementary to #846, so it is still necessary.

@edgarriba merged commit d043857 into tiny-dnn:master on Aug 4, 2017
Randl pushed a commit to Randl/tiny-cnn that referenced this pull request Aug 20, 2017
* Generalize gradient check for multiple weight matrices

* Add nyanp fix

* clang-format