
Cross Entropy? #1

Closed
Zelex opened this issue Dec 1, 2016 · 1 comment
Zelex commented Dec 1, 2016

Looking through the code, I'm not sure if it's using cross entropy. If not, why not?

codeplea (Owner) commented Dec 1, 2016

Hi Zelex,

No, Genann doesn't use a cross-entropy loss. Genann implements very basic, standard back-propagation.
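(As I understand it, the usual argument for cross entropy concerns only the output-layer delta. A rough sketch in plain C — not Genann's actual code, and assuming sigmoid output units with a squared-error baseline:

```c
#include <stdio.h>

/* Output-layer delta (dE/dz) for a single sigmoid unit with output o
 * and target t. Squared error keeps the sigmoid derivative o*(1-o);
 * with a cross-entropy loss that factor cancels, leaving just (o - t). */
double delta_squared_error(double o, double t) { return (o - t) * o * (1.0 - o); }
double delta_cross_entropy(double o, double t) { return o - t; }

int main(void) {
    double o = 0.99, t = 0.0; /* a confidently wrong, saturated output */
    printf("squared-error delta: %g\n", delta_squared_error(o, t)); /* ~0.0098 */
    printf("cross-entropy delta: %g\n", delta_cross_entropy(o, t)); /* 0.99 */
    return 0;
}
```

The practical claim is that with squared error a saturated-but-wrong sigmoid output learns very slowly, because the o*(1-o) factor is near zero, while cross entropy keeps the gradient proportional to the error.)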

Why? Three reasons.

  1. New features should be justified. That is, if I'm going to add a new feature, I need a good reason to add it; I don't need a good reason to leave it out.

  2. I don't use back-prop at all in my own applications of Genann (I'm doing reinforcement learning). Back-propagation was added mostly as a sanity check and because others expect it.

  3. One of Genann's strengths is that it's small and hackable. If we implement cross entropy, then why not also implement the 500 other techniques and optimizations in common use?

Anyway, I'm not very familiar with cross entropy, so if I'm missing a compelling argument for it, please let me know. I'd like to hear your input.

Best,
Lewis
