
Conversation

perkyguy
Contributor

Description

I removed a lot of the hardcoded handling in _updateTrainingOptions and instead have it iterate over all of the trainDefaults options, setting each one if it exists. I didn't want to use a spread operator, since iterating keeps the update limited to the options we want passed through. That said, it isn't a big deal IMO, and a spread would make this very simple to implement if we don't care about extra options.
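In short, the change swaps per-option ifs for a loop over the defaults' keys (a condensed sketch based on the diff excerpts quoted later in this thread, not the exact patch):

// before: one hardcoded if per supported option
if (opts.timeout) { this.trainOpts.timeout = opts.timeout; }

// after: iterate the known defaults, so extra keys on opts are simply ignored
// (the reason a plain spread/Object.assign was avoided here)
Object.keys(NeuralNetwork.trainDefaults)
  .forEach(opt => this.trainOpts[opt] = opts[opt] || this.trainOpts[opt]);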

Motivation and Context

This is in response to some review comments that had not been submitted before #137 was merged.

How Has This Been Tested?

yarn test - all passed

Screenshots (if appropriate):

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Author's Checklist:

  • My code focuses on the main motivation and avoids scope creep.
  • My code passes current tests and adds new tests where possible.
  • My code is SOLID and DRY.
  • I have updated the documentation as needed.

Reviewer's Checklist:

  • I kept my comments to the author positive, specific, and productive.
  • I tested the code and didn't find any new problems.
  • I think the motivation is good for the project.
  • I think the code works to satisfy the motivation.

…g in the _ prefix. Also, made the operation a little less fragile by updating any options that are in the defaults object rather than a specific list. Some exceptions (e.g. activation and log) are handled separately. Same goes for the _getTrainOpts function.
Contributor
@freddyC left a comment

I like the changes. There are a few things I commented on that I think we still need to handle. Let me know what you think.

* momentum: (number),
* activation: ['sigmoid', 'relu', 'leaky-relu', 'tanh']
*/
updateTrainingOptions(opts) {
Contributor

I originally had this method as private (i.e. starting with _) but then asked myself why. I think it's fine to expose it to the user, though I could see it going either way. Mostly, omitting the _ signals to the user that they can safely call it without strange side effects.

I would vote for keeping it public but it isn't a big deal.

Contributor Author

They already set these options implicitly via the constructor or train(); do they need a third way to set the training options?

Contributor

No, they don't need it, but since we already have it set up it costs us nothing to expose it explicitly. I'm OK keeping it private; I just thought I would bring it up.

if (opts.timeout) { this.trainOpts.timeout = opts.timeout; }
if (opts.activation) { this.activation = opts.activation; }
_updateTrainingOptions(opts) {
Object.keys(NeuralNetwork.trainDefaults).forEach(opt => this.trainOpts[opt] = opts[opt] || this.trainOpts[opt]);
Contributor

I think this would create weird behavior in the instance of updating training options. For example:

net.train(data, { log: true });
net.train(data2, { logPeriod: 50 });

The second training would not print because the log would be reset to the default. The same problem would occur any time the update training options method is called.

I like the idea of iterating instead of static ifs, but maybe we need to iterate over the union of keys from the defaults and the passed-in object.
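A rough sketch of that union idea (hypothetical, not code from the PR):

const keys = new Set([
  ...Object.keys(NeuralNetwork.trainDefaults),
  ...Object.keys(opts),
]);
keys.forEach(opt => {
  // only touch options the caller actually passed, so earlier updates survive
  if (opts[opt] !== undefined) this.trainOpts[opt] = opts[opt];
});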

Contributor Author

I'm iterating over the keys of the static, not the static itself. I'm still setting this.trainOpts.

Contributor

Ideally this would throw an error, as the net is still training.

Contributor Author

@robertleeplummerjr, I think this is assuming synchronous training, so there shouldn't be any overlap with train timing.

Contributor

doh, that is correct.

Contributor

all the new tools easily confuse me

Contributor

SUPER SORRY. I read ... = opts[opt] || this.trainOpts[opt] and thought it said ... = opts[opt] || this.trainDefaults[opt]. This is fine the way it is, given that we call this in the constructor:

this._updateTrainingOptions(Object.assign({}, this.constructor.trainDefaults, options));
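
So, as an illustrative trace (assuming a default of log: false):

net._updateTrainingOptions({ log: true });     // trainOpts.log becomes true
net._updateTrainingOptions({ logPeriod: 50 }); // trainOpts.logPeriod becomes 50, and
                                               // trainOpts.log stays true, since the
                                               // fallback reads this.trainOpts, not the defaults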

return results;
return Object.keys(NeuralNetwork.trainDefaults)
  .reduce((opts, opt) => {
    if (this.trainOpts[opt]) opts[opt] = this.trainOpts[opt];
Contributor

I don't think we want to add the JSON representation of a callback to the results.

Contributor

I agree with this

Contributor Author

What should it be? If someone wants to output it via JSON.stringify, keys holding functions will be omitted. That is a little awkward, but sending in some string representation of the function means we'd have to account for it both when converting to and from JSON. If they are instead just using the JSON as an in-memory holder for their net's configuration (e.g. how the tests use it), it would actually provide meaningful data back, with references to the actual functions used rather than a placeholder.
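(For context, JSON.stringify silently drops function-valued keys:

JSON.stringify({ logPeriod: 10, callback: () => true });
// -> '{"logPeriod":10}'

so a callback in trainOpts simply disappears from stringified output rather than erroring.)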

Contributor

Another question: why would we want to give the JSON of something someone else owns?

Contributor

Said differently: there are likely values we do want to retain as JSON; perhaps we could just be static in defining them rather than including all of them. If it were, for example, just a list of the ones that matter, that would give us both a small code footprint and a record of what actually affected the net, which may be important later for JSON exporting.

Contributor

nm, I see it now

Contributor

My thought: using reduce is overkill. Let's get a list of the items that should be specifically exported, then map them.

Contributor

It isn't that I hate reduce; it has its place (the dark side: unstructured data), but here we know the context and structure.

Contributor Author

So are you proposing we have a get exportableTrainingOptions() that gives us a subset of get trainDefaults?
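Something like this, maybe (a hypothetical sketch; the option list is illustrative, pulled from names mentioned in this thread):

static get exportableTrainingOptions() {
  return ['log', 'logPeriod', 'momentum', 'timeout'];
}

_getTrainOpts() {
  // copy only the whitelisted options into the exported object
  const opts = {};
  NeuralNetwork.exportableTrainingOptions
    .forEach(opt => { opts[opt] = this.trainOpts[opt]; });
  return opts;
}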

Contributor

My initial thought was that this concept interfered with the general output and would somehow make the library too non-specific, but the more I think about it, the more it makes sense.

ty guys for thinking this through, and it does clean up the training area.

Contributor
@robertleeplummerjr left a comment

options are now fat free!

@robertleeplummerjr dismissed their stale review January 30, 2018 21:41

some good points brought up

Contributor
@freddyC left a comment

So the first comment (private vs. public) I can see going either way, and I am fine with it. I just thought I would mention it, as it might be convenient for users.

The second (resetting training options) was my bad reading of it. When I read it I thought "that will lead to weird behavior," but I totally read it wrong. Sorry.

The third comment is, I think, the only thing left. I mentioned that I would vote to either ignore both or possibly restore log with the console.log method (sketched below). Either way we do it, let's make sure the docs reflect it.

The last thing we don't address is validation. Do we want to? If we do, let's do that in a different PR (unless you want to group it; I am fine either way).
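
(Rough sketch of the log idea, for clarity; hypothetical, not code from this PR:

// on import: translate the serializable boolean back into a logger
if (opts.log === true) { this.trainOpts.log = console.log; }
// on export: store a boolean in place of the function
opts.log = typeof this.trainOpts.log === 'function';

)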

@robertleeplummerjr
Contributor

what kind of validation?

Contributor
@robertleeplummerjr left a comment

lookin' good, almost there

Contributor
@freddyC commented Jan 31, 2018

I think validation should be a new PR; I just brought it up (probably not the best place to do that, but we were on the topic lol). Mostly with the focus of helping a user find out why their network isn't training.

@robertleeplummerjr
Contributor

just fix up conflicts and we're good to go

@perkyguy merged commit 33479b0 into develop Feb 2, 2018
@perkyguy deleted the fix-training-option-setting branch February 2, 2018 21:55