
Learning algos #19

Merged
merged 16 commits into from
Apr 13, 2017

Conversation

NikSQ
Contributor

@NikSQ NikSQ commented Mar 28, 2017

Implementation of GD variants. ADAM seems to have a problem.

@anandtrex
Contributor

That's one mega-code dump :-) Will take some time to review. Hopefully @maharjun can help.


@anandtrex anandtrex left a comment


Some major refactoring is needed to share more code:

  1. Why create one file in bin/ per gradient descent variant? Can't you have just one file with some sort of option selecting among all GD variants? I would strongly prefer this.
  2. Ditto for the GD implementations. Can't you have a single GD class with a class method that creates new individuals based on which GD variant is passed in as an argument?

Minor: as far as possible, please make sure that lines are no longer than 120 characters.
You should also address the failing PEP8 tests.
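To illustrate point 1 above, a single bin/ script could select the variant via a command-line option. This is only a hypothetical sketch: the parameter-class names and their arguments are taken from the diffs in this PR, but the `--variant` option and the `make_parameters` helper are invented for illustration.

```python
import argparse

# Hypothetical helper: map a --variant flag to the parameter set for that
# GD variant. The class names and fields mirror those quoted in this PR;
# here we just return (name, kwargs) pairs so the sketch is self-contained.
def make_parameters(variant):
    common = dict(learning_rate=0.01, exploration_rate=0.01,
                  n_random_steps=5, n_iteration=100,
                  stop_criterion=float('inf'))
    if variant == 'classic':
        return 'ClassicGDParameters', common
    if variant == 'adam':
        return 'AdamParameters', dict(common, first_order_decay=0.8,
                                      second_order_decay=0.8)
    if variant == 'stochastic':
        return 'StochGDParameters', dict(common, stoch_deviation=1,
                                         stoch_decay=0.99)
    raise ValueError('Unknown GD variant: %s' % variant)

parser = argparse.ArgumentParser()
parser.add_argument('--variant', choices=['classic', 'adam', 'stochastic'],
                    default='classic')
args = parser.parse_args(['--variant', 'adam'])  # e.g. bin/ltl-fun-gd.py --variant adam
name, params = make_parameters(args.variant)
```

One script with a `choices=` option also gives a free, self-documenting error message for unknown variants.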


def main():
    name = 'LTL-FUN-ADAM'
    root_dir_path = '~/LTL/sim_results.txt'  # CHANGE THIS to the directory where your simulation results are contained

Reset to None


# NOTE: Outerloop optimizer initialization
# TODO: Change the optimizer to the appropriate Optimizer class
parameters = AdamParameters(learning_rate=0.01, exploration_rate=0.01, n_random_steps=5, first_order_decay=0.8, second_order_decay=0.8, n_iteration=100, stop_criterion=np.Inf)

Line too long.

ADAM works now
Made sure that no line exceeds 120 characters

@anandtrex anandtrex left a comment


Structure and implementation look good. You still need to update things to conform to the PEP8 code style and make sure the CI tests pass.


parameters = ClassicGDParameters(learning_rate=0.01, exploration_rate=0.01, n_random_steps=5, n_iteration=100, stop_criterion=np.Inf)
# parameters = AdamParameters(learning_rate=0.01, exploration_rate=0.01, n_random_steps=5, first_order_decay=0.8, second_order_decay=0.8, n_iteration=100, stop_criterion=np.Inf)
# parameters = StochGDParameters(learning_rate=0.01, stoch_deviation=1, stoch_decay=0.99, exploration_rate=0.01, n_random_steps=5, n_iteration=100, stop_criterion=np.Inf)

Use the full name -- StochasticGDParameters

"""

StochGDParameters = namedtuple('StochGDParameters',
['learning_rate', 'stoch_deviation', 'stoch_decay', 'exploration_rate', 'n_random_steps', 'n_iteration',

I would prefer 'stochastic_deviation', 'stochastic_decay', etc.

elif type(parameters) is AdamParameters:
self.initAdam(parameters, traj)
elif type(parameters) is RMSPropParameters:
self.initRMSProp(parameters, traj)

Add an else branch for the case where parameters is none of the known types, and raise an error.
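The suggested else branch could look like the following. This is a hedged sketch: the real parameter classes live in this PR, so stub namedtuples stand in for them here, and `dispatch` returns the name of the init routine instead of calling a method on `self`.

```python
from collections import namedtuple

# Stub parameter types standing in for the real classes in this PR.
ClassicGDParameters = namedtuple('ClassicGDParameters', [])
AdamParameters = namedtuple('AdamParameters', [])
RMSPropParameters = namedtuple('RMSPropParameters', [])

def dispatch(parameters):
    """Return the init routine for a parameter object, or raise on an
    unknown type instead of silently doing nothing."""
    if type(parameters) is ClassicGDParameters:
        return 'init_classic_gd'
    elif type(parameters) is AdamParameters:
        return 'init_adam'
    elif type(parameters) is RMSPropParameters:
        return 'init_rmsprop'
    else:
        raise TypeError('Unknown parameter type: %r' % type(parameters))
```

Failing loudly here means a typo'd parameter class surfaces at construction time rather than as a half-initialized optimizer later.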

logger.info("-- End of (successful) gradient descent --")


def initClassicGD(self, parameters, traj):

All function names have to be lower_case_underscore_separated. See PEP8.

@NikSQ
Contributor Author

NikSQ commented Apr 7, 2017

I have now addressed the requested changes. I assume CircleCI does a complete PEP8 style check; if that is not the case, I'd still need to review the code for style errors.


if self.g < traj.n_iteration - 1 and traj.stop_criterion > self.current_fitness:
# Create new individual using the appropriate gradient descent
self.update_function(traj, np.dot(np.linalg.pinv(dx), fitnesses - self.current_fitness))

If I understand correctly, the update_function moves self.current_individual to the next step, and then below, you create a population by adding noise to it?
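For reference, the `np.linalg.pinv(dx) @ (fitnesses - current_fitness)` expression in the quoted line is a least-squares gradient estimate from the random steps. The sketch below demonstrates that mechanism on a toy linear fitness; the GD step and the noisy population at the end are assumptions about what `update_function` and the subsequent sampling do, not code from this PR.

```python
import numpy as np

rng = np.random.default_rng(0)

def least_squares_gradient(dx, fitnesses, current_fitness):
    """Estimate the gradient from sampled perturbations dx and their
    fitness values via the Moore-Penrose pseudo-inverse, matching
    np.dot(np.linalg.pinv(dx), fitnesses - current_fitness)."""
    return np.linalg.pinv(dx) @ (fitnesses - current_fitness)

# Toy check: f(x) = 2*x0 + 3*x1 has true gradient [2, 3].
dx = rng.normal(size=(5, 2))                  # 5 random steps, 2 dimensions
fitness_deltas = dx @ np.array([2.0, 3.0])    # exact linear fitness changes
grad = least_squares_gradient(dx, fitness_deltas, 0.0)

# Assumed follow-up (per the comment above): step the individual, then
# build the next population by adding exploration noise around it.
current_individual = np.zeros(2)
new_individual = current_individual - 0.01 * grad           # plain GD step
population = new_individual + 0.01 * rng.normal(size=(5, 2))  # noisy batch
```

With more random steps than dimensions, the pseudo-inverse solves an overdetermined least-squares problem, so the estimate is robust to a few noisy fitness evaluations.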

comment='Decay of the second order momentum')

self.delta = 10**(-8)
self.s = np.zeros(len(self.current_individual))

Please use more descriptive names for these. What are self.s and self.r?
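In standard Adam, those two accumulators are the running first and second moments of the gradient, which suggests names like `first_moment`/`second_moment`. A self-contained sketch of one Adam step with descriptive names, using the hyperparameter names from this PR (the sign convention here is for minimization; the assignment of `self.s`/`self.r` to these roles is an inference, not confirmed by the diff):

```python
import numpy as np

def adam_step(theta, grad, first_moment, second_moment, t,
              learning_rate=0.01, first_order_decay=0.8,
              second_order_decay=0.8, delta=1e-8):
    """One Adam update. first_moment is the decaying mean of gradients,
    second_moment the decaying mean of squared gradients; t is the
    1-based step count used for bias correction."""
    first_moment = (first_order_decay * first_moment
                    + (1 - first_order_decay) * grad)
    second_moment = (second_order_decay * second_moment
                     + (1 - second_order_decay) * grad ** 2)
    m_hat = first_moment / (1 - first_order_decay ** t)    # bias-corrected
    v_hat = second_moment / (1 - second_order_decay ** t)
    theta = theta - learning_rate * m_hat / (np.sqrt(v_hat) + delta)
    return theta, first_moment, second_moment
```

Renaming the attributes this way makes the `delta` smoothing term and the two decay rates self-explanatory at the call site.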

@anandtrex anandtrex merged commit 2d8b031 into master Apr 13, 2017
@anandtrex anandtrex deleted the learning_algos branch April 13, 2017 15:41
@anandtrex
Contributor

Merged! I did a squash merge, so you might want to continue development off master and not continue on the learning_algos branch.
