This repository has been archived by the owner on Jul 7, 2023. It is now read-only.

added optimizer registry #1401

Merged
merged 4 commits into tensorflow:master from optimizer_registry on Jan 24, 2019

Conversation

jackd
Contributor

@jackd jackd commented Jan 22, 2019

Added optimizer registration, similar to other registration methods. This allows users to add new optimizers in their t2t_usr_dir without having to manipulate tensor2tensor code directly.

EDIT: points below resolved
Error-checking the number of args in a registered function is a bit gross (inspect doesn't play nicely with functools.partial), but I'm unsure of a better way to handle this.
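For context on the friction mentioned here: older inspect helpers such as inspect.getargspec do not support functools.partial objects, while Python 3's inspect.signature does resolve a partial's remaining, unbound parameters. A small illustration (function names are hypothetical):

```python
import functools
import inspect

def make_optimizer(learning_rate, hparams, use_locking=False):
    return (learning_rate, hparams, use_locking)

# Bind the first positional argument; the partial is still callable,
# but it is not a plain function, which is what trips up older helpers.
partial_factory = functools.partial(make_optimizer, 0.1)

# inspect.signature (Python 3) reports only the parameters left unbound:
remaining = list(inspect.signature(partial_factory).parameters)
print(remaining)  # ['hparams', 'use_locking']
```

So one option for arg-count validation is to route everything through inspect.signature rather than the older argspec helpers.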

I'm not convinced the way optimizers are handled in tf.contrib.layers.OPTIMIZER_CLS_NAMES is optimal, but I don't have a better way to bind loop variables to function calls.
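The loop-variable issue referred to here is Python's late binding of closures: functions defined in a loop all share the loop variable and therefore see its final value, unless the current value is bound explicitly. A minimal illustration (not tensor2tensor code):

```python
# Late binding: every closure created in the loop shares `name`,
# so each factory ends up seeing the *last* value of the loop variable.
broken = [lambda lr: (name, lr) for name in ("adam", "sgd")]
print(broken[0](0.1))  # ('sgd', 0.1) -- not ('adam', 0.1)

# Binding the current value as a default argument fixes it;
# functools.partial is the other common fix.
fixed = [lambda lr, name=name: (name, lr) for name in ("adam", "sgd")]
print(fixed[0](0.1))  # ('adam', 0.1)
```

Both the default-argument trick and functools.partial capture the value at definition time rather than at call time, which is what a name-to-factory mapping built in a loop needs.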

@googlebot googlebot added the cla: yes PR author has signed CLA label Jan 22, 2019
@afrozenator
Contributor

This is really neat @jackd -- many thanks! Will merge this in now.

Just FYI @rsepassi just a few days ago wrote a generic registry ee64f6f we'll port this (and maybe other registries) over to that abstraction -- but many thanks again!

@afrozenator afrozenator merged commit ba31f44 into tensorflow:master Jan 24, 2019
tensorflow-copybara pushed a commit that referenced this pull request Jan 24, 2019
PiperOrigin-RevId: 230778721
@jackd jackd deleted the optimizer_registry branch January 25, 2019 00:02
@jackd
Contributor Author

jackd commented Jan 25, 2019

@afrozenator cheers. I was considering abstracting away some of the registry details myself, glad I wasn't the only one.

I'm a little confused as to the point of the new implementation though: _GENERIC_REGISTRIES/create_registry/_reset don't seem to be used anywhere except tests? _reset loops over hard-coded registries rather than all registered registries? There's also no way to register registries with custom error checking (like those already implemented)?

The _OPTIMIZERS registry is looking a bit lonely down at the bottom too (so I'm not surprised it was overlooked in _reset).

In short, this isn't how I'd have refactored it, and it looks like a mash-up of competing ideas, but there may well be use cases it's trying to serve that I'm not appreciating. Happy to put in a separate PR; I just don't want to step on anyone's toes.

@rsepassi
Contributor

rsepassi commented Jan 25, 2019 via email

@jackd jackd mentioned this pull request Jan 25, 2019
@jackd
Contributor Author

jackd commented Jan 25, 2019

@rsepassi So I started drafting a plan, but it got a bit out of hand, so I just went ahead and made a pull request. Happy to discuss/make changes, but it's probably best to continue the conversation over there :)

kpe pushed a commit to kpe/tensor2tensor that referenced this pull request Mar 2, 2019
* added optimizer registry

* fixed adafactor -> Adafactor

* fixed default naming

* improved base optimizer registration implementation
kpe pushed a commit to kpe/tensor2tensor that referenced this pull request Mar 2, 2019
PiperOrigin-RevId: 230778721