Add text style transfer (#166) #263
base: master
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master     #263      +/-   ##
==========================================
- Coverage   82.53%   82.48%   -0.05%
==========================================
  Files         205      206       +1
  Lines       15829    15848      +19
==========================================
+ Hits        13064    13072       +8
- Misses       2765     2776      +11

Continue to review full report at Codecov.
* initial commit
* bug fixes and adjusting conv inputs
* separate forward function for Discriminator and Generator and disable Gen training for debugging
* remove debugger statement
* bug fix
* detaching stuff before accumulating
* refactor and add component as optional parameter
* Add optimizer for and backprop against encoder
* Add in README
* more fixes to eval mode
* create optimizers so that they can be saved
* fix typo
* linting issues
* add type annotation for encoder
* fix linting
* Isolate AE in training
* works after changing the learning rate
* remove debugger
Please merge.
You might also want to change the PR title; it should reference #166 instead.
    train_op_g.zero_grad()
    step += 1

vals_d = model(batch, gamma_, lambda_g_, mode="train",
Which dataset do you use here? train_g? It seems that train_d is used in texar-tf. Why does such a difference exist?
For training, I use the same dataset because it is tricky to switch datasets between steps. Since the discriminator and generator are trained separately, I've found it doesn't really affect the results.
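A minimal sketch of the point above, with toy modules standing in for the PR's actual model (the names train_op_d / train_op_g mirror the diff, everything else is assumed): because the discriminator and generator are updated by separate optimizers, reusing one batch for both steps keeps the updates independent.

```python
import torch
import torch.nn as nn

disc = nn.Linear(4, 1)   # toy stand-in for the Discriminator
gen = nn.Linear(4, 4)    # toy stand-in for the Generator
train_op_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
train_op_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

def train_step(batch):
    # Discriminator update: detach the generator output so gradients
    # do not flow back into the generator.
    loss_d = disc(gen(batch).detach()).mean()
    train_op_d.zero_grad()
    loss_d.backward()
    train_op_d.step()

    # Generator update on the *same* batch. Gradients that reach the
    # discriminator here are discarded by its next zero_grad(), so the
    # two updates stay independent even without a second data split.
    loss_g = -disc(gen(batch)).mean()
    train_op_g.zero_grad()
    loss_g.backward()
    train_op_g.step()
    return loss_d.item(), loss_g.item()

loss_d, loss_g = train_step(torch.randn(8, 4))
```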
Removing the two iterators and keeping just one train iterator.
If that is the case, we do not need to have train_d in the iterator(?)
    hparams=config.model['opt']
)

def _train_epoch(gamma_, lambda_g_, epoch, verbose=True):
In texar-tf, both train_d and train_g are used in _train_epoch, but we only use train_g in this function. Did I miss something important?
* Reviewed changes
* linting

* initial commit
* linting