
TODOs #13

Open
2 of 7 tasks
ghost opened this issue Apr 26, 2018 · 6 comments

Comments

@ghost commented Apr 26, 2018

This is an umbrella issue where we can collectively tackle some problems and improve the overall quality of this open-source reading comprehension model.

Goal
The network is already there. We just need to add more features on top of the current model.

  • Implement the full set of features stated in the original paper
  • Achieve the EM/F1 performance stated in the original paper in the single-model setting

Model

  • Increase the hidden units to 128 and report the results (#15 reported a performance increase when the hidden units were increased from 96 to 128)
  • Increase the number of heads to 8
  • Add dropouts in better locations to maximize regularization
  • Train the "unknown" word embedding (see the sketch below this list)
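
One way to handle the "unknown" embedding item, sketched below in TensorFlow 1.x style: keep the pretrained GloVe rows frozen and train only the UNK row. `glove_matrix`, `word_ids`, and the assumption that row 1 holds the UNK token are illustrative, not this repo's actual names.

```python
import tensorflow as tf

# Sketch only: freeze the pretrained GloVe matrix, but learn the UNK row.
# `glove_matrix` is assumed to be a [vocab_size, 300] numpy array whose
# row 1 is the "unknown" token; names are hypothetical, not the repo's API.
word_mat = tf.get_variable(
    "word_mat",
    initializer=tf.constant(glove_matrix, dtype=tf.float32),
    trainable=False)
unk_emb = tf.get_variable(
    "unk_emb", shape=[1, 300],
    initializer=tf.glorot_uniform_initializer())
# Swap the frozen UNK row for the trainable one before the lookup.
word_mat = tf.concat([word_mat[:1], unk_emb, word_mat[2:]], axis=0)
word_emb = tf.nn.embedding_lookup(word_mat, word_ids)  # word_ids: [batch, len]
```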

Data

  • Implement paraphrasing by back-translation to increase the data size

Contributions to any of these items are welcome; please comment on this issue to let us know if you want to work on one of these problems.

@ghost ghost changed the title Regularization TODOs Apr 26, 2018
@ghost ghost added the help wanted label Apr 26, 2018
@ghost (Author) commented Apr 27, 2018

As of f0c79cc, I have moved the dropouts from "before" layer norm to "after" layer norm. It doesn't make sense to drop the input channels to layer norm, since layer norm normalizes across the channel dimension; doing so causes a distribution mismatch between training time and inference time. We shall see how this improves the model.
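
A minimal sketch of this reordering (TensorFlow 1.x; the function and names are illustrative, not the exact arrangement in f0c79cc):

```python
import tensorflow as tf

def residual_block(x, sublayer_fn, dropout_rate, is_training):
    # Normalize first, on the full (undropped) activations ...
    y = tf.contrib.layers.layer_norm(x)
    # ... then drop *after* layer norm, so the statistics the norm sees
    # match between training and inference.
    y = tf.layers.dropout(y, rate=dropout_rate, training=is_training)
    return x + sublayer_fn(y)  # residual connection around the sublayer
```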

@alphamupsiomega commented Apr 29, 2018

To overcome your GPU memory constraints, what about just decreasing the batch size?

On a 1080 Ti (11 GB), I'm able to run 128 hidden units, 8 attention heads, 300 glove_dim, and 300 char_dim with a batch size of 12. At batch sizes of 16 and above, CUDA runs out of memory. Accuracy seems comparable so far.
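
For reference, the run above written out as a hypothetical set of options (names are illustrative and may not match the repo's actual flags):

```python
# Illustrative settings only, not necessarily the repo's flag names.
config = dict(
    hidden=128,      # hidden units
    num_heads=8,     # attention heads
    glove_dim=300,   # pretrained word embedding size
    char_dim=300,    # character embedding size
    batch_size=12,   # fits in 11 GB on a 1080 Ti; 16 and above runs out of memory
)
```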

@ghost (Author) commented Apr 29, 2018

You have a valid point, and I would like to know how your experiment goes. I would also suggest trying group norm instead of layer norm, as the group norm paper reports better performance at smaller batch sizes.
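
For concreteness, a minimal group-norm sketch over a `[batch, length, channels]` tensor, following Wu & He (2018); the function name and the choice of 8 groups are assumptions, not taken from this repo:

```python
import tensorflow as tf

def group_norm(x, groups=8, eps=1e-5):
    # x: [batch, length, channels]. Normalize per sample over the length
    # dimension and over the channels within each group.
    c = x.get_shape().as_list()[-1]
    shape = tf.shape(x)
    n, l = shape[0], shape[1]
    g = tf.reshape(x, tf.stack([n, l, groups, c // groups]))
    mean, var = tf.nn.moments(g, axes=[1, 3], keep_dims=True)
    g = (g - mean) / tf.sqrt(var + eps)
    x = tf.reshape(g, tf.stack([n, l, c]))
    gamma = tf.get_variable("gamma", [c], initializer=tf.ones_initializer())
    beta = tf.get_variable("beta", [c], initializer=tf.zeros_initializer())
    return x * gamma + beta
```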

@alphamupsiomega

Good suggestion, Min. Since that paper compares against batch norm, have you found that layer norm generally outperforms batch norm lately? One could also try batch norm for comparison. Interestingly, the 'break-even' point between batch norm and group norm is around batch size 12 under that paper's conditions. Layer norm is supposedly more robust to small mini-batches than batch norm.

Also, the settings from the comment above run fine on a 1070 GPU.

Do you have a sense of whether model parallelization across multiple GPUs is worth it for this type of model?

@localminimum (Owner)

Hi @mikalyoung, I haven't tried parallelisation across multiple GPUs, so I wouldn't know the best way to go about it. I hear that data parallelism is easier to get working than model parallelism. It seems from #15 that a bigger hidden size and more attention heads improve performance, so I would try fitting the bigger model with smaller per-GPU batches across multiple GPUs.
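
For what it's worth, a rough data-parallel sketch in TensorFlow 1.x (not this repo's code; `build_model` and `optimizer` are placeholders): each GPU processes its own slice of the batch with shared variables, and the gradients averaged across towers are applied once.

```python
import tensorflow as tf

def data_parallel_train_op(batch_tensors, num_gpus, build_model, optimizer):
    # Shard every input tensor along the batch dimension, one shard per GPU.
    shards = [tf.split(t, num_gpus) for t in batch_tensors]
    tower_grads = []
    for i in range(num_gpus):
        with tf.device("/gpu:%d" % i), \
             tf.variable_scope("model", reuse=(i > 0)):   # share weights
            loss = build_model([s[i] for s in shards])    # per-GPU loss
            tower_grads.append(optimizer.compute_gradients(loss))
    # Average each variable's gradient across the towers.
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars if g is not None]
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0),
                         grads_and_vars[0][1]))
    return optimizer.apply_gradients(averaged)
```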

@JACKHAHA363

Right now, what is the status of reproducing the paper's results?
