Evaluating test data while training, not other process #5

Open

SSUHan opened this issue Dec 19, 2018 · 1 comment

SSUHan commented Dec 19, 2018

Hello jingxil,

Thank you for sharing this wonderful code :)
I'm trying to use this code for a sentence reordering task, and I've run into some trouble.

You use the forward_only option to separate the "train" scope from the "test" (or inference) scope, running them as separate processes.
However, I want to evaluate on the test data during training, to see whether the model is overfitting. That isn't easy here because of the attention mechanism (the train and test graphs are built differently). :(
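For context, here is a toy sketch of the pattern I mean, as I understand it (my own simplified code, not from this repo; the scope and variable names are made up):

```python
import tensorflow as tf  # TensorFlow 1.x-style API

def model(x, forward_only):
    # Shared weights live in one variable scope; reuse=tf.AUTO_REUSE lets a
    # training graph and an inference graph be built over the same variables.
    with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
        w = tf.get_variable("w", shape=[4, 2])
        logits = tf.matmul(x, w)
    if forward_only:
        # Inference path: return predictions instead of raw logits.
        return tf.argmax(logits, axis=-1)
    # Training path: raw logits to feed into the loss.
    return logits

x = tf.placeholder(tf.float32, [None, 4])
train_logits = model(x, forward_only=False)  # graph used for training
predictions = model(x, forward_only=True)    # graph used for test/inference
```

Building both graphs over shared variables works fine for a toy classifier like this, but with the attention decoder the two unrolled graphs differ step by step, which is where I get stuck.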

Is there a good way to solve this problem?

Even if you don't write the code yourself, I'd really appreciate it if you could share any reference sites or hints I could look into.

Thank you

jingxil (Owner) commented Dec 25, 2018

Hi, SSUHan. Currently I am using the PyTorch framework, which makes it easy to switch between 'train' and 'test' mode. Since I am quite rusty at TensorFlow, I don't think I can help here. Sorry about that.
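For what it's worth, the switch in PyTorch looks roughly like this (a minimal sketch with a toy model, just for illustration):

```python
import torch
import torch.nn as nn

# Toy model just for illustration; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(128, 64), nn.Dropout(0.5), nn.Linear(64, 10))

model.train()              # training mode: dropout is active
# ... run training steps here ...

model.eval()               # eval mode: dropout is disabled
with torch.no_grad():      # no gradient tracking needed for evaluation
    test_out = model(torch.randn(32, 128))

model.train()              # switch back and continue training
```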
