Question about pre-training language model #43
Hello, I see that the number of epochs in pretrain_language_model.yaml is set to 80, and I am using 4 Titan Xp GPUs to pre-train the language model. However, one epoch takes 8 hours, so 80 epochs would take 640 hours (nearly 27 days). Do I really have to train for all 80 epochs? And how can I tell when the language model training has converged?

Comments
In my reproduction, 1~2 epochs are enough: the final model (vision model + the language model we pretrained + alignment model) can match the performance the authors report in the paper. But I found another issue: the param use_sm is set to False in pretrain_language_model.yaml. Is spelling mutation used when pretraining the language model?
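(For context: use_sm toggles spelling mutation, a noise-injection augmentation that corrupts the input word so the LM learns to correct it. The sketch below is a generic illustration of the idea, not necessarily the repo's exact implementation; the mutation probability and the set of operations are assumptions.)

```python
import random
import string

# Generic spelling-mutation sketch. Assumption: this illustrates the kind of
# noise that use_sm enables; the actual probabilities and operations used in
# the repo may differ.
def spelling_mutation(word, p=0.3, charset=string.ascii_lowercase + string.digits):
    """Randomly insert, delete, replace, or swap characters in `word`."""
    if len(word) < 2 or random.random() > p:
        return word  # leave most words untouched
    chars = list(word)
    i = random.randrange(len(chars))
    op = random.choice(['insert', 'delete', 'replace', 'swap'])
    if op == 'insert':
        chars.insert(i, random.choice(charset))
    elif op == 'delete':
        del chars[i]
    elif op == 'replace':
        chars[i] = random.choice(charset)
    elif op == 'swap' and i + 1 < len(chars):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return ''.join(chars)

# e.g. spelling_mutation('hello') might return 'hell', 'heello', or 'hellp'
```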
@Jack-Lee-NULL did you try to evaluate the LM separately? I'm training it on a smaller dataset (3.1M words, lowercase alphanumeric) with a similar setup (effective batch size = 4096), but word accuracy always saturates below 40%. This is well below the performance of the VM alone (>85%), which converges to an acceptable state much more quickly.
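(Word accuracy here presumably means an exact whole-word match between the LM's corrected output and the ground truth. A minimal sketch of that metric, with hypothetical prediction/label lists:)

```python
def word_accuracy(predictions, ground_truths):
    """Fraction of samples whose predicted word exactly matches the label."""
    assert len(predictions) == len(ground_truths)
    correct = sum(p == g for p, g in zip(predictions, ground_truths))
    return correct / len(ground_truths)

# e.g. word_accuracy(['hello', 'wrold'], ['hello', 'world']) == 0.5
```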
@FangShancheng I'm using the pretrained weights for the LM and a small test script to probe its outputs given arbitrary inputs. I'm getting weird results. Below are some examples:

Input: hello
Input: hello2
Input: hllo
Input: test

For the first sample, the output is as expected. However, for the next two, the outputs are way off. For the last one, the model erroneously corrected it.
@baudm I did. I tried different training methods, and on different datasets (MJ+ST lexicon, Wiki103) I reached a similar conclusion.
@Jack-Lee-NULL what metric are you using for evaluation? When using the pretrained LM weights, I'm getting unexpected results (see my previous comment). Did you try to check the actual individual outputs of the LM? Here's my minimal test script for checking individual inputs.
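(The script referenced above did not survive the page extraction. Below is a rough sketch of such a probe, assuming the ABINet repo's Config, CharsetMapper, onehot, and BCNLanguage interfaces and a hypothetical checkpoint path; verify the exact names and signatures against your checkout before use.)

```python
# Sketch of a probe for the pretrained BCN language model. Interface names
# (Config, CharsetMapper, onehot, BCNLanguage) follow the ABINet repo, but
# the exact signatures and the checkpoint path below are assumptions.
import torch

from utils import Config, CharsetMapper, onehot
from models.model_language import BCNLanguage

config = Config('configs/pretrain_language_model.yaml')
charset = CharsetMapper(config.dataset_charset_path,
                        max_length=config.dataset_max_length + 1)

model = BCNLanguage(config)
# hypothetical checkpoint path; point this at the released LM weights
state = torch.load('workdir/pretrain-language-model/best-model.pth',
                   map_location='cpu')
model.load_state_dict(state['model'])
model.eval()

@torch.no_grad()
def probe(word):
    """Feed one word to the LM and print its 'corrected' output."""
    lengths = torch.tensor([len(word) + 1])  # +1 for the end token
    labels = torch.tensor(charset.get_labels(word, case_sensitive=False))
    tokens = onehot(labels[None], charset.num_classes).float()
    res = model(tokens, lengths)
    ids = res['logits'].softmax(-1).argmax(-1)[0]
    print(f'{word} -> {charset.get_text(ids, trim=True)}')

for w in ['hello', 'hello2', 'hllo', 'test']:
    probe(w)
```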
@FangShancheng @Jack-Lee-NULL I just checked Table 4 of the paper, and the word accuracy of BCN is indeed just above 40%. So I guess the results I posted in my previous comments were expected.