Cannot reproduce reported results for Multi-View BART #17
**Thanks a lot for your timely reply.** I tried training for more epochs, but the eval loss kept increasing, which looks like overfitting, so I stopped training. I also tried the single-view setup by executing train_single_view.sh in train_sh, and its result still seems below the one reported in the paper. The lowest val loss was hit at epoch 4. The corresponding test ROUGE log starts with: 2021-05-31 10:21:37 | INFO | fairseq_cli.train | model bart_large, criterion LabelSmoothedCrossEntropyCriterion
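The stopping criterion described above (stop once validation loss rises past the best epoch) can be sketched in a few lines; all names here are illustrative, not from the repo, and the losses are copied from the log below:

```python
def best_epoch(val_losses, patience=1):
    """Return (1-based epoch with lowest validation loss,
    epoch at which training stops given `patience` epochs
    with no improvement)."""
    best, best_ep, bad = float("inf"), 0, 0
    for ep, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, best_ep, bad = loss, ep, 0
        else:
            bad += 1
            if bad >= patience:
                return best_ep, ep
    return best_ep, len(val_losses)

# Validation losses from epochs 1-5 of the train.log below:
losses = [4.067, 3.943, 3.886, 3.866, 3.882]
print(best_epoch(losses))  # -> (4, 5): best at epoch 4, stop after epoch 5
```

This matches the log: the best checkpoint is saved at epoch 4 (val loss 3.866), and epoch 5's val loss (3.882) is already worse.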
Thanks a lot for the reply. Over the past few days I tried to fetch the previous version of BART, but got nothing: the download link in the fairseq repo seems unchanged since it was created. Did you keep the BART model.pt and the other files (like the vocab) you used in the paper? Thanks a lot! As for the BART baseline, I could not find the version of the baseline you implemented in this repo. The baseline I implemented myself differs from yours (because of different data preprocessing or other details).
I did not keep the version of BART I was using. I noticed this issue because, while preparing this repo and trying to load the trained model (https://drive.google.com/file/d/1Rhzxk1B7oaKi85Gsxr_8WcqTRx23HO-y/view) I had saved, I got a vocab-mismatch error. (I guess they updated the data pre-processing files such as 'encoder.json', 'vocab.bpe' and 'dict.txt'.)
For the BART baseline, I think the easiest way is to change the input files (remove all the segmentations).
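Removing the segmentations amounts to dropping the separator tokens from each source line; a minimal sketch, where `<sep>` is only a placeholder for whatever token the repo's preprocessing actually inserts between segments:

```python
def strip_segmentation(line, marker="<sep>"):
    """Drop segmentation markers so a multi-view input becomes a plain
    single-sequence BART input. `marker` is a placeholder: substitute
    the actual separator token used in your preprocessed files."""
    return " ".join(tok for tok in line.split() if tok != marker)

print(strip_segmentation("hi Tom <sep> are you free today ? <sep>"))
# -> hi Tom are you free today ?
```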
I want to reproduce the result in the paper, but my result is still lower than the reported one. Any suggestion or solution is greatly appreciated.
Following is my train.log:
```
2021-05-30 16:31:37 | INFO | fairseq_cli.train | model bart_large, criterion LabelSmoothedCrossEntropyCriterion
2021-05-30 16:31:37 | INFO | fairseq_cli.train | num. model params: 416791552 (num. trained: 416791552)
2021-05-30 16:31:44 | INFO | fairseq_cli.train | training on 1 GPUs
2021-05-30 16:31:44 | INFO | fairseq_cli.train | max tokens per GPU = 800 and max sentences per GPU = None
2021-05-30 16:31:47 | INFO | fairseq.trainer | loaded checkpoint /home/data_ti4_c/gengx/PGN/DialogueSum/bart.large/bart.large/model.pt (epoch 41 @ 0 updates)
group1:
511
group2:
12
2021-05-30 16:31:47 | INFO | fairseq.trainer | NOTE: your device may support faster training with --fp16
here schedule!
2021-05-30 16:31:47 | INFO | fairseq.trainer | loading train data for epoch 0
2021-05-30 16:31:47 | INFO | fairseq.data.data_utils | loaded 14731 examples from: cnn_dm-bin_2/train.source-target.source
2021-05-30 16:31:47 | INFO | fairseq.data.data_utils | loaded 14731 examples from: cnn_dm-bin/train.source-target.source
2021-05-30 16:31:47 | INFO | fairseq.data.data_utils | loaded 14731 examples from: cnn_dm-bin_2/train.source-target.target
2021-05-30 16:31:47 | INFO | fairseq.tasks.translation | cnn_dm-bin_2 train source-target 14731 examples
!!! 14731 14731
2021-05-30 16:31:48 | WARNING | fairseq.data.data_utils | 5 samples have invalid sizes and will be skipped, max_positions=(800, 800), first few sample ids=[6248, 12799, 12502, 9490, 4269]
True
2021-05-30 16:43:49 | INFO | train | epoch 001 | loss 5.35 | nll_loss 3.418 | ppl 10.686 | wps 549.2 | ups 0.13 | wpb 4165.4 | bsz 158.3 | num_updates 93 | lr 1.395e-05 | gnorm 30.101 | clip 100 | oom 0 | train_wall 706 | wall 725
2021-05-30 16:44:02 | INFO | valid | epoch 001 | valid on 'valid' subset | loss 4.067 | nll_loss 2.182 | ppl 4.537 | wps 1721.8 | wpb 132.8 | bsz 5 | num_updates 93
100%|██████████| 817/817 [02:32<00:00, 5.35it/s]here bpe NONE
here!
Test on val set:
Val {'rouge-1': {'f': 0.466633180309513, 'p': 0.49140138382586446, 'r': 0.48556837413794035}, 'rouge-2': {'f': 0.2283604408486965, 'p': 0.23967396780627975, 'r': 0.2406360296133875}, 'rouge-l': {'f': 0.45239921360854707, 'p': 0.4768419669949866, 'r': 0.46298214107054253}}
2021-05-30 16:46:51 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint_best.pt (epoch 1 @ 93 updates, score 4.067) (writing took 13.230378480977379 seconds)
100%|██████████| 818/818 [02:34<00:00, 5.30it/s]Test on testing set:
Test {'rouge-1': {'f': 0.46401575684701973, 'p': 0.4876149230960775, 'r': 0.4856753031382108}, 'rouge-2': {'f': 0.22558266819086983, 'p': 0.23804809718663697, 'r': 0.2380510356102369}, 'rouge-l': {'f': 0.4507830089574146, 'p': 0.47369243895761404, 'r': 0.463735231898608}}
2021-05-30 17:01:12 | INFO | train | epoch 002 | loss 4.071 | nll_loss 2.233 | ppl 4.702 | wps 371.3 | ups 0.09 | wpb 4165.4 | bsz 158.3 | num_updates 186 | lr 2.79e-05 | gnorm 3.805 | clip 100 | oom 0 | train_wall 690 | wall 1768
2021-05-30 17:01:25 | INFO | valid | epoch 002 | valid on 'valid' subset | loss 3.943 | nll_loss 2.093 | ppl 4.267 | wps 1714.4 | wpb 132.8 | bsz 5 | num_updates 186 | best_loss 3.943
100%|██████████| 817/817 [02:40<00:00, 5.10it/s]here bpe NONE
here!
Test on val set:
Val {'rouge-1': {'f': 0.48628543166111304, 'p': 0.4849894455764677, 'r': 0.5314782969117535}, 'rouge-2': {'f': 0.24927341750101978, 'p': 0.24701375251072502, 'r': 0.2760416390474299}, 'rouge-l': {'f': 0.47186839505679423, 'p': 0.47350846043003486, 'r': 0.5050869486974158}}
2021-05-30 17:04:31 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint_best.pt (epoch 2 @ 186 updates, score 3.943) (writing took 23.186852996994276 seconds)
100%|██████████| 818/818 [02:43<00:00, 5.00it/s]Test on testing set:
Test {'rouge-1': {'f': 0.4786341588084106, 'p': 0.4786291670320602, 'r': 0.5265071766157988}, 'rouge-2': {'f': 0.2388174902966539, 'p': 0.23853517761149043, 'r': 0.2657237453293394}, 'rouge-l': {'f': 0.46179904611032535, 'p': 0.4642844043532549, 'r': 0.496457510321212}}
2021-05-30 17:18:59 | INFO | train | epoch 003 | loss 3.863 | nll_loss 2.03 | ppl 4.083 | wps 363.2 | ups 0.09 | wpb 4165.4 | bsz 158.3 | num_updates 279 | lr 2.95062e-05 | gnorm 3.972 | clip 100 | oom 0 | train_wall 684 | wall 2835
2021-05-30 17:19:12 | INFO | valid | epoch 003 | valid on 'valid' subset | loss 3.886 | nll_loss 2.05 | ppl 4.141 | wps 1659.8 | wpb 132.8 | bsz 5 | num_updates 279 | best_loss 3.886
100%|██████████| 817/817 [02:32<00:00, 5.36it/s]here bpe NONE
here!
Test on val set:
Val {'rouge-1': {'f': 0.48681990701428274, 'p': 0.5021639425012622, 'r': 0.514617894002924}, 'rouge-2': {'f': 0.25267840837027383, 'p': 0.2601730865438312, 'r': 0.2694340002654152}, 'rouge-l': {'f': 0.47263410942856693, 'p': 0.48723857579759916, 'r': 0.4928951658275421}}
2021-05-30 17:22:09 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint_best.pt (epoch 3 @ 279 updates, score 3.886) (writing took 21.174986565019935 seconds)
100%|██████████| 818/818 [02:33<00:00, 5.32it/s]Test on testing set:
Test {'rouge-1': {'f': 0.48538106530910036, 'p': 0.5043341635928553, 'r': 0.5140378691028713}, 'rouge-2': {'f': 0.24471431210883865, 'p': 0.2551209404134376, 'r': 0.2601614283339945}, 'rouge-l': {'f': 0.468263423938775, 'p': 0.4845770907657205, 'r': 0.4894307801054096}}
2021-05-30 17:36:32 | INFO | train | epoch 004 | loss 3.672 | nll_loss 1.826 | ppl 3.545 | wps 367.9 | ups 0.09 | wpb 4165.4 | bsz 158.3 | num_updates 372 | lr 2.8925e-05 | gnorm 2.403 | clip 100 | oom 0 | train_wall 692 | wall 3888
2021-05-30 17:36:45 | INFO | valid | epoch 004 | valid on 'valid' subset | loss 3.866 | nll_loss 2.047 | ppl 4.133 | wps 1719.5 | wpb 132.8 | bsz 5 | num_updates 372 | best_loss 3.866
100%|██████████| 817/817 [02:36<00:00, 5.23it/s]here bpe NONE
here!
Test on val set:
Val {'rouge-1': {'f': 0.4827315494753739, 'p': 0.50526373048346, 'r': 0.5041134859110057}, 'rouge-2': {'f': 0.24840728492515798, 'p': 0.2592307191193046, 'r': 0.2622010621533662}, 'rouge-l': {'f': 0.4675137179138125, 'p': 0.4863900933656205, 'r': 0.4838663279210453}}
2021-05-30 17:39:46 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint_best.pt (epoch 4 @ 372 updates, score 3.866) (writing took 22.199564302980434 seconds)
100%|██████████| 818/818 [02:34<00:00, 5.31it/s]Test on testing set:
Test {'rouge-1': {'f': 0.47986946672852465, 'p': 0.505675068662463, 'r': 0.5010302542324921}, 'rouge-2': {'f': 0.24667273240725318, 'p': 0.2601824293731443, 'r': 0.2595407303752472}, 'rouge-l': {'f': 0.4649056998973506, 'p': 0.4862129586100383, 'r': 0.4809156933510188}}
2021-05-30 17:54:06 | INFO | train | epoch 005 | loss 3.507 | nll_loss 1.646 | ppl 3.131 | wps 367.7 | ups 0.09 | wpb 4165.4 | bsz 158.3 | num_updates 465 | lr 2.83438e-05 | gnorm 2.009 | clip 100 | oom 0 | train_wall 687 | wall 4941
2021-05-30 17:54:18 | INFO | valid | epoch 005 | valid on 'valid' subset | loss 3.882 | nll_loss 2.059 | ppl 4.167 | wps 1719.8 | wpb 132.8 | bsz 5 | num_updates 465 | best_loss 3.866
100%|██████████| 817/817 [02:58<00:00, 4.59it/s]here bpe NONE
here!
Test on val set:
Val {'rouge-1': {'f': 0.4843439334064069, 'p': 0.4676299223115565, 'r': 0.5524586002963356}, 'rouge-2': {'f': 0.24974488509752005, 'p': 0.24019083936995303, 'r': 0.28883689957033615}, 'rouge-l': {'f': 0.46841726367851844, 'p': 0.4529545727532749, 'r': 0.5246822708277491}}
2021-05-30 17:57:33 | INFO | fairseq.checkpoint_utils | saved checkpoint checkpoints/checkpoint_last.pt (epoch 5 @ 465 updates, score 3.882) (writing took 13.954715799016412 seconds)
100%|██████████| 818/818 [02:54<00:00, 4.68it/s]Test on testing set:
Test {'rouge-1': {'f': 0.4831880715673854, 'p': 0.4698315545550697, 'r': 0.5481711003208287}, 'rouge-2': {'f': 0.24967258791161379, 'p': 0.24298097108018568, 'r': 0.28566565132721744}, 'rouge-l': {'f': 0.47041083526959915, 'p': 0.45832861492381066, 'r': 0.5230912021841242}}
(base) gengx@v100-13:~/Multi-View-Seq2Seq-master/Multi-View-Seq2Seq-master/train_sh$
```
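To compare against the paper, the logged ROUGE dicts (fractions, in the shape produced by the Python `rouge` package) can be converted into the R-1/R-2/R-L F1 percentages papers usually report; a small sketch using the epoch-5 test scores from the log above:

```python
def to_pct(rouge_dict):
    """Convert a {'rouge-N': {'f': ..., 'p': ..., 'r': ...}} dict of
    fractions into rounded F1 percentages."""
    return {k: round(v["f"] * 100, 2) for k, v in rouge_dict.items()}

# Epoch-5 test-set scores copied from the log above:
epoch5_test = {
    'rouge-1': {'f': 0.4831880715673854, 'p': 0.4698315545550697, 'r': 0.5481711003208287},
    'rouge-2': {'f': 0.24967258791161379, 'p': 0.24298097108018568, 'r': 0.28566565132721744},
    'rouge-l': {'f': 0.47041083526959915, 'p': 0.45832861492381066, 'r': 0.5230912021841242},
}
print(to_pct(epoch5_test))
# -> {'rouge-1': 48.32, 'rouge-2': 24.97, 'rouge-l': 47.04}
```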