
Fine-tuning the 2.7B Blender model raises an attribute error #3743

Closed
NeonAndrii opened this issue Jun 24, 2021 · 1 comment · Fixed by #3744

@NeonAndrii

Bug description
An attribute error ('TransformerGeneratorAgent' object has no attribute '_fake_forward_pass') is raised while fine-tuning the 2.7B model following the tutorial steps.
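
A quick sanity check on the installed build (a diagnostic sketch, not part of the original report; it only confirms that the attribute referenced in the traceback below is missing from the base class):

# Diagnostic sketch: TransformerGeneratorAgent inherits from TorchGeneratorAgent,
# so if the base class lacks the method, the subclass will too.
from parlai.core.torch_generator_agent import TorchGeneratorAgent

print(hasattr(TorchGeneratorAgent, '_fake_forward_pass'))  # expected: False on the failing install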

Reproduction steps
Run the fine-tuning command from the tutorial:
parlai train_model -t blended_skill_talk,wizard_of_wikipedia,convai2:normalized,empathetic_dialogues --multitask-weights 1,3,3,3 -veps 0.25 --attention-dropout 0.0 --batchsize 128 --model transformer/generator --embedding-size 2560 --ffn-size 10240 --variant prelayernorm --n-heads 32 --n-positions 128 --n-encoder-layers 2 --n-decoder-layers 24 --history-add-global-end-token end --delimiter ' ' --dict-tokenizer bytelevelbpe --dropout 0.1 --fp16 True --init-model zoo:blender/reddit_3B/model --dict-file zoo:blender/reddit_3B/model.dict --label-truncate 128 --log_every_n_secs 10 -lr 7e-06 --lr-scheduler reduceonplateau --lr-scheduler-patience 3 --optimizer adam --relu-dropout 0.0 --activation gelu --model-parallel true --save-after-valid True --text-truncate 128 --truncate 128 --warmup_updates 100 --fp16-impl mem_efficient --update-freq 2 --gradient-clip 0.1 --skip-generation True -vp 10 -vmt ppl -vmm min --model-file /tmp/test_train_27B
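
For debugging, the same run can also be launched from a Python session via ParlAI's script API (a partial sketch, not from the original report; only a few of the CLI options are shown, and each flag maps to a keyword argument with dashes replaced by underscores):

# Partial sketch of an equivalent Python-API launch (subset of the flags above).
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task='blended_skill_talk,wizard_of_wikipedia,convai2:normalized,empathetic_dialogues',
    multitask_weights='1,3,3,3',
    model='transformer/generator',
    init_model='zoo:blender/reddit_3B/model',
    dict_file='zoo:blender/reddit_3B/model.dict',
    batchsize=128,
    fp16=True,
    model_file='/tmp/test_train_27B',
    # ...remaining architecture/optimizer flags from the CLI command above...
)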

Expected behavior
The process proceeds to fine-tune the initial 2.7B Blender model on the blended_skill_talk, wizard_of_wikipedia, convai2, and empathetic_dialogues tasks.

Logs
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/bin/parlai", line 11, in <module>
    load_entry_point('parlai', 'console_scripts', 'parlai')()
  File "/home/ubuntu/ParlAI/parlai/__main__.py", line 14, in main
    superscript_main()
  File "/home/ubuntu/ParlAI/parlai/core/script.py", line 325, in superscript_main
    return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
  File "/home/ubuntu/ParlAI/parlai/core/script.py", line 108, in _run_from_parser_and_opt
    return script.run()
  File "/home/ubuntu/ParlAI/parlai/scripts/train_model.py", line 936, in run
    return self.train_loop.train()
  File "/home/ubuntu/ParlAI/parlai/scripts/train_model.py", line 900, in train
    for _train_log in self.train_steps():
  File "/home/ubuntu/ParlAI/parlai/scripts/train_model.py", line 803, in train_steps
    world.parley()
  File "/home/ubuntu/ParlAI/parlai/core/worlds.py", line 865, in parley
    batch_act = self.batch_act(agent_idx, batch_observations[agent_idx])
  File "/home/ubuntu/ParlAI/parlai/core/worlds.py", line 833, in batch_act
    batch_actions = a.batch_act(batch_observation)
  File "/home/ubuntu/ParlAI/parlai/core/torch_agent.py", line 2214, in batch_act
    output = self.train_step(batch)
  File "/home/ubuntu/ParlAI/parlai/core/torch_generator_agent.py", line 753, in train_step
    self._fake_forward_pass()
AttributeError: 'TransformerGeneratorAgent' object has no attribute '_fake_forward_pass'
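
Until the fix is merged, one possible stopgap (a hypothetical sketch only, not the change in PR #3744) is to patch a placeholder method onto the base agent class before launching training from the same Python process (e.g. via the TrainModel.main sketch above). A no-op placeholder may not reproduce whatever work the real method is meant to perform, so treat this strictly as a local unblocking hack:

# Hypothetical stopgap, NOT the actual fix from PR #3744. A no-op may be unsafe
# for distributed or model-parallel runs if the real method is meant to keep
# workers in sync; use only for local debugging.
from parlai.core.torch_generator_agent import TorchGeneratorAgent

if not hasattr(TorchGeneratorAgent, '_fake_forward_pass'):
    def _fake_forward_pass(self):
        # Matches the zero-argument call site shown at torch_generator_agent.py line 753.
        pass

    TorchGeneratorAgent._fake_forward_pass = _fake_forward_pass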

@stephenroller (Contributor)

Appreciate the speedy report. PR out for fix.

@NeonAndrii NeonAndrii changed the title Finu-tuning the 2.7B Blender model raises an attribute error Fine-tuning the 2.7B Blender model raises an attribute error Jun 24, 2021