
The version of pytorch_lightning #29

Open
Jamesswang opened this issue Nov 30, 2021 · 4 comments

@Jamesswang

Thank you for your open-source code. I tried to run your program on our server, but the pytorch_lightning interface has changed, so I got some errors. May I know which version of pytorch_lightning you and your team use? Thank you!

Looking forward to your reply.

@tannonk commented Nov 30, 2021

@Jamesswang, according to the environment.yml on the master branch, you should be fine with pytorch-lightning==0.8.5.

If you set up a clean environment from this file, e.g. with conda env create -f environment.yml, you should avoid dependency issues. That said, I had to remove the following two lines when setting up the environment (see the sketch after this list):

  • bluert==0.0.1
  • en-core-web-sm==2.3.1
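
For reference, a minimal sketch of those setup steps, run from the repository root; the sed edits are just one way to drop the two pins, and <env-name> is a placeholder for whatever the name: field in environment.yml says:

    # drop the two problematic pins before creating the environment
    sed -i -e '/bluert==0.0.1/d' -e '/en-core-web-sm==2.3.1/d' environment.yml
    # create and activate the conda environment defined in environment.yml
    conda env create -f environment.yml
    conda activate <env-name>
    # if the code needs the spaCy model removed above, it can be installed separately
    python -m spacy download en_core_web_sm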

@Jamesswang (Author)

@tannonk Thank you for your reply.

When I was running the code, I encountered the following error after one epoch:

Traceback (most recent call last):
  File "finetune.py", line 876, in <module>
    main(args)
  File "finetune.py", line 784, in main
    logger=logger,
  File "/home/wanghaotian/PrefixTuning/seq2seq/lightning_base.py", line 795, in generic_train
    trainer.fit(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
    results = self.single_gpu_train(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
    results = self.run_pretrain_routine(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
    self.train()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
    self.run_training_epoch()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in run_training_epoch
    self.run_evaluation(test_mode=False)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 430, in run_evaluation
    self.on_validation_end()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py", line 112, in on_validation_end
    callback.on_validation_end(self, self.get_model())
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 12, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 318, in on_validation_end
    self._save_model(filepath)
TypeError: _save_model() missing 2 required positional arguments: 'trainer' and 'pl_module'

Have you ever encountered this problem when running the code?

@tannonk commented Dec 2, 2021

Yes, I actually ran into the same error and haven't managed to solve that one yet. I'd open a new issue for that...

@XiangLi1999 (Owner)

try pip install pytorch-lightning==0.9.0
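
If the error persists after reinstalling, a quick sanity check is to confirm which version is actually active in the environment (a minimal sketch; the expected output assumes the pin above took effect):

    # confirm the pytorch-lightning version that is actually importable
    python -c "import pytorch_lightning as pl; print(pl.__version__)"   # expect: 0.9.0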
