Getting issue while loading model #1
Comments
did you use data parallel to train the model?
@abhishekkrthakur Yes, I used data parallel while training.
does your bert_base_path have the bert-base-uncased model files?
@abhishekkrthakur Fixed. The issue was with data parallel: locally I was not using "MODEL = nn.DataParallel(MODEL)". It is working now with that. Can you help me understand the use of DataParallel, and whether it is also required after training?
DataParallel is used only when you have multiple GPUs during training. Closing this issue for now. :)
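To expand on the answer above, here is a minimal sketch of the usual PyTorch pattern (the tiny linear model here is a placeholder, not this repo's BERT model). DataParallel is a training-time wrapper that splits each batch across GPUs; one side effect relevant to this thread is that a checkpoint saved from a wrapped model has every state_dict key prefixed with `module.`:

```python
import torch
import torch.nn as nn

# Placeholder model; in this project it would be the BERT-based model.
model = nn.Linear(4, 2)

# Wrap only when more than one GPU is available; for CPU or
# single-GPU inference the plain, unwrapped model is enough.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

# A wrapped model stores its parameters under the submodule name
# "module", so the saved keys look like "module.weight", "module.bias".
wrapped = nn.DataParallel(nn.Linear(4, 2))
print(list(wrapped.state_dict().keys()))  # ['module.weight', 'module.bias']
```

This key prefix is why a checkpoint saved during DataParallel training does not load cleanly into an unwrapped model later.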
@abhishekkrthakur: Can you give any leads on how to load the model? I am using:
device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
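If the checkpoint was saved from a DataParallel-wrapped model on Colab, its keys carry a `module.` prefix, and load_state_dict on a plain model fails with "Missing key(s)" / "Unexpected key(s)" errors. One common workaround, instead of re-wrapping the model in DataParallel just to load it, is to strip the prefix before loading. This is a sketch: `strip_module_prefix` is a hypothetical helper name, and the demo dict stands in for the output of torch.load(PATH, map_location=device).

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to keys."""
    return {
        (key[len("module."):] if key.startswith("module.") else key): value
        for key, value in state_dict.items()
    }

# Stand-in for: state_dict = torch.load(PATH, map_location=torch.device('cpu'))
state_dict = {"module.fc.weight": 0.5, "module.fc.bias": 0.1}
print(strip_module_prefix(state_dict))
# {'fc.weight': 0.5, 'fc.bias': 0.1}
```

After stripping, model.load_state_dict(cleaned_state_dict) should work on a plain (unwrapped) model on CPU.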
Getting the below issue while loading the model on my local system; the model was trained on Colab.