Hi @dungdx34
I'm not sure I fully understand your question.
What exactly do you mean by parallelizing the model that uses BERT? The architecture runs BERT on each sentence separately, so that each document is processed in a single forward pass, if that's what you are asking.
Thank you for your reply!
My problem is that I want to train your model on multiple GPUs, but the project does not support multi-GPU training. In PyTorch, when I want to parallelize a model I normally use torch.nn.DataParallel, for example: self.model = nn.DataParallel(self.model) (see the sketch below).
Please help me!
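For context, this is roughly what I mean (a minimal sketch with a stand-in model, assuming the trainer keeps the model in an attribute like self.model; the real class and attribute names in this repo may differ):

```python
import torch
import torch.nn as nn

# Stand-in model; replace with the repository's actual model class.
class DummyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(768, 2)

    def forward(self, x):
        return self.linear(x)

model = DummyModel()
if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on each visible GPU and
    # splits every input batch along dim 0, gathering outputs back
    # on the default device.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

x = torch.randn(8, 768, device=device)  # dummy batch
out = model(x)                           # forward pass is unchanged
```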
I have a question: how can I parallelize your model, which uses BERT?