In class GraphAttentionLayer, there are two parameter matrices (self.W and self.a). When I try to use multiple GPUs, these parameter matrices stay on cuda:0, while the mini-batch data gets placed on different devices. How can I solve this?
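Below is a minimal sketch of one common way to handle this: register self.W and self.a as nn.Parameter so that torch.nn.DataParallel replicates them onto every GPU along with the rest of the module, and batch the inputs along dim 0 so the scatter step splits graphs rather than nodes. The layer shape, the batched adjacency layout, and the dimensions here are illustrative assumptions, not necessarily this repository's exact code.

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Single-head GAT layer. W and a are registered as nn.Parameter so that
    nn.DataParallel can copy them onto each device replica."""
    def __init__(self, in_features, out_features, alpha=0.2):
        super().__init__()
        self.out_features = out_features
        # Registering the matrices as nn.Parameter (not plain tensors) is what
        # lets DataParallel replicate them to every GPU, not only cuda:0.
        self.W = nn.Parameter(torch.empty(in_features, out_features))
        self.a = nn.Parameter(torch.empty(2 * out_features, 1))
        nn.init.xavier_uniform_(self.W)
        nn.init.xavier_uniform_(self.a)
        self.leakyrelu = nn.LeakyReLU(alpha)

    def forward(self, h, adj):
        # h: (batch, N, in_features), adj: (batch, N, N).
        # Tensors created inside forward inherit the device of the inputs,
        # so each replica runs entirely on its own GPU.
        Wh = h @ self.W                                   # (batch, N, out_features)
        a_src = self.a[:self.out_features]                # (out_features, 1)
        a_dst = self.a[self.out_features:]                # (out_features, 1)
        e = self.leakyrelu(Wh @ a_src + (Wh @ a_dst).transpose(-2, -1))
        e = e.masked_fill(adj == 0, -9e15)                # mask non-edges
        attention = torch.softmax(e, dim=-1)
        return attention @ Wh                             # (batch, N, out_features)

# Wrapping the model replicates all registered parameters per GPU and
# scatters the mini-batch along dim 0.
model = GraphAttentionLayer(16, 8).cuda()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

h = torch.randn(8, 100, 16, device='cuda')    # 8 graphs, 100 nodes, 16 features
adj = torch.ones(8, 100, 100, device='cuda')  # dense adjacency per graph
out = model(h, adj)
```

Note that DataParallel splits along the batch dimension, so this only works if each GPU receives whole graphs; for a single large graph that cannot be batched this way, graph partitioning or DistributedDataParallel with a custom sampler would be needed instead.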