
backwarping for multiple GPUs #17

Closed
ahmadmughees opened this issue Oct 7, 2020 · 3 comments

ahmadmughees commented Oct 7, 2020

Hi, this is amazing work and it saved me a lot of effort. I just want to ask why you have written backwarping outside the Network class while you also define preprocess and basic inside the class. Is there some theoretical background that I am missing?

Note: I am actually trying to run your code on multiple GPUs, and this function is causing a problem:

return torch.nn.functional.grid_sample(input=tensorInput, grid=(Backward_tensorGrid[str(tensorFlow.size())] + tensorFlow).permute(0, 2, 3, 1),\
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!

Thanks.
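For context, a minimal sketch of how this kind of mismatch arises under data parallelism (hypothetical, assuming two visible GPUs; not code from the repository): the cached grid is created on the default device, while a replica's flow tensor lives on another GPU.

import torch

# hypothetical repro: the cached grid lands on the default device (cuda:0) ...
grid = torch.zeros(1, 2, 4, 4).cuda()
# ... while a DataParallel replica receives its inputs on cuda:1
flow = torch.zeros(1, 2, 4, 4).cuda(1)

out = grid + flow  # RuntimeError: Expected all tensors to be on the same device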
sniklaus (Owner) commented Oct 7, 2020

Thank you for your kind words! The location of the code doesn't matter much; I see it more as a utility function, which is why I prefer to keep it separate from the model. Regarding your error message, try changing the backward warping to the following.

def backwarp(tenInput, tenFlow):
	if str(tenFlow.shape) not in backwarp_tenGrid:
		# build a normalized sampling grid in [-1, 1] and cache it by flow shape
		tenHor = torch.linspace(-1.0 + (1.0 / tenFlow.shape[3]), 1.0 - (1.0 / tenFlow.shape[3]), tenFlow.shape[3]).view(1, 1, 1, -1).expand(-1, -1, tenFlow.shape[2], -1)
		tenVer = torch.linspace(-1.0 + (1.0 / tenFlow.shape[2]), 1.0 - (1.0 / tenFlow.shape[2]), tenFlow.shape[2]).view(1, 1, -1, 1).expand(-1, -1, -1, tenFlow.shape[3])

		backwarp_tenGrid[str(tenFlow.shape)] = torch.cat([ tenHor, tenVer ], 1)
	# end

	# scale the flow from pixel units to the [-1, 1] coordinate range expected by grid_sample
	tenFlow = torch.cat([ tenFlow[:, 0:1, :, :] / ((tenInput.shape[3] - 1.0) / 2.0), tenFlow[:, 1:2, :, :] / ((tenInput.shape[2] - 1.0) / 2.0) ], 1)

	return torch.nn.functional.grid_sample(input=tenInput, grid=(backwarp_tenGrid[str(tenFlow.shape)].cuda() + tenFlow).permute(0, 2, 3, 1), mode='bilinear', padding_mode='border', align_corners=False)
# end

Closing for now since this should do the trick. I am happy to reopen this issue if it still persists though, just let me know.

sniklaus closed this as completed on Oct 7, 2020
ahmadmughees (Author) commented Oct 9, 2020

Thanks for your response.

Actually, using .cuda() was pushing the tensor onto the default GPU while the other tensors were on different GPUs. The solution is to push all the tensors onto the same GPU by specifying the device explicitly: .cuda(tenFlow.device)

return torch.nn.functional.grid_sample(input=tenInput, grid=(backwarp_tenGrid[str(tenFlow.shape)].cuda(tenFlow.device) + tenFlow).permute(0, 2, 3, 1), mode='bilinear', padding_mode='border', align_corners=False)
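For what it's worth, a possible variation (a sketch, not code from this thread) is to key the cached grid by device as well as shape and move it once with .to(...), so each GPU replica keeps its own copy and no transfer happens on later calls; backwarp_tenGrid is assumed to be the module-level dict from the repository.

# assumes: import torch and backwarp_tenGrid = {} at module level, as in the repository
def backwarp(tenInput, tenFlow):
	# cache one grid per (shape, device) so every DataParallel replica has a local copy
	tenKey = (str(tenFlow.shape), str(tenFlow.device))

	if tenKey not in backwarp_tenGrid:
		tenHor = torch.linspace(-1.0 + (1.0 / tenFlow.shape[3]), 1.0 - (1.0 / tenFlow.shape[3]), tenFlow.shape[3]).view(1, 1, 1, -1).expand(-1, -1, tenFlow.shape[2], -1)
		tenVer = torch.linspace(-1.0 + (1.0 / tenFlow.shape[2]), 1.0 - (1.0 / tenFlow.shape[2]), tenFlow.shape[2]).view(1, 1, -1, 1).expand(-1, -1, -1, tenFlow.shape[3])

		backwarp_tenGrid[tenKey] = torch.cat([ tenHor, tenVer ], 1).to(tenFlow.device)
	# end

	# scale the flow from pixel units to the [-1, 1] range expected by grid_sample
	tenFlow = torch.cat([ tenFlow[:, 0:1, :, :] / ((tenInput.shape[3] - 1.0) / 2.0), tenFlow[:, 1:2, :, :] / ((tenInput.shape[2] - 1.0) / 2.0) ], 1)

	return torch.nn.functional.grid_sample(input=tenInput, grid=(backwarp_tenGrid[tenKey] + tenFlow).permute(0, 2, 3, 1), mode='bilinear', padding_mode='border', align_corners=False)
# end

With this variant, .cuda(tenFlow.device) in the return is no longer needed, since the cached grid already lives on the same device as the flow.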

ahmadmughees changed the title from "backwarping" to "backwarping for multiple GPUs" on Oct 9, 2020
sniklaus (Owner) commented Oct 9, 2020

Sorry that I didn't catch this (I didn't test it ...), and thank you for sharing your findings!
