
Concatenation operation in InputTransition #1

Closed
RongzhaoZhang opened this issue Mar 26, 2017 · 3 comments
@RongzhaoZhang

If my understanding is right, the concatenation in the InputTransition block should be applied along dim=1 instead of dim=0, because the second dimension is the channel dimension, i.e.

# split input into 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x,
                 x, x, x, x, x, x, x, x), 0)

should be

# split input into 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x,
                 x, x, x, x, x, x, x, x), 1)
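
For illustration, a quick shape check makes the difference concrete (a minimal sketch, assuming a batch of two single-channel 3D volumes):

import torch

# (BatchSize, Channels, Z, Y, X)
x = torch.randn(2, 1, 8, 8, 8)

# dim=0 grows the batch: torch.Size([32, 1, 8, 8, 8])
print(torch.cat([x] * 16, 0).shape)

# dim=1 grows the channels: torch.Size([2, 16, 8, 8, 8])
print(torch.cat([x] * 16, 1).shape)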
@mattmacy
Owner

mattmacy commented Apr 1, 2017

Sorry for the delay. The dimensions are BatchSize, Channels, Z, Y, X. The point is to create 16 channels, not to increase the batch size by 16x.
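
As an aside, the same 16-channel tensor can also be built with repeat instead of an explicit 16-way cat (an equivalent sketch, not code from the repo):

import torch

x = torch.randn(2, 1, 8, 8, 8)    # (B, 1, Z, Y, X)
x16 = x.repeat(1, 16, 1, 1, 1)    # tile dim=1 sixteen times -> (2, 16, 8, 8, 8)
assert torch.equal(x16, torch.cat([x] * 16, 1))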

@Cassieyy

> If my understanding is right, the concatenation in the InputTransition block should be applied along dim=1 instead of dim=0, because the second dimension is the channel dimension, i.e.
>
> # split input into 16 channels
> x16 = torch.cat((x, x, x, x, x, x, x, x,
>                  x, x, x, x, x, x, x, x), 0)
>
> should be
>
> # split input into 16 channels
> x16 = torch.cat((x, x, x, x, x, x, x, x,
>                  x, x, x, x, x, x, x, x), 1)

Meanwhile, x's channel count is changed by conv1, so the original input (whose channel count is 1) needs to be saved and concatenated instead, namely

# split input into 16 channels
x16 = torch.cat((input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x,
                 input_x, input_x, input_x, input_x, input_x, input_x, input_x, input_x), 1)
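
Putting the two points together, a minimal sketch of the corrected block (assuming a plain Conv3d/BatchNorm3d/PReLU stack rather than the repo's exact helper modules) keeps the saved 1-channel input around for the residual:

import torch
import torch.nn as nn

class InputTransition(nn.Module):
    # sketch only: the layer choices here are assumptions, not the repo's exact code
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 16, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm3d(16)
        self.relu1 = nn.PReLU(16)

    def forward(self, x):
        out = self.bn1(self.conv1(x))   # (B, 16, Z, Y, X)
        x16 = torch.cat([x] * 16, 1)    # tile the original 1-channel input along dim=1
        return self.relu1(out + x16)

y = InputTransition()(torch.randn(2, 1, 8, 8, 8))
print(y.shape)  # torch.Size([2, 16, 8, 8, 8])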

@PussyCat0700

> Sorry for the delay. The dimensions are BatchSize, Channels, Z, Y, X. The point is to create 16 channels, not to increase the batch size by 16x.

Thanks very much for the reply!
