Suppose the batch size is B and the input tensor has shape B, N, C, H, W = x.shape.
For frame-wise processing: x = x.view(B*N, C, H, W) -> 2D CNN processing.
For y in stage 2: y = x.view(B, N*C, H, W); y = torch.roll(y, slice_c, 1); y = y.reshape(B*N, C, H, W).
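A runnable sketch of the recipe above. The sizes and the shift amount slice_c are hypothetical placeholders here (in the repo, slice_c comes from the model configuration):

```python
import torch

B, N, C, H, W = 2, 8, 4, 16, 16      # hypothetical batch/frame/feature sizes
slice_c = 1                           # hypothetical channel-shift amount

x = torch.randn(B, N, C, H, W)

# Frame-wise processing: fold the frame axis into the batch axis,
# then run any 2D CNN on the resulting (B*N, C, H, W) tensor.
x_flat = x.view(B * N, C, H, W)

# Stage-2 shift: fuse frames and channels per sample, roll along that
# fused axis so channels shift across frame boundaries within each
# batch element, then unfold back to (B*N, C, H, W).
y = x.view(B, N * C, H, W)
y = torch.roll(y, slice_c, 1)
y = y.reshape(B * N, C, H, W)
```

Because the roll is applied along the fused N*C axis separately for each batch element, features wrap around within a sample and never leak between different samples, which is what makes this shift valid for batch sizes larger than 1.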
Can you explain it in more detail? thanks ~
Hi,
Is it possible to use a batch-size (per GPU) larger than 1?
The model's forward pass currently contains statements such as,
Shift-Net/basicsr/models/archs/gshift_deblur2.py
Line 750 in 816f1b2
It seems that stage0 and stage2 should be amenable to batch processing by a simple rearrangement: x_rearranged = rearrange(x, "b f c h w -> (b f) c h w").
How about stage_1? What modifications are needed for it to operate on a batch size > 1?
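For stage0 and stage2, the flattening asked about above can be checked with plain reshapes: for a contiguous tensor, the einops pattern "b f c h w -> (b f) c h w" is equivalent to a reshape that folds frames into the batch dimension (a minimal sketch with hypothetical sizes):

```python
import torch

b, f, c, h, w = 2, 8, 4, 16, 16      # hypothetical sizes
x = torch.randn(b, f, c, h, w)

# Equivalent of rearrange(x, "b f c h w -> (b f) c h w") for a
# contiguous tensor: fold the frame axis into the batch axis.
x_rearranged = x.reshape(b * f, c, h, w)

# Inverse: recover the per-sample frame axis after 2D processing.
x_back = x_rearranged.reshape(b, f, c, h, w)
```

Row-major reshaping keeps all f frames of each sample adjacent in the flattened batch, so per-frame 2D ops are unaffected by the batch size.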