According to Fig. 3 in Hengshuang's Point Transformer paper, the output feature dimension of each transformer block should be different, e.g. [32, 64, 128, 256, 512]. But your implementation uses a single one, e.g. 512. Any comment on this?
Thanks!
Notice that in Figure 4(a) there are two fully connected layers (fc1 and fc2 in the code) before and after the actual transformer. I think 32, 64, 128, 256, 512 are the dimensions before the first fully connected layer, not of the actual transformer; the transformer's internal dimension is not given in the paper.
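To make the distinction concrete, here is a minimal numpy sketch of the dimension flow this answer describes. It is an assumption-laden illustration, not the repo's actual code: `stage_dims` are the per-stage dims from Fig. 3, `d_model` is a hypothetical shared internal dimension (the paper does not specify it), and `fc1`/`fc2` mirror the layer names mentioned above.

```python
import numpy as np

# Per-stage output dims from Fig. 3 of the paper.
stage_dims = [32, 64, 128, 256, 512]
# Hypothetical shared internal transformer dim (not given in the paper).
d_model = 512

def transformer_block(x, d_in, d_model):
    """Dimension flow only: fc1 -> (dim-preserving attention) -> fc2."""
    rng = np.random.default_rng(0)
    fc1 = rng.standard_normal((d_in, d_model))  # lifts features to d_model
    fc2 = rng.standard_normal((d_model, d_in))  # projects back to d_in
    h = x @ fc1   # (N, d_model): the actual transformer operates at this dim
    # ... attention would act on h here without changing its dimension ...
    return h @ fc2  # (N, d_in)

# Every stage can share the same internal d_model while the
# surrounding per-stage dims differ.
for d in stage_dims:
    x = np.random.default_rng(1).standard_normal((16, d))
    y = transformer_block(x, d, d_model)
    assert y.shape == (16, d)
```

Under this reading, the per-stage numbers describe the features entering fc1, while the core attention always runs at one fixed width.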
Hi, @qq456cvb ,
> According to Fig. 3 in Hengshuang's Point Transformer paper, the output feature dimension of each transformer block should be different, e.g. [32, 64, 128, 256, 512]. But your implementation uses a single one, e.g. 512. Any comment on this?
> Thanks!