It seems that in your code the connection weights are stored as parameters of the generator module. They don't seem to be related to image features, which means at inference, the network uses the same connection weights for all images. Wouldn't this limit the network's ability to handle different images?
Yes. For different images, the only difference between the BSP-trees is in the plane parameters; the connections are the same, which may limit the network's capacity. The difficulty is that the connection matrix T has size p×c, where p is the number of planes and c is the number of convexes. In our implementation, p = 4,096 and c = 256, so p×c = 1,048,576. It is hard to predict that many weights per image without reducing p or c.
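For a sense of the scale gap, here is a minimal NumPy sketch (variable names are illustrative, not taken from the repository) comparing the size of the shared connection matrix T with the per-image plane parameters that the network actually predicts:

```python
import numpy as np

# Values quoted above; names are illustrative, not from the codebase.
p = 4096  # number of planes
c = 256   # number of convexes

# Shared connection matrix T: a learned parameter of the generator,
# identical for every image at inference time.
T = np.zeros((p, c), dtype=np.float32)

# Per-image quantities: only the plane parameters (a, b, c, d for each
# plane) differ across images.
planes_per_image = np.zeros((p, 4), dtype=np.float32)

print(T.size)                 # 1,048,576 weights in T
print(planes_per_image.size)  # 16,384 plane parameters per image
```

Predicting T from image features would mean outputting ~64× more values per image than the plane parameters, which is why the sketch above keeps T as a single shared array.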
Thanks for the clarification. I wonder if you have run any experiments to see whether outputting the weights from image features is feasible in simple cases, for example, the 2D toy problems.