I had a question regarding the MNIST example in your paper, as I do not completely follow it and it is not found in the repository. As opposed to the FAUST dataset, where we are dealing purely with shapes, I guess that MNIST is transformed to lie on a rough grid so that we have two types of information: the features belonging to each node (in this case the raw pixel value, i.e. "data.x") and the 3D position of each node ("data.pos", so to speak). Am I interpreting it right when I say that the gauge-equivariant mesh convolutions are based on the information found in "pos", while they are applied to the raw pixel values?
Thanks in advance!
You're right in that we didn't include that code in this release, as we thought it'd be of limited value to others. You're also right in that interpretation. The positions affect the kernel values, but the input features to the network are only the pixel values.
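To make that separation concrete, here is a minimal sketch (not the authors' code, and not the actual gauge-equivariant kernel) of a position-dependent graph convolution. The function names, the toy linear kernel, and the `theta` parameters are all hypothetical; the point is only that the node positions (`data.pos`) parameterize the kernel weights, while the convolution is applied to the pixel features (`data.x`):

```python
import numpy as np

def kernel(rel_pos, theta):
    # Toy stand-in for a learned continuous kernel: a linear function
    # of the relative position, producing one scalar weight per edge.
    # (The real gauge-equivariant kernel is far more involved.)
    return rel_pos @ theta

def pos_conv(x, pos, edges, theta):
    # x:     (N,) node features -- the raw pixel values ("data.x")
    # pos:   (N, 3) node positions on the grid ("data.pos")
    # edges: list of (src, dst) index pairs
    # theta: (3,) hypothetical kernel parameters
    out = np.zeros_like(x, dtype=float)
    for src, dst in edges:
        # Weight depends only on geometry (positions)...
        w = kernel(pos[src] - pos[dst], theta)
        # ...and is applied to the pixel features.
        out[dst] += w * x[src]
    return out

# Tiny example: three nodes, two edges into node 1.
x = np.array([1.0, 2.0, 3.0])
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
edges = [(0, 1), (2, 1)]
theta = np.array([1.0, 2.0, 3.0])
out = pos_conv(x, pos, edges, theta)  # only node 1 receives messages
```

Changing `pos` changes the kernel weights (and hence the output) even if the pixel values in `x` stay fixed, which is the asymmetry the answer above describes.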
Thank you for the interesting code!