Issue with running inverse_canonicalization #16
Thank you for the nice feedback and the PR, @olayasturias! I will look at the PR later and merge it. However, in your case, when you want to apply the group action to the image, we are dealing with the "scalar" representation type: the group element simply moves pixels around, and each channel is transformed independently. Therefore, we do not need to check whether the number of channels is a multiple of the group size. For more details on understanding induced representations, I recommend Section 2 of the Steerable CNNs paper (and, for further reading, Section 2 of General E(2)-Equivariant Steerable CNNs). Please let me know if the answer needs further clarification (or if I have misunderstood your question); otherwise, feel free to close this issue.
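The scalar-representation case described in this reply can be sketched as follows: each channel of the image is rotated independently, so the channel count (3 for RGB) never needs to match the group order (4 for 90-degree rotations). This is a minimal NumPy illustration, not the library's actual code:

```python
import numpy as np

def apply_c4_action_to_image(image: np.ndarray, k: int) -> np.ndarray:
    """Rotate an image of shape (C, H, W) by k * 90 degrees.

    Scalar representation: the group acts only by moving pixels, so each
    channel is transformed independently and the number of channels C is
    irrelevant -- a 3-channel RGB image works fine with a 4-element group.
    """
    return np.rot90(image, k=k, axes=(-2, -1))

# Round trip: apply the group element k, then undo it with -k.
img = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # toy (C=2, H=3, W=4)
canon = apply_c4_action_to_image(img, k=1)       # shape becomes (2, 4, 3)
restored = apply_c4_action_to_image(canon, k=-1) # back to (2, 3, 4)
assert np.array_equal(img, restored)
```

Note that the channel dimension (2 here) plays no role in the transform, which is exactly why no multiple-of-`num_group` check is needed in the scalar case.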
Thank you for your fast and helpful reply. I understand now, thanks :)
Hello all,
Thanks for sharing such amazing work! I'm very interested in using and implementing your code, particularly for equivariant tasks.
For learning to use your library, I've been playing around with your notebook under tutorials/images/understanding_discrete_canonicalization.ipynb. The notebook is very straightforward and easy to understand, so congrats for that :)
As a step further, I tried to add the function invert_canonicalization to play with it as follows:
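(The original snippet did not survive the page extraction.) Purely as an illustration of the kind of canonicalize-then-invert round trip being discussed, here is a toy stand-in; the class and method names below are hypothetical, not the library's actual API:

```python
import numpy as np

class ToyC4Canonicalizer:
    """Hypothetical stand-in for a discrete-rotation canonicalizer."""

    def canonicalize(self, image: np.ndarray) -> tuple[np.ndarray, int]:
        # Pick the C4 element whose rotation maximizes some score.
        # Toy score for illustration: total brightness of the top-left pixel.
        rotations = [np.rot90(image, k, axes=(-2, -1)) for k in range(4)]
        k = int(np.argmax([r[..., 0, 0].sum() for r in rotations]))
        return rotations[k], k

    def invert_canonicalization(self, image: np.ndarray, k: int) -> np.ndarray:
        # Undo the stored rotation.
        return np.rot90(image, -k, axes=(-2, -1))

canonicalizer = ToyC4Canonicalizer()
img = np.random.default_rng(0).random((3, 8, 8))  # plain RGB image, C=3
canonical, k = canonicalizer.canonicalize(img)
restored = canonicalizer.invert_canonicalization(canonical, k)
assert np.allclose(img, restored)
```

The point of the sketch is only the shape of the workflow: canonicalization selects and applies a group element, and inverting it requires remembering which element was applied.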
When doing this, some problems and questions were raised:

1. Why does `feature_map.shape[1]` need to be a multiple of `num_group` in `get_action_on_image_features` (under `/images/utils.py`)? In this example, these dimensions are 3 and 4, respectively. With that requirement, the image's channel size would have to be 4 or 8, for example. As far as I understand, this function reverses the canonicalization, that is, rotates the image by a certain angle. So why is the expected dimension different from that of the image? Have I misunderstood what this function is aiming to do?
2. What if the output of the `prediction_network` (not used in this particular example but exemplified in your README) has a different shape than the input required by the `prediction_network`? For example, the input is an image, and the output is a vector. In that case, we'd like to apply the rotation action to the vector instead of an input image. Do you store the canonicalization (or inverse canonicalization) transform somewhere so that it can be applied to whichever data shape?
3. I've seen that you have a list of angles under `group_element_dict["group_element"]["rotation"]`; however, I don't quite understand what these angles represent. Shouldn't it be just one angle?

Thank you so much :)
Olaya
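Regarding the first question above: a likely source of the multiple-of-`num_group` requirement is the *regular* representation, where the channel axis is split into blocks that the group permutes in addition to rotating the pixels. The following is a sketch of that interpretation under my own assumptions, not the library's implementation:

```python
import numpy as np

def act_on_regular_features(feat: np.ndarray, k: int, num_group: int = 4) -> np.ndarray:
    """Apply a C4 element to features of shape (C, H, W) in the regular representation.

    The channel axis is read as (C // num_group) blocks of num_group channels;
    the group acts by rotating the pixels AND cyclically permuting the channels
    within each block -- hence C must be a multiple of num_group.
    (Illustrative sketch only.)
    """
    c = feat.shape[0]
    assert c % num_group == 0, "channels must be a multiple of the group order"
    rotated = np.rot90(feat, k, axes=(-2, -1))
    blocks = rotated.reshape(c // num_group, num_group, *rotated.shape[1:])
    permuted = np.roll(blocks, shift=k, axis=1)  # cyclic shift within each block
    return permuted.reshape(c, *rotated.shape[1:])

feat = np.random.default_rng(1).random((8, 5, 5))  # C=8 is a multiple of 4
out = act_on_regular_features(feat, k=1)
back = act_on_regular_features(out, k=-1)
assert np.allclose(feat, back)
```

Under this reading, a plain RGB image (C=3) simply is not a regular-representation feature map, which would explain the shape mismatch the question describes: the check applies to intermediate equivariant features, not to raw images, which live in the scalar representation.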