Improved invariant feature extraction - Improved group pooling #77

Open

Danfoa opened this issue Aug 24, 2023 · 0 comments

Danfoa (Contributor) commented Aug 24, 2023

Hi @Gabri95,

Me again.

I was playing around with the invariant maps, especially GroupPooling. At the moment, what you do is compute the norm of each field as one of the invariant features. This usually yields far fewer invariant features than can be obtained: the maximum number of (norm-based) invariant features for a particular representation is the number of irreps composing that representation.

As you know, every irreducible representation is associated with a G-invariant/stable vector space of the same dimension as the representation. This means that, for compact groups, given a feature vector z, we can obtain an invariant feature by taking the norm of the projection of z onto each G-invariant/stable subspace.

Your framework makes this easy to achieve, since the change-of-basis matrix Q already encodes these projections. The implication is that if your feature spaces are not defined in the appropriate basis, then in order to obtain the largest number of invariant features you have to pay the cost of applying the change of basis, projecting the features onto each G-stable subspace.
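
For concreteness, here is a minimal sketch of that projection, assuming rep is an escnn Representation whose change_of_basis_inv is a dense numpy array and whose irrep ids can be resolved with rep.group.irrep(...):

import torch

def invariant_norms(z: torch.Tensor, rep) -> torch.Tensor:
    # Change to the basis exposing the irreps: z_irr = Q^{-1} z
    Q_inv = torch.as_tensor(rep.change_of_basis_inv, dtype=z.dtype)
    z_irr = z @ Q_inv.T
    # The norm of the projection of z onto each irrep's G-stable subspace is G-invariant
    norms, start = [], 0
    for irr_id in rep.irreps:
        d = rep.group.irrep(*irr_id).size  # dimension of this irrep's subspace
        norms.append(z_irr[..., start:start + d].norm(dim=-1))
        start += d
    return torch.stack(norms, dim=-1)  # one invariant feature per irrep in rep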

I am already using this process as my custom G-invariant feature extractor, with good results. Would it make sense to include an additional parameter in the group pooling module indicating whether the user wants to pay the computational cost of applying the change of basis in order to obtain the largest number of invariant features? This seems reasonable to me; a hypothetical sketch of the API is below.
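
Something like the following (the keyword name expose_irreps is just a placeholder suggestion, not an existing escnn argument):

# Hypothetical API: opt in to the change-of-basis cost to get one invariant feature per irrep
gpool = GroupPooling(field_type, expose_irreps=True)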

Once in the basis exposing the irreps (an Isotypic Basis), the computation of the invariant features would look something like this:

import numpy as np
import torch
from escnn.nn import FieldType


def compute_invariant_features(x: torch.Tensor, field_type: FieldType) -> torch.Tensor:
    n_inv_features = len(field_type.irreps)
    # TODO: Ensure isotypic basis, i.e., irreps of the same type are consecutive to each other.
    inv_features = []
    for field_start, field_end, rep in zip(field_type.fields_start, field_type.fields_end, field_type.representations):
        # Each field here represents a representation of an isotypic subspace, i.e., a rep
        # composed of a single irrep type.
        x_field = x[..., field_start:field_end]
        num_G_stable_spaces = len(rep.irreps)  # Number of G-invariant features = multiplicity of the irrep
        # Again, this assumes we are already in an isotypic basis
        assert len(np.unique(rep.irreps, axis=0)) == 1, "This only works for now on the Isotypic Basis"
        # This basis is useful because we can apply the norm in a vectorized way.
        # Reshape features to [batch, num_G_stable_spaces, num_features_per_G_stable_space]
        x_field_p = torch.reshape(x_field, (x_field.shape[0], num_G_stable_spaces, -1))
        # Compute G-invariant measures as the norm of the features in each G-stable subspace
        inv_field_features = torch.norm(x_field_p, dim=-1)
        # Append to the list of invariant features
        inv_features.append(inv_field_features)
    # Concatenate all the invariant features
    inv_features = torch.cat(inv_features, dim=-1)
    assert inv_features.shape[-1] == n_inv_features, f"Expected {n_inv_features}, got {inv_features.shape[-1]}"
    return inv_features
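
As a usage sketch (assuming FieldType exposes the fields_start / fields_end offsets used above; the isotypic field here is built with directsum from three copies of a single irrep):

from escnn.group import CyclicGroup, directsum
from escnn.gspaces import no_base_space

G = CyclicGroup(8)
# Isotypic field: 3 consecutive copies of the 2-dimensional frequency-1 irrep
iso_rep = directsum([G.irrep(1)] * 3)
field_type = FieldType(no_base_space(G), [iso_rep])

x = torch.randn(32, field_type.size)             # batch of 32 feature vectors
inv = compute_invariant_features(x, field_type)  # shape (32, 3): one feature per irrep copy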

I could also contribute this, together with what we have previously discussed. Let me know what you think.
