
questions regarding coord2radial #7

Open
pengzhangzhi opened this issue Feb 12, 2023 · 1 comment
@pengzhangzhi

Hi. I find that the way you calculate the radial differs from similar works such as EGNN.
Your strategy: the radial is the matrix of inner products of the coordinate differences.

import torch
import torch.nn.functional as F

def coord2radial(edge_index, coord):
    row, col = edge_index
    coord_diff = coord[row] - coord[col]  # [n_edge, n_channel, d]
    # Gram matrix of channel-wise coordinate differences
    radial = torch.bmm(coord_diff, coord_diff.transpose(-1, -2))  # [n_edge, n_channel, n_channel]
    # normalize radial along the edge dimension
    radial = F.normalize(radial, dim=0)  # [n_edge, n_channel, n_channel]
    return radial, coord_diff
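As a side note, the inner-product radial is invariant under global rotations, which can be checked with a minimal sketch (random data and a QR-based random rotation; hypothetical test code, not from the repository):

```python
import torch
import torch.nn.functional as F

def coord2radial(edge_index, coord):
    row, col = edge_index
    coord_diff = coord[row] - coord[col]                          # [n_edge, n_channel, d]
    radial = torch.bmm(coord_diff, coord_diff.transpose(-1, -2))  # [n_edge, n_channel, n_channel]
    radial = F.normalize(radial, dim=0)
    return radial, coord_diff

coord = torch.randn(5, 3, 3)                      # 5 nodes, 3 channels, 3-D coordinates
edge_index = (torch.tensor([0, 1, 2]), torch.tensor([3, 4, 0]))
Q, _ = torch.linalg.qr(torch.randn(3, 3))         # random orthogonal (rotation/reflection) matrix

radial, _ = coord2radial(edge_index, coord)
radial_rot, _ = coord2radial(edge_index, coord @ Q.T)  # rotate every channel vector
print(torch.allclose(radial, radial_rot, atol=1e-5))   # True
```

The invariance follows from the Gram-matrix form: rotating the differences by Q leaves coord_diff @ Q.T @ Q @ coord_diff.T unchanged.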

EGNN's strategy: the radial is the squared distance between the two nodes.

    def coord2radial(self, edge_index, coord):
        row, col = edge_index
        coord_diff = coord[row] - coord[col]
        radial = torch.sum(coord_diff**2, 1).unsqueeze(1)

        if self.normalize:
            norm = torch.sqrt(radial).detach() + self.epsilon
            coord_diff = coord_diff / norm

        return radial, coord_diff
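For what it's worth, with a single channel the inner-product radial reduces (before normalization) to EGNN's squared distance — a quick sketch with random data (hypothetical variable names, not repository code):

```python
import torch

coord = torch.randn(6, 1, 3)          # single-channel coordinates, as in EGNN
row = torch.tensor([0, 1, 2])
col = torch.tensor([3, 4, 5])
coord_diff = coord[row] - coord[col]  # [n_edge, 1, d]

# inner-product radial (unnormalized): [n_edge, 1, 1]
radial_ip = torch.bmm(coord_diff, coord_diff.transpose(-1, -2))
# EGNN-style squared distance: [n_edge, 1]
radial_sq = torch.sum(coord_diff.squeeze(1) ** 2, dim=-1, keepdim=True)

print(torch.allclose(radial_ip.squeeze(-1), radial_sq))  # True
```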

I think your radial can represent the relative orientation of two multi-channel residues, whereas EGNN's radial represents only the distance. Is this reasonable? What do you think it represents, and what was your motivation for defining it this way instead of following EGNN?
The way you normalize the radial is also quite interesting: you normalize it along the n_edge dimension (similar to a "batch" dimension).
Why? Have you tried removing the normalization?

Best,
Zhangzhi

@kxz18 (Collaborator) commented Feb 16, 2023

Thanks for the insightful questions.

  1. We adopt the inner-product radial for its representational completeness. EGNN is designed for single-channel nodes, and it is unclear whether the norm-based radial can fit arbitrary orthogonality-equivariant functions on multi-channel nodes. In contrast, the representational completeness of the inner-product radial has already been established in "Equivariant Graph Mechanics Networks with Constraints" (Section 3.2).
  2. As for the normalization, we found it necessary for numerical stability: if it is removed, training becomes unstable and the loss easily turns into NaN. We did not explore different forms of normalization in depth, though. I have just tried EGNN's normalization strategy, and the performance on CDR design did not change significantly, so either may be fine.
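For reference, the EGNN-style alternative mentioned above might look something like the following sketch (a hypothetical adaptation to multi-channel inputs; the function name and details are assumptions, not the code actually used in the experiments):

```python
import torch

def coord2radial_egnn_style(edge_index, coord, epsilon=1e-8):
    # Hypothetical variant: keep the multi-channel inner-product radial,
    # but normalize coord_diff per channel by its (detached) norm, as EGNN does.
    row, col = edge_index
    coord_diff = coord[row] - coord[col]                          # [n_edge, n_channel, d]
    radial = torch.bmm(coord_diff, coord_diff.transpose(-1, -2))  # [n_edge, n_channel, n_channel]
    norm = coord_diff.norm(dim=-1, keepdim=True).detach() + epsilon
    coord_diff = coord_diff / norm                                # unit length per channel
    return radial, coord_diff

coord = torch.randn(5, 2, 3)
edge_index = (torch.tensor([0, 1]), torch.tensor([2, 3]))
radial, coord_diff = coord2radial_egnn_style(edge_index, coord)
print(radial.shape)  # torch.Size([2, 2, 2])
```

Detaching the norm, as EGNN does, stops gradients from flowing through the normalization factor, which is one source of its stability.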
