[Roadmap] torch_geometric.nn.aggr 🚀 #4712

Closed
25 of 26 tasks
rusty1s opened this issue May 25, 2022 · 6 comments
Comments

@rusty1s
Member

rusty1s commented May 25, 2022

🚀 The feature, motivation and pitch

The goal of this roadmap is to unify the concepts of aggregation inside GNNs across both MessagePassing and global readouts. Currently, these concepts are separated, e.g., via MessagePassing.aggr = "mean" and global_mean_pool(...), although the underlying implementation is the same. In addition, some aggregations are only available as global pooling operators (global_sort_pool, Set2Set, ...), while, in theory, they are also applicable during MessagePassing (and vice versa, e.g., SAGEConv.aggr = "lstm"). A further goal is the combination of multiple aggregations, which is useful both in MessagePassing (PNAConv, EGConv, ...) and in global readouts.

As such, we want to provide re-usable aggregations as part of a newly defined torch_geometric.nn.aggr.* package. Unifying these concepts also lets us perform optimizations and provide specialized implementations in a single place (e.g., fused kernels for multiple aggregations). After integration, the following usage becomes possible:

class MyConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr="mean")

class MyConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr=LSTMAggr(channels=...))

class MyConv(MessagePassing):
    def __init__(self):
        super().__init__(aggr=MultiAggr("mean", "max", Set2Set(channels=...)))
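To make the proposal above concrete, here is a minimal, framework-agnostic sketch of what such a re-usable aggregation interface could look like. The names (Aggregation, MeanAggr) and the pure-Python scatter loop are illustrative assumptions, not the final torch_geometric.nn.aggr API:

```python
# Hypothetical sketch of a unified aggregation interface.
# A real implementation would operate on tensors via scatter ops;
# plain Python lists are used here only to show the contract.

from typing import List


class Aggregation:
    """Base class: reduce `values` per destination `index` into `dim_size` slots."""

    def __call__(self, values: List[float], index: List[int],
                 dim_size: int) -> List[float]:
        raise NotImplementedError


class MeanAggr(Aggregation):
    def __call__(self, values, index, dim_size):
        sums = [0.0] * dim_size
        counts = [0] * dim_size
        for v, i in zip(values, index):  # scatter-style accumulation
            sums[i] += v
            counts[i] += 1
        # Guard against empty slots (isolated nodes).
        return [s / c if c > 0 else 0.0 for s, c in zip(sums, counts)]
```

The same object could then be passed either to MessagePassing(aggr=...) or used as a global readout, which is exactly the unification the roadmap is after. For example, MeanAggr()([1.0, 3.0, 5.0], [0, 0, 1], dim_size=2) reduces the first two values into slot 0 and the third into slot 1, yielding [2.0, 5.0].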

Roadmap

The general roadmap looks as follows (ideally, each item implemented in a separate PR):

Any feedback and help from the community is highly appreciated!

cc: @lightaime @Padarn

@rusty1s rusty1s changed the title [Roadmap] 🚀 torch_geometric.nn.aggr.* [Roadmap] torch_geometric.nn.aggr.* 🚀 May 25, 2022
@rusty1s rusty1s pinned this issue May 25, 2022
@Padarn
Contributor

Padarn commented May 25, 2022

Looks great @rusty1s! I'll try to pick up some of the smaller tasks this weekend

@lightaime
Contributor

Added some simple ones: MaxAggr, MinAggr, SumAggr, SoftmaxAggr, PowermeanAggr, VarAggr, StdAggr.

@rusty1s rusty1s changed the title [Roadmap] torch_geometric.nn.aggr.* 🚀 [Roadmap] torch_geometric.nn.aggr 🚀 Jun 7, 2022
@Padarn
Contributor

Padarn commented Jul 11, 2022

I plan to pick up a couple of the tasks - hope you guys don't mind me editing the issue to make clear what I plan on doing (I'll stick to smaller PRs if possible since I typically don't have much time during the week).

@Padarn
Contributor

Padarn commented Jul 17, 2022

Kernel fusion: Optimize aggregations, e.g., by computing multiple aggregations in parallel (at best discussed in a separate issue)

Do we have an open issue on this? I'd be interested to understand a bit more about what we're thinking here. Is it mostly for the case where we want to (for example) compute both a sum and mean?

@rusty1s
Member Author

rusty1s commented Jul 17, 2022

Yes, indeed. There are no clear plans for the implementation yet, though. It will likely depend on PyTorch fusing these ops as part of TorchScript, or on us providing special CUDA kernels.
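The payoff of fusing can be sketched without any custom kernels: instead of one scatter pass per aggregation, a single traversal of the edge values can feed several reductions at once. The function below is a hypothetical pure-Python stand-in (not the actual CUDA/TorchScript implementation) that computes sum, mean, and max in one pass:

```python
# Hedged sketch of the kernel-fusion idea: one pass over the (value, index)
# pairs updates all accumulators, rather than three separate scatter calls.

import math


def fused_sum_mean_max(values, index, dim_size):
    sums = [0.0] * dim_size
    counts = [0] * dim_size
    maxs = [-math.inf] * dim_size
    for v, i in zip(values, index):  # single traversal feeds all reductions
        sums[i] += v
        counts[i] += 1
        if v > maxs[i]:
            maxs[i] = v
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return sums, means, maxs
```

On GPU the analogous win is reading the input features once and writing several outputs, which is what a fused kernel (or a TorchScript-fused graph) would buy over dispatching one scatter per aggregation.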

@rusty1s
Member Author

rusty1s commented Aug 12, 2022

Thanks everyone for the hard work @lightaime @Padarn. I think the final outcome looks fantastic - many cool things to promote in our upcoming release :)

@rusty1s rusty1s closed this as completed Aug 12, 2022
@rusty1s rusty1s unpinned this issue Aug 12, 2022