Support for large graphs? #19
You can implement the message passing operation more efficiently using the
torch.scatter_add operator. You will need to provide an edge list to this
function, but this should be easy to pre-compute. Note that you will have
to enforce some kind of sparsity template on the graph, i.e. you will most
likely have to consider only a subset of the N^2 edges in the graph, as
10,000^2 is likely going to be way too much both in terms of memory and
computational complexity.
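
A minimal sketch of what such an edge-list-based aggregation could look like (the sizes, the `edge_index` layout, and using the source embedding as the message are illustrative assumptions, not the repository's actual implementation):

```python
import torch

# Illustrative sizes: N nodes with F-dim features and E precomputed edges.
N, F, E = 10_000, 64, 100_000

h = torch.randn(N, F)                     # node embeddings
edge_index = torch.randint(0, N, (2, E))  # [2, E]; row 0 = source, row 1 = target
src, dst = edge_index

# One message per edge. Here it is just the source embedding;
# in practice it would be the output of an edge network.
messages = h[src]                         # [E, F]

# Aggregate messages at each target node:
# agg[dst[e]] += messages[e] for every edge e.
agg = torch.zeros(N, F).scatter_add(0, dst.unsqueeze(-1).expand(-1, F), messages)
```

Memory and compute then scale with the number of edges E rather than N^2, which is the point of imposing a sparsity template (e.g. k-nearest-neighbour edges) on the graph.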
On Sat, 3 Aug 2019 at 07:07, Omar wrote:
Many thanks for the interesting work.
Indeed, I am trying to use your model on large biological graphs (more than 10K nodes), but I am facing memory limits.
Basically, you are using one-hot encoding for all the edges in a fully connected graph to exchange the messages and to facilitate the optimization of the ELBO. For very large graphs such an encoding is not an option.
I tried using sparse tensors, but the missing strides for torch.matmul (which requires a contiguous representation of the data) and the unsupported broadcasting for matrix multiplication with torch.mm limited my efforts to patch your implementation.
Do you have an idea how we could extend the application of your model to large graphs?
Thank you very much in advance.
Thanks again, Thomas, for the prompt reply. Best regards.