Does spspmm operation support autograd? #45
That's the only function that does not have proper autograd support. Gradients for sparse-sparse matrix multiplication are quite difficult to obtain (since they are usually dense). I had a working, but slow implementation up to …
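To make the difficulty concrete: for C = A @ B, the gradient dL/dA = dL/dC @ B^T is dense in general, even when A and B are sparse. Below is a rough sketch of a slow but correct backward (my own illustration, not the implementation mentioned above; it assumes the `torch_sparse.spspmm` signature `(indexA, valueA, indexB, valueB, m, k, n)`): it densifies the upstream gradient and masks it back to each input's fixed sparsity pattern.

```python
import torch
from torch_sparse import spspmm


class SpSpMM(torch.autograd.Function):
    """Sparse-sparse matmul with a slow reference backward (sketch only)."""

    @staticmethod
    def forward(ctx, indexA, valueA, indexB, valueB, m, k, n):
        indexC, valueC = spspmm(indexA, valueA, indexB, valueB, m, k, n)
        ctx.m, ctx.k, ctx.n = m, k, n
        ctx.save_for_backward(indexA, valueA, indexB, valueB, indexC)
        ctx.mark_non_differentiable(indexC)
        return indexC, valueC

    @staticmethod
    def backward(ctx, grad_indexC, grad_valueC):
        indexA, valueA, indexB, valueB, indexC = ctx.saved_tensors
        m, k, n = ctx.m, ctx.k, ctx.n
        # Densify the upstream gradient and both operands. This is what
        # makes the approach slow and memory-hungry: dL/dC is dense in
        # general, no matter how sparse A and B are.
        grad_C = torch.sparse_coo_tensor(indexC, grad_valueC, (m, n)).to_dense()
        A = torch.sparse_coo_tensor(indexA, valueA, (m, k)).to_dense()
        B = torch.sparse_coo_tensor(indexB, valueB, (k, n)).to_dense()
        # Mask the dense gradients back to the inputs' sparsity patterns.
        grad_valueA = (grad_C @ B.t())[indexA[0], indexA[1]]
        grad_valueB = (A.t() @ grad_C)[indexB[0], indexB[1]]
        return None, grad_valueA, None, grad_valueB, None, None, None
```

The two dense products are the crux: keeping them sparse without losing correctness is what makes a fast implementation hard.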
Hey! Thanks for your great work! I have installed the … Thanks a lot again!
Thank you so much for raising this question! It has been troubling me for almost a week!
Sorry for the inconvenience. I have plans to add …
Do you have any updates on autograd support?
I'm parameterizing the weights of a sparse matrix to treat it as a locally connected network for a sparsely connected MLP implementation. Could I still run a backward pass to update these weights after calling matmul between this sparse matrix and a dense input?
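For the sparse-dense case asked about here, gradients do reach the value tensor. A minimal sketch, assuming the `torch_sparse` `SparseTensor`/`matmul` API:

```python
import torch
from torch_sparse import SparseTensor, matmul

row = torch.tensor([0, 0, 1])
col = torch.tensor([0, 2, 1])
value = torch.randn(3, requires_grad=True)  # learnable sparse weights

adj = SparseTensor(row=row, col=col, value=value, sparse_sizes=(2, 3))
x = torch.randn(3, 4)  # dense input

out = matmul(adj, x)   # sparse @ dense -> dense, autograd-tracked
out.sum().backward()
print(value.grad)      # gradients arrive at the value tensor
```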
Nevermind, already seeing some nice implementations out there!
Does spspmm still lack autograd support?
This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?
Does spspmm still lack autograd support? @rusty1s, it seems to use SparseTensor, which is supposed to be fully supported by autograd?
Sadly yes :(
Is there an alternative? It is difficult to get earlier versions of torch_sparse that have this to work on newer CUDA versions :(
There isn't a workaround except for installing an earlier version. If you are interested, we can try to bring it back with your help. WDYT?
@rusty1s Sounds good. Why don't we start by putting back your existing implementation? Isn't that better than having nothing?
Here's the roadmap to achieve this: …
With PyTorch 1.12, I assume you can also try to use the sparse-matrix multiplication from PyTorch directly. PyTorch recently integrated better sparse matrix support into its library :)
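A minimal sketch of that route, assuming a recent PyTorch build; whether gradients actually flow through a sparse-sparse product depends on the version, so it is worth verifying:

```python
import torch

iA = torch.tensor([[0, 0, 1], [0, 2, 1]])
vA = torch.randn(3, requires_grad=True)
A = torch.sparse_coo_tensor(iA, vA, (2, 3))

iB = torch.tensor([[0, 1, 2], [1, 0, 0]])
vB = torch.randn(3, requires_grad=True)
B = torch.sparse_coo_tensor(iB, vB, (3, 2))

C = torch.sparse.mm(A, B)        # sparse @ sparse -> sparse
C.to_dense().sum().backward()    # backward support here is version-dependent
print(vA.grad, vB.grad)
```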
Hi, you say autograd is supported for the value tensors, but it seems it doesn't work in spspmm.
Like this:
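(A hedged reconstruction; the original snippet was not preserved, and the indices and sizes below are made up for illustration.)

```python
import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1], [0, 2, 1]])
valueA = torch.randn(3, requires_grad=True)
indexB = torch.tensor([[0, 1, 2], [1, 0, 0]])
valueB = torch.randn(3, requires_grad=True)

# A is 2 x 3, B is 3 x 2
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 2, 3, 2)
print(valueC.grad_fn)  # reportedly no autograd graph is recorded
```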
And the answer is that no gradient flows back to the value tensors.
In my case, I want to parameterize both the sparse adjacency matrix and the feature matrix in a GCN, so both inputs need to be differentiable. I wonder whether this is a bug or just the way it is.
Regards.