[WIP] Add PyTorch backend for soft-DTW #431
Conversation
All checks of the tslearn main branch have passed.
Codecov Report
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #431      +/-   ##
==========================================
- Coverage   94.53%   93.02%   -1.52%
==========================================
  Files          62       67       +5
  Lines        4743     5663     +920
==========================================
+ Hits         4484     5268     +784
- Misses        259      395     +136
```

View full report in Codecov by Sentry.
Do you think that in general
Hello @kushalkolar,
I am looking for ideas to convert the
My problem is the following:
Then I can call this function using either a
Some quick thoughts: wouldn't it be better to make a pure torch implementation, at least for the most compute-intensive steps, rather than having lots of back and forth with numpy? sDTW could be a great loss function for some use cases, and I wonder if constant conversion between numpy arrays and torch (or jax, etc.) tensors could slow things down.
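For illustration, a pure-PyTorch forward recursion for soft-DTW could look like the sketch below. This is not the PR's actual implementation, and the function names are mine; it only shows that the dynamic program can stay entirely in torch tensors, so no numpy round-trips are needed and the result is differentiable.

```python
import torch


def _softmin3(a, b, c, gamma):
    # differentiable soft-minimum of three scalars via log-sum-exp
    vals = torch.stack([a, b, c])
    return -gamma * torch.logsumexp(-vals / gamma, dim=0)


def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW value between two series x of shape (n, d) and y of shape (m, d).

    A naive O(n * m) Python loop for clarity; optimized implementations
    compile this inner loop (numba on the numpy path, CUDA kernels on torch).
    """
    n, m = x.shape[0], y.shape[0]
    dist = torch.cdist(x, y) ** 2  # squared Euclidean pairwise distances
    r = torch.full((n + 1, m + 1), float("inf"), dtype=x.dtype)
    r[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            r[i, j] = dist[i - 1, j - 1] + _softmin3(
                r[i - 1, j], r[i, j - 1], r[i - 1, j - 1], gamma
            )
    return r[n, m]
```

Since every intermediate value stays a torch tensor, the returned value can be used directly as a training loss and backpropagated through.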
However, the
I could remove all
@rtavenar, do you know how we could proceed for the auxiliary
Hello @kushalkolar,
Even if I choose to use directly the
Numba does not support a
Likewise, we cannot write in a
as
Maybe you can just (at least for a start) have constant integer values that identify the backends. Would that help, or is the problem due to calling the Backend builder? |
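A sketch of that suggestion, with hypothetical names and codes (not tslearn's actual API): a numba-compiled kernel cannot receive a Backend object, but it can receive a plain integer, so dispatch on constant integer codes can happen before, or inside, the compiled region.

```python
import numpy as np

# Hypothetical integer codes for the backends; a plain int can be passed
# into a numba-compiled function, unlike an arbitrary Backend object.
NUMPY_BACKEND = 0
PYTORCH_BACKEND = 1


def _sum_numpy(values):
    # stand-in for a numba.njit-compiled kernel: it only touches numpy
    # arrays and scalars, so it would compile in nopython mode as-is
    total = 0.0
    for v in values:
        total += v
    return total


def backend_sum(values, backend=NUMPY_BACKEND):
    # dispatch on the integer code instead of on a Backend object
    if backend == NUMPY_BACKEND:
        return _sum_numpy(np.asarray(values, dtype=np.float64))
    elif backend == PYTORCH_BACKEND:
        import torch  # imported lazily, so numpy-only users don't need torch
        return torch.as_tensor(values, dtype=torch.float64).sum()
    raise ValueError("unknown backend code")
```

The integer codes avoid constructing or passing any backend object at the numba boundary, which is exactly where the Backend builder was causing trouble.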
Hello @rtavenar, the problem is due to calling the
The tests have passed under MacOS Python 3.9 (skipping the failing tests).
The tests have passed under MacOS Python 3.9 (skipping the failing tests).
I will now restore the last good commit bf7bb02 before investigating the failing test under MacOS for Python 3.9 (segmentation fault).
Force-pushed from 29bed46 to bf7bb02.
The
as well as the test on MacOS for Python 3.9:
Only the
…st.mark.skipif in test_metrics.py
The test for MacOS Python 3.9 has failed.
All checks have passed!
Hello @rtavenar and @johannfaouzi, I did not find the reason for the
Other tests might also fail, with a very low failure rate. Could you please review my PR?
All checks have passed.
This is ready for merge, congrats @YannCabanes on this huge effort!
This PR aims to make the files soft_dtw_fast.py and softdtw_variants.py compatible with the PyTorch backend.
We take inspiration from the following GitHub repository:
https://github.com/Sleepwalking/pytorch-softdtw/blob/master/soft_dtw.py
GitHub repository of Mehran Maghoumi on soft-DTW using PyTorch with CUDA:
https://github.com/Maghoumi/pytorch-softdtw-cuda/blob/master/soft_dtw_cuda.py
An introduction to Dynamic Time Warping can be found at:
https://rtavenar.github.io/blog/dtw.html
An introduction to the differentiability of DTW and the case of soft-DTW can be found at:
https://rtavenar.github.io/blog/softdtw.html
We also take inspiration from the Python package geomstats [JMLR:v21:19-027] (https://github.com/geomstats/geomstats/), which implements machine learning on Riemannian manifolds, for the design of the multiple-backend functions.
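As a minimal sketch of this geomstats-style pattern (all names here are hypothetical, not tslearn's or geomstats' actual API): a backend object exposes a common subset of array operations, and each public function selects the backend from the type of its input, so numpy-only users never import torch.

```python
import numpy as np


class NumpyBackend:
    # minimal backend facade exposing a common subset of array operations
    @staticmethod
    def array(data):
        return np.asarray(data, dtype=float)

    @staticmethod
    def exp(x):
        return np.exp(x)


def select_backend(x):
    # choose the backend from the input type; the torch branch is only
    # taken (and torch only imported) when a torch.Tensor comes in
    if type(x).__module__.startswith("torch"):
        import torch

        class TorchBackend:
            array = staticmethod(
                lambda data: torch.as_tensor(data, dtype=torch.float64)
            )
            exp = staticmethod(torch.exp)

        return TorchBackend
    return NumpyBackend


def rbf(x):
    # a backend-agnostic function written once against the facade:
    # returns exp(-x**2) as an array/tensor of the caller's backend
    be = select_backend(x)
    x = be.array(x)
    return be.exp(-x * x)
```

With this layout, functions like rbf return numpy arrays for numpy inputs and torch tensors for torch inputs, which is the behavior the PR wants for the soft-DTW routines.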
References
[JMLR:v21:19-027] Nina Miolane, Nicolas Guigui, Alice Le Brigant, Johan Mathe, Benjamin Hou, Yann Thanwerdas, Stefan Heyder, Olivier Peltre, Niklas Koep, Hadi Zaatiti, Hatem Hajri, Yann Cabanes, Thomas Gerald, Paul Chauchat, Christian Shewmake, Daniel Brooks, Bernhard Kainz, Claire Donnat, Susan Holmes and Xavier Pennec. Geomstats: A Python Package for Riemannian Geometry in Machine Learning. Journal of Machine Learning Research, 21(223):1-9, 2020. http://jmlr.org/papers/v21/19-027.html