I hope you don't mind these issues I am opening up. The exact DMD function is pretty slow for me because my data has about 15000 samples with 2500 features. I did some profiling, and the lines involving `transpose` are the most expensive steps. I think Julia arrays are memory-contiguous along the columns, so one solution would be to assume that the data is already given with time as the second index. Most datasets I have worked with are stored on disk this way anyway.
I am not sure what the best choice is in terms of memory layout for the optimized DMD solvers, though.
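A minimal sketch of the layout point (variable names here are illustrative, not from RobustDMD.jl): because Julia arrays are column-major, storing snapshots as columns (time as the second index) makes the shifted data matrices that DMD needs contiguous, allocation-free views, with no transposes at all.

```julia
# Julia arrays are column-major: elements within a column are contiguous.
# With time as the second index, each snapshot is one contiguous column.
D = randn(250, 1500)       # features × snapshots (the issue's data is 2500 × 15000)
X = @view D[:, 1:end-1]    # snapshots 1 … m-1
Y = @view D[:, 2:end]      # snapshots 2 … m
# Accessing a single snapshot is a contiguous, allocation-free view:
x1 = @view X[:, 1]
```

If the data arrive with time as the first index instead, every snapshot access strides across memory, which is where the expensive `transpose` calls come from.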
nbren12 added a commit to nbren12/RobustDMD.jl that referenced this issue on May 16, 2018:
1. Inlined the computation of the trapezoid rule and avoided unnecessary transpose calls.
2. Used an iterative SVD solver rather than a full SVD (50% improvement in run time).

Resolves UW-AMO#2
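The commit message mentions inlining the trapezoid rule; here is an illustrative sketch (my own example, not the commit's code) of building the trapezoid quadrature weights in a single pass, so they can be applied directly without intermediate transposed arrays:

```julia
# Trapezoid-rule weights over a (possibly nonuniform) time grid `t`:
# sum(w .* f.(t)) ≈ ∫ f(t) dt, exact for piecewise-linear f.
function trapezoid_weights(t::AbstractVector)
    n = length(t)
    w = zeros(eltype(t), n)
    @inbounds for i in 1:n-1
        h = (t[i+1] - t[i]) / 2   # half the width of interval i
        w[i]   += h
        w[i+1] += h
    end
    return w
end
```

Once computed, the weights can be broadcast against the data columns in place, which is one way to avoid the repeated `transpose` calls the issue profiles.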
On Wed, May 16, 2018 at 12:58 PM, Travis Askham wrote:

No worries, glad someone's using it!

I think it's worth looking for more efficiencies in that code, especially re: larger matrices. One big thing would be to replace the SVD call in it with a randomized SVD call, a la Halko-Martinsson-Tropp.
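The randomized SVD suggested above can be sketched in a few lines of pure Julia. This is a generic Halko-Martinsson-Tropp-style prototype, not RobustDMD.jl's implementation; the function name and the oversampling/power-iteration parameters `p` and `q` are my own choices.

```julia
using LinearAlgebra

# Randomized truncated SVD à la Halko–Martinsson–Tropp.
# k: target rank; p: oversampling; q: power iterations.
function rsvd(A::AbstractMatrix, k::Integer; p::Integer=10, q::Integer=2)
    m, n = size(A)
    l = min(n, k + p)
    Ω = randn(n, l)               # random test matrix
    Y = A * Ω                     # sample the range of A
    for _ in 1:q                  # power iterations sharpen the spectrum
        Y = A * (A' * Y)
    end
    Q = Matrix(qr(Y).Q)           # orthonormal basis for the sampled range
    B = Q' * A                    # small l × n matrix
    F = svd(B)                    # cheap dense SVD of the small matrix
    return (Q * F.U)[:, 1:k], F.S[1:k], F.V[:, 1:k]
end
```

The only dense SVD is of the small `l × n` matrix `B`, so for tall data like the 15000 × 2500 case in this issue the cost is dominated by a handful of matrix-matrix products rather than a full factorization.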