Classifier does not scale with system size #2
Hi @markus1978, Yes, the classification is definitely above linear in scaling. Even the simplest form of classification, one that just returns the dimensionality, requires calculating the full distance matrix, which scales as O(n^2) with system size n. The code becomes especially slow when doing full classification of surfaces and 2D systems. Some of that time is controllable by tweaking the default settings to favor speed over accuracy, e.g. by lowering `max_cell_size` to ~9 Å and using `pos_tol=[0.5]`. The rest of that time could be cut with a more optimized implementation, switching to C++ in a couple of critical places. If you really need to classify a lot of systems with around 1k atoms, I could do some profiling to identify the bottlenecks and improve on them.
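As a back-of-the-envelope illustration of the O(n^2) point above (plain Python, not MatID code): a full distance matrix has n(n-1)/2 unique entries, so doubling the system size roughly quadruples even the cheapest classification step.

```python
# Not MatID code: a minimal demonstration that computing all unique
# pairwise distances between n points requires n*(n-1)/2 evaluations.
import itertools
import math
import random

def pairwise_distances(positions):
    """All unique pairwise distances between 3D points."""
    return [math.dist(p, q) for p, q in itertools.combinations(positions, 2)]

random.seed(0)
for n in (100, 200, 400):
    points = [(random.random(), random.random(), random.random()) for _ in range(n)]
    d = pairwise_distances(points)
    # len(d) == n * (n - 1) // 2: doubling n roughly quadruples the work.
    print(n, len(d))
```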
If you only require the dimensionality from the classifier (0D, 1D, 2D, 3D), you can check out [this tutorial](https://singroup.github.io/matid/tutorials/dimensionality.html). Doing just this will be a lot faster than the full classification when surfaces and 2D materials are involved.
@lauri.himanen@aalto.fi thank you very much. I'll change our implementation to follow your suggestion in the tutorial, and see what happens when we run it over larger systems again.
Ok, great! Please keep me informed about the performance (reopen the issue if needed). The simple dimensionality detection should definitely not take hours for ~1k atoms.

In general, the execution time of the more advanced classification performed by `matid.Classifier.classify` is definitely above linear in the system size: with 1k+ atoms in the system we need 1h+ on a decent CPU. I'll try to look into it myself, but maybe you already have an idea or know about this limitation? Or is there a way to tune the behaviour? The `Classifier` class seems to have a plethora of thresholds and similar parameters.