DBSCAN seems not to use multiple processors (n_jobs argument ignored) #8003
Steps/Code to Reproduce
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

centers = [[1, 1], [-1, -1], [1, -1]]
# The line defining X was missing from the report; this data-generation step
# is reconstructed from the standard scikit-learn DBSCAN example.
X, _ = make_blobs(n_samples=750, centers=centers, cluster_std=0.4, random_state=0)
X = StandardScaler().fit_transform(X)
db = DBSCAN(eps=0.3, min_samples=10, n_jobs=-1).fit(X)
The result is correct, but the work should be split across processors and the elapsed time should be significantly lower. Instead, DBSCAN appears to run on only one processor.
You can set algorithm="brute" to use multiple cores, but that will probably make it slower. The neighbors module decides it wants to use a tree, which we haven't parallelized yet.
How many cores do you have? And can you report times for the default setting and for algorithm="brute"?
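A minimal sketch of the timing comparison being asked for here. The dataset size and parameters are illustrative assumptions, not values from the thread:

```python
from time import perf_counter

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Hypothetical benchmark data; 5000 points is an arbitrary illustrative size.
X, _ = make_blobs(n_samples=5000, centers=[[1, 1], [-1, -1], [1, -1]],
                  cluster_std=0.4, random_state=0)

timings = {}
for algorithm in ("auto", "brute"):
    start = perf_counter()
    db = DBSCAN(eps=0.3, min_samples=10, algorithm=algorithm, n_jobs=-1).fit(X)
    timings[algorithm] = perf_counter() - start

print(timings)
```

With `algorithm="brute"` the pairwise-distance computation can be parallelized, but as noted above it may still lose to the single-threaded tree on low-dimensional data.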
In #4009, we failed to find an implementation in which parallelism in radius_neighbors for the spatial trees was effectively faster. Perhaps this needs further experimentation.
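For further experimentation, the neighbor queries underlying DBSCAN can be timed directly through NearestNeighbors, comparing the tree backend against brute force. The dataset and radius here are illustrative assumptions, not the benchmark from #4009:

```python
from time import perf_counter

from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

# Illustrative data, standing in for whatever #4009 benchmarked.
X, _ = make_blobs(n_samples=2000, centers=3, random_state=0)

times = {}
for algorithm in ("kd_tree", "brute"):
    nn = NearestNeighbors(radius=0.5, algorithm=algorithm, n_jobs=-1).fit(X)
    start = perf_counter()
    # This is the query DBSCAN performs internally for every point.
    neigh_ind = nn.radius_neighbors(X, return_distance=False)
    times[algorithm] = perf_counter() - start

print(times)
```

Isolating `radius_neighbors` like this separates the neighbor-search cost (where the parallelism question lives) from the rest of the DBSCAN fit.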