Facing exception while processing 1 million rows #5
Comments
If n_jobs > 1, then PyImpetus internally makes that many copies of the data. You can try setting n_jobs to 1 to stop PyImpetus from making multiple copies. This will make the algorithm slower, but you shouldn't run into the error above.
Thanks for the quick response.
Please note that the "n_jobs > 1" case also includes "n_jobs = -1".
Noted. I have tried n_jobs=70 (the number of cores) and n_jobs=-1 (to use all cores); both cases threw a similar exception.
Yes; any value other than 1 for n_jobs results in internal copies of the data, which will certainly lead to a memory error for such a large dataset.
Thanks for the inputs.
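The memory blow-up described above can be sketched with a back-of-the-envelope estimate. The dataset shape, core count, and the one-copy-per-worker model below are illustrative assumptions for this thread's scenario, not PyImpetus internals:

```python
def estimated_peak_gb(n_rows, n_cols, n_jobs, total_cores=70, bytes_per_value=8):
    """Rough peak-memory estimate when each parallel worker holds its own
    copy of the data (an illustrative model, not PyImpetus internals)."""
    # n_jobs=-1 conventionally means "use every core", so for memory
    # purposes it behaves like n_jobs=total_cores.
    workers = total_cores if n_jobs == -1 else n_jobs
    return n_rows * n_cols * bytes_per_value * workers / 1e9

# Hypothetical 1-million-row, 500-column float64 dataset (~4 GB per copy):
single = estimated_peak_gb(1_000_000, 500, n_jobs=1)     # 4.0 GB, one copy
parallel = estimated_peak_gb(1_000_000, 500, n_jobs=70)  # 280.0 GB, 70 copies
```

Under this model, n_jobs=70 (or n_jobs=-1 on a 70-core machine) multiplies the footprint seventyfold, which is consistent with the memory error reported above, while n_jobs=1 keeps a single copy in memory.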
Refer to the exception below.
@atif-hassan