
Memory limit in prediction? #265

Open · Song-Yuqi opened this issue May 23, 2023 · 2 comments

@Song-Yuqi

Hello, I wonder whether there is a parameter to set the maximum memory for predict when using a Jupyter notebook like this:

x1_test = X_test.drop('Pixel', inplace=False, axis=1)
predictions1 = clf_svm1.predict(x1_test)

When I run the prediction above, I get this error message:
Canceled future for execute_request message before replies were done
The Kernel crashed while executing code in the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.

The clf_svm1 model was trained as follows:
from thundersvm import SVC
import joblib
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

x1 = X_train.drop('Pixel', inplace=False, axis=1)
y1 = y_train.drop('Pixel', inplace=False, axis=1).values.ravel()

clf_svm1 = SVC(kernel='linear', random_state=0, probability=True,
               n_jobs=-1, gpu_id=0, max_mem_size=40000)
clf_svm1.fit(x1, y1)

joblib.dump(clf_svm1, dirs + '/SVM1.pkl')  # save the model

The training succeeded and the model was saved, but the prediction failed.
I saw someone use -m when predicting from the command line in a terminal, but I couldn't find a parameter that can be set in the predict function when using Jupyter. How can I fix this? Or is this error caused by something other than memory?

@zeyiwen (Collaborator) commented May 23, 2023

You can find the parameters on this page. Setting max_mem_size should work.
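
For reference, a minimal sketch of that suggestion, reusing the variable names from the thread. max_mem_size is an argument of the SVC constructor (per the thundersvm parameter docs it is given in MB), and since it is set on the estimator itself, the same cap appears to be in effect when the fitted model later calls predict:

from thundersvm import SVC

# Sketch only: same setup as above, but with a smaller memory cap.
# The cap is attached to the estimator, so it applies to both fit and
# the later predict call (10000 worked in this thread where 40000 did not).
clf_svm1 = SVC(kernel='linear', random_state=0, probability=True,
               n_jobs=-1, gpu_id=0, max_mem_size=10000)
clf_svm1.fit(x1, y1)
predictions1 = clf_svm1.predict(x1_test)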

@Song-Yuqi (Author)

> You can find the parameters on this page. Setting max_mem_size should work.

Thank you! I just changed max_mem_size from 40000 to 10000. Training still took almost the same time as before, and the prediction now works well too. But I'm still a little surprised that the same parameter setting can work for training yet fail for prediction.
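
If lowering max_mem_size alone is not enough for a very large test set, one further option (not from this thread, just a generic sketch under the same variable names) is to predict in batches so the whole test set never has to be processed at once:

import numpy as np

# Hypothetical workaround: run predict on slices of x1_test and stitch
# the results together; batch_size is arbitrary and should be tuned to
# the available memory.
batch_size = 100_000
parts = []
for start in range(0, len(x1_test), batch_size):
    parts.append(clf_svm1.predict(x1_test.iloc[start:start + batch_size]))
predictions1 = np.concatenate(parts)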
