LightGBMError: GPU Tree Learner was not enabled in this build. #2222

Closed

zhuqunxi opened this issue Jun 6, 2019 · 7 comments

Comments


zhuqunxi commented Jun 6, 2019


How to fix it? Any document for this problem?


zhuqunxi commented Jun 6, 2019


I am using: Windows 10, Visual Studio 2015, GTX 1060.

I followed your link
https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html#build-gpu-version
and downloaded CMake and the Boost binaries, version 1.63.0 (msvc-14.0-64.exe). Since I am also using TensorFlow, Keras, and PyTorch, CUDA and cuDNN should already be set up correctly.

But it still doesn't work. What's going on?

StrikerRUS (Collaborator) commented:

Something is wrong with the CMake + VS toolchain in your environment. Try adding -G "Visual Studio 14 2015" to your cmake command, or reinstall VS.

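Once the build succeeds, a quick sanity check (just a sketch, assuming the freshly built GPU-enabled package is installed) is to train on a small synthetic dataset with device='gpu'; a CPU-only build raises the error from the title of this issue:

```python
import numpy as np
import lightgbm as lgb

# Tiny synthetic binary-classification dataset
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

params = {"objective": "binary", "device": "gpu", "verbose": -1}

# A CPU-only build fails here with
# "LightGBMError: GPU Tree Learner was not enabled in this build."
lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=5)
print("GPU-enabled build works")
```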


zhuqunxi commented Jun 7, 2019


```python
import xgboost as xgb
import lightgbm as lgbm

# `device` is a string flag ('gpu' or 'cpu') set earlier in the script
if device == 'gpu':
    classifiers = [xgb.XGBClassifier(random_state=42, tree_method='gpu_hist'),
                   lgbm.LGBMClassifier(seed=42, device='gpu')]
else:
    classifiers = [xgb.XGBClassifier(random_state=42, tree_method='hist'),
                   lgbm.LGBMClassifier(seed=42, device='cpu')]
```

First of all, many thanks for your help; I now have the GPU version built. But something strange happens in LGBMClassifier.fit(X, y) (X shape = (500, 75), y shape = (500,)): with the GPU device it is very slow and sometimes does not even finish, and the NVIDIA GPU shows no activity at all. XGBClassifier, on the other hand, runs well (its GPU version is much faster than its CPU one).
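For context, a minimal sketch of the kind of comparison being described (synthetic data of the same shape; the timings themselves will vary by machine, and on data this small GPU setup overhead can easily dominate):

```python
import time
import numpy as np
from lightgbm import LGBMClassifier

# Same shapes as above: X (500, 75), y (500,)
X = np.random.rand(500, 75)
y = np.random.randint(0, 2, size=500)

for device in ("cpu", "gpu"):
    clf = LGBMClassifier(random_state=42, device=device)
    start = time.time()
    clf.fit(X, y)
    print(f"{device}: {time.time() - start:.2f} s")
```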


zhuqunxi commented Jun 7, 2019


Sorry to bother you again. I noticed that I had made a mistake in setting the parameters (gpu_platform_id = 1, gpu_device_id = 0), because my computer has two GPUs: 1) an Intel GPU and 2) an NVIDIA GPU. But even so, the result of my test is really strange: the LightGBM GPU version (16.49 s) takes more time than the CPU version (10.66 s). XGBoost, by comparison, works as expected: its GPU version (5.10 s) is much faster than its CPU version (15.99 s).

StrikerRUS (Collaborator) commented:

Glad that you've managed to utilize your NVIDIA GPU!
Yeah, the gpu_platform_id and gpu_device_id params are very important for the GPU version, because bad values can mimic real GPU training (while actually running on the wrong device) or simply crash the whole application.

Regarding GPU-version performance, please consider starting with issue #768 and following some of the links in it. Then you may want to get familiar with some benchmarks: Laurae2/ml-perf#8, Laurae2/ml-perf#6, szilard/GBM-perf#11.
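As a rough illustration of the platform/device selection (a sketch only; the correct IDs are machine-specific, and a tool such as clinfo can be used to list how OpenCL enumerates your platforms and devices):

```python
import numpy as np
import lightgbm as lgb

X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

params = {
    "objective": "binary",
    "device": "gpu",
    # OpenCL indices: on a machine with both an Intel and an NVIDIA OpenCL
    # platform, the discrete card is not necessarily the platform/device you
    # expect -- check the enumeration on your own machine instead of guessing.
    "gpu_platform_id": 0,
    "gpu_device_id": 0,
}

lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)
```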


zhuqunxi commented Jun 7, 2019

Thanks again for your great help! I'll read this information carefully.

The lock bot locked this issue as resolved and limited the conversation to collaborators on Mar 11, 2020.