GPU errors during multiple Optuna trials #8198
Could you please share the versions of xgboost and Optuna you are using?
Also, could you please share the error trace? I'm running the notebook right now but haven't seen the error yet.
This is what I'm getting from my code on Paperspace Gradient:

```
[W 2022-08-23 17:53:05,034] Trial 21 failed because of the following error: XGBoostError('[17:53:05] /opt/conda/envs/rapids/conda-bld/work/src/tree/updater_gpu_hist.cu:712: Exception in gpu_hist: [17:53:05] /opt/conda/envs/rapids/conda-bld/work/src/tree/param.h:219: Check failed: n_nodes != 0 (0 vs. 0) :
Stack trace:
  [bt] (0) /opt/conda/envs/rapids/lib/libxgboost.so(+0x42265f) [0x7fa4a504965f]
  [bt] (1) /opt/conda/envs/rapids/lib/libxgboost.so(+0x428454) [0x7fa4a504f454]
  [bt] (2) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::tree::GPUHistMakerDevice<xgboost::detail::GradientPairInternal >::GPUHistMakerDevice(int, xgboost::EllpackPageImpl const*, xgboost::common::Span<xgboost::FeatureType const, 18446744073709551615ul>, unsigned int, xgboost::tree::TrainParam, unsigned int, unsigned int, xgboost::BatchParam)+0x818) [0x7fa4a537dda8]
  [bt] (3) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::tree::GPUHistMakerSpecialised<xgboost::detail::GradientPairInternal >::InitDataOnce(xgboost::DMatrix*)+0x264) [0x7fa4a537e474]
  [bt] (4) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::tree::GPUHistMakerSpecialised<xgboost::detail::GradientPairInternal >::Update(xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::DMatrix, std::vector<xgboost::RegTree*, std::allocatorxgboost::RegTree* > const&)+0x232) [0x7fa4a5387d02]
  [bt] (5) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::gbm::GBTree::BoostNewTrees(xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::DMatrix, int, std::vector<std::unique_ptr<xgboost::RegTree, std::default_deletexgboost::RegTree >, std::allocator<std::unique_ptr<xgboost::RegTree, std::default_deletexgboost::RegTree > > >)+0x19c) [0x7fa4a4f67f2c]
  [bt] (6) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::gbm::GBTree::DoBoost(xgboost::DMatrix, xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::PredictionCacheEntry)+0x516) [0x7fa4a4f6c6f6]
  [bt] (7) /opt/conda/envs/rapids/lib/libxgboost.so(+0x35d5ea) [0x7fa4a4f845ea]
  [bt] (8) /opt/conda/envs/rapids/lib/libxgboost.so(XGBoosterUpdateOneIter+0x7c) [0x7fa4a4e31cbc]

Stack trace:
  [bt] (0) /opt/conda/envs/rapids/lib/libxgboost.so(+0x741467) [0x7fa4a5368467]
  [bt] (1) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::tree::GPUHistMakerSpecialised<xgboost::detail::GradientPairInternal >::Update(xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::DMatrix, std::vector<xgboost::RegTree*, std::allocatorxgboost::RegTree* > const&)+0x77d) [0x7fa4a538824d]
  [bt] (2) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::gbm::GBTree::BoostNewTrees(xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::DMatrix, int, std::vector<std::unique_ptr<xgboost::RegTree, std::default_deletexgboost::RegTree >, std::allocator<std::unique_ptr<xgboost::RegTree, std::default_deletexgboost::RegTree > > >)+0x19c) [0x7fa4a4f67f2c]
  [bt] (3) /opt/conda/envs/rapids/lib/libxgboost.so(xgboost::gbm::GBTree::DoBoost(xgboost::DMatrix, xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal >, xgboost::PredictionCacheEntry)+0x516) [0x7fa4a4f6c6f6]
  [bt] (4) /opt/conda/envs/rapids/lib/libxgboost.so(+0x35d5ea) [0x7fa4a4f845ea]
  [bt] (5) /opt/conda/envs/rapids/lib/libxgboost.so(XGBoosterUpdateOneIter+0x7c) [0x7fa4a4e31cbc]
  [bt] (6) /opt/conda/envs/rapids/lib/python3.8/lib-dynload/../../libffi.so.8(+0x6a4a) [0x7fa4f5362a4a]
  [bt] (7) /opt/conda/envs/rapids/lib/python3.8/lib-dynload/../../libffi.so.8(+0x5fea) [0x7fa4f5361fea]
  [bt] (8) /opt/conda/envs/rapids/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so(_ctypes_callproc+0x9d2) [0x7fa4f537bdb2]

')
```
Ah, could you please limit max_depth to 29? With 1.6.2, it can be at most 30. Fixed in #8098.
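As a workaround, the limit can be enforced in the search space itself. A minimal sketch (the helper name and range are hypothetical, not from the linked notebook), assuming the xgboost 1.6.2 `gpu_hist` depth ceiling described above:

```python
# Cap the tuned max_depth so xgboost 1.6.2's gpu_hist updater does not
# hit the "Check failed: n_nodes != 0" crash seen in the trace above.
# With 1.6.2 the effective ceiling is 30, so clamping to <= 29 is safe.
MAX_SAFE_DEPTH = 29

def safe_max_depth(suggested: int) -> int:
    """Clamp an Optuna-suggested depth to the gpu_hist-safe range."""
    return min(suggested, MAX_SAFE_DEPTH)

# Inside an Optuna objective this would look roughly like
# (hypothetical parameter range):
#   depth = safe_max_depth(trial.suggest_int("max_depth", 3, 50))
#   params = {"tree_method": "gpu_hist", "max_depth": depth, ...}
```

Simpler still is to pass `suggest_int("max_depth", 3, 29)` directly, so no trial can propose a depth that triggers the crash.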
I keep getting XGBoost GPU errors after a few Optuna trials. I tried my code on Windows, Ubuntu, and Paperspace Gradient, with various GPUs, and got the same results.
To rule out my own code, I tried a public notebook on Kaggle that apparently worked in the past (and is unrelated to my data), and it appears to trigger the same errors after a few trials:
https://www.kaggle.com/code/tunguz/tps-mar-2021-xgb-gpu-le-optuna