Latest version training crashes #10137
sklearn: 1.3.0
Hi, thank you for raising the issue. Could you please provide a reproducible example that I can run on my machine?
I have provided the training function, but the data is too large to upload successfully.
Could you please provide the code in a more complete form so that I can run it with synthesized data?
In addition, could you please share what you mean by crash? Is it running out of memory, or is it running into a segfault?
Uploading xgb.zip…
Feel free to close the issue if this is not related to XGBoost itself.
By restart, do you mean the login session restarted, or the whole OS restarted?
The entire operating system restarts.
Then that's beyond me. The OS has a bug, likely in the kernel; you may want to upgrade your OS. I don't think this is related to XGBoost.
Feel free to reopen it if you need further information.
My environment is Python 3.9, set up through Conda. The default XGBoost installation is 1.7.0, and training with that version works fine. But when I upgraded XGBoost to 2.0.3 and trained with the same code and data, I encountered a crash, and it reproduces 100% of the time. My hardware is a Xeon E5-2697A v4 with 1 TB of DDR4-2400T memory and an SSD, training in CPU mode. The nthread parameter is set to -1, with a dataset size of 89M and 195 features for binary classification.
My code function is as follows: