@replacementAI Thank you for your feedback. Is there a way to tune LightGBM's parameters with cross-validation for ranking models? I tried optuna.integration.lightgbm.LightGBMTuner, but it does not work for ranking scenarios either.
Description
For the lambdarank objective, scikit-learn's GroupKFold does not work. Is there a way to make it work? Below is a simple example.
Reproducible example
Running this produces the following error:
Environment info
scikit-learn: 1.1.3
LightGBM: 3.2.1
Additional Comments
The code I pasted is inspired by the solution given in #1137 which refers to https://github.com/Microsoft/LightGBM/blob/4df7b21dcf2ca173a812f9667e30a21ef827104e/python-package/lightgbm/engine.py#L267-L274. However, this does not work in our case.
Any help would be appreciated.