We're using Optuna to search for the best hyperparameters for a SetFit model. We've defined a hyperparameter search space and set n_trials=20 for the optimization. However, we've noticed that the F1 score stays exactly the same across all trials, even though each trial tests a different hyperparameter combination.
This makes it difficult to identify which hyperparameter combinations actually contribute to better performance. We’ve double-checked that the objective function is returning the F1 score and that the dataset and evaluation logic are properly defined.
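For context, here is a simplified sketch of the kind of setup we have, following the pattern from the SetFit hyperparameter search docs. The model name, dataset, and search ranges below are placeholders rather than our exact values:

```python
# Minimal sketch (setfit >= 1.0 style); model name, dataset, and search
# ranges are placeholders, not our exact configuration.
from datasets import load_dataset
from setfit import SetFitModel, Trainer, sample_dataset

dataset = load_dataset("sst2")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_dataset = dataset["validation"]


def model_init(params):
    # `params` holds the head-related values suggested by the current trial;
    # if they are ignored here, every trial trains an identical head.
    params = params or {}
    head_params = {"max_iter": params.get("max_iter", 100)}
    return SetFitModel.from_pretrained(
        "sentence-transformers/paraphrase-mpnet-base-v2",  # placeholder body
        head_params=head_params,
    )


def hp_space(trial):
    # Training-related keys are applied to the training arguments;
    # the remaining keys are forwarded to model_init via `params`.
    return {
        "body_learning_rate": trial.suggest_float("body_learning_rate", 1e-6, 1e-3, log=True),
        "num_epochs": trial.suggest_int("num_epochs", 1, 3),
        "batch_size": trial.suggest_categorical("batch_size", [16, 32, 64]),
        "max_iter": trial.suggest_int("max_iter", 50, 300),
    }


trainer = Trainer(
    model_init=model_init,        # a model_init, not a fixed `model`
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    metric="f1",                  # objective value reported to Optuna
    column_mapping={"sentence": "text", "label": "label"},
)

best_run = trainer.hyperparameter_search(
    direction="maximize",
    hp_space=hp_space,
    n_trials=20,
)
print(best_run)
```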
Has anyone else faced a similar issue? Is there something specific about how SetFit interacts with Optuna that we might be missing? Any advice or troubleshooting suggestions would be appreciated.