Consistent F1 Score Across All Trials with Optuna Hyperparameter Tuning #606

@amit-plutoflume

Description

Hi,

We're using Optuna to search for the best hyperparameters for a SetFit model. We’ve defined a hyperparameter search space and set n_trials=20 for optimization. However, we've noticed that across all trials, the F1 score remains exactly the same, even though different hyperparameters are being tested in each trial.

This makes it difficult to identify which hyperparameter combinations actually contribute to better performance. We’ve double-checked that the objective function is returning the F1 score and that the dataset and evaluation logic are properly defined.
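For context, here is a minimal, dependency-free sketch of the failure mode we suspect: if the sampled hyperparameters never actually reach the model (for example, a `model_init` or objective that ignores the trial's params), every trial trains an identical model and reports an identical F1. The `make_model` and `evaluate_f1` functions below are hypothetical stand-ins for SetFit training and evaluation, not real SetFit APIs.

```python
# Hypothetical stand-ins: the toy F1 metric depends on the learning rate,
# so a correctly wired search must produce different scores per trial.
def make_model(learning_rate=1e-3):
    return {"learning_rate": learning_rate}

def evaluate_f1(model):
    # Toy metric that peaks at learning_rate = 1e-4.
    lr = model["learning_rate"]
    return round(1.0 / (1.0 + abs(lr - 1e-4) * 1e3), 4)

def buggy_objective(params):
    model = make_model()          # BUG: sampled params are ignored,
    return evaluate_f1(model)     # so every trial reports the same F1

def fixed_objective(params):
    model = make_model(**params)  # params actually reach the model
    return evaluate_f1(model)

trials = [{"learning_rate": 1e-5}, {"learning_rate": 1e-4}]
print([buggy_objective(p) for p in trials])  # identical scores
print([fixed_objective(p) for p in trials])  # scores differ per trial
```

If this is the cause, the equivalent check in a real SetFit + Optuna setup would be verifying that the function passed as `model_init` consumes the `params` dict produced by the search space, rather than constructing the model with fixed arguments.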

Has anyone else faced a similar issue? Is there something specific about how SetFit interacts with Optuna that we might be missing? Any advice or troubleshooting suggestions would be appreciated.

Thanks in advance!
