In the 'Learning to Prompt for Continual Learning' paper, I understand 'FT-seq-Frozen' in Table 1 to be naive prompt tuning at the input token features.
To reproduce the FT-seq-Frozen setting on CIFAR-100, I set the prompt pool_size to 1.
My result is Acc@1 81.49 with Forgetting 6.3667.
Is there anything I missed?
How did you set the hyperparameters for FT-seq-Frozen?
Specifically, did you set the argument 'train_mask = False' for FT-seq-Frozen?
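For reference, here is a minimal sketch of what I mean by "naive prompt tuning with pool_size = 1": a single shared learnable prompt prepended to the token embeddings of a frozen encoder, with only the prompt and the classification head trainable. The class name, dimensions, and the stand-in encoder are my own assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class NaivePromptTuning(nn.Module):
    """Hypothetical sketch of FT-seq-Frozen as naive prompt tuning:
    one shared prompt (pool_size = 1, so no query-key prompt selection)
    is prepended to the embeddings of a frozen backbone; only the
    prompt and the head receive gradients."""
    def __init__(self, embed_dim=768, prompt_length=5, num_classes=100):
        super().__init__()
        # Single shared prompt, used for every task.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings, frozen_encoder):
        # patch_embeddings: (B, N, D) from the frozen patch-embed layer.
        b = patch_embeddings.shape[0]
        prompts = self.prompt.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, patch_embeddings], dim=1)  # (B, L+N, D)
        x = frozen_encoder(x)
        # Mean-pool the prompt positions as the classification feature.
        feat = x[:, : self.prompt.shape[0]].mean(dim=1)
        return self.head(feat)

# Frozen stand-in for the pretrained ViT encoder.
encoder = nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True)
for p in encoder.parameters():
    p.requires_grad = False

model = NaivePromptTuning()
logits = model(torch.randn(2, 196, 768), encoder)
print(logits.shape)  # → torch.Size([2, 100])
```

Note that this sketch applies no logit masking: with train_mask = False, the logits of all classes (not just the current task's) would receive gradient at every step, which is exactly the behavior I am asking about.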