Hi! Thanks for your interesting work!
I have recently been trying to reproduce the zero-shot experiments in the paper, but like #19 (comment), I get a much lower mIoU than yours.
Here is my script:
train_lseg_zs.py:

```python
from modules.lseg_module_zs import LSegModuleZS
from utils import do_training, get_default_argument_parser

if __name__ == "__main__":
    parser = LSegModuleZS.add_model_specific_args(get_default_argument_parser())
    args = parser.parse_args()
    do_training(args, LSegModuleZS)
```
When I set base_lr=0.09, I get a higher mIoU. Could you please provide the full set of hyperparameters and the number of training epochs for each dataset? Thanks a lot.
We provide the training script for the ADE20K dataset, and you can easily adapt it for the zero-shot experiments. The FSS-1000 results should be very easy to reproduce. For the COCO and PASCAL datasets, which have very few classes, you need early stopping (you should get the optimal results from the models at epochs 0-3) and a hyperparameter sweep to find the best learning rate; the optimal lr should be smaller than the FSS-1000 lr.
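The advice above (early stopping plus a learning-rate sweep) can be sketched as a small driver script. Note that the flag names (`--base_lr`, `--max_epochs`) and the candidate lr values are assumptions for illustration only, not the repository's verified CLI; check `get_default_argument_parser` for the actual argument names.

```python
# Hypothetical sweep for the COCO/PASCAL zero-shot runs described above.
# Flag names and lr values are assumptions, not the repo's verified CLI.
candidate_lrs = [0.001, 0.002, 0.004]  # try values below the FSS-1000 lr
max_epochs = 4  # early stop: optimal checkpoints come from epochs 0-3

# Build one training command per candidate learning rate.
commands = [
    f"python train_lseg_zs.py --base_lr {lr} --max_epochs {max_epochs}"
    for lr in candidate_lrs
]
for cmd in commands:
    print(cmd)
```

You would then evaluate the checkpoint from each epoch of each run on the validation split and keep the one with the best mIoU.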
command:
Default arguments: base_lr=0.004, weight_decay=1e-4, momentum=0.9
I wonder where the problem is. Could you please share your training scripts for the zero-shot experiment?