
My reproduced results are slightly lower #27

Closed
YanFangCS opened this issue Dec 17, 2021 · 3 comments

Comments

@YanFangCS

Hello, I used the config file you provided to reproduce your results on the Pascal VOC dataset, but I got slightly lower numbers across multiple dataset split settings. My reproduced results are as follows.
[Screenshot of reproduced results, 2021-12-17 10:03 AM]
The config file used in my experiment is as follows.

{
    "name": "CAC",
    "experim_name": "cac_datalist0_1of8_3",
    "dataset": "voc",
    "data_dir": ###,
    "datalist": 3,
    "n_gpu": 4,
    "n_labeled_examples": 10582,
    "diff_lrs": true,
    "ramp_up": 0.1,
    "unsupervised_w": 30,
    "ignore_index": 255,
    "lr_scheduler": "Poly",
    "use_weak_lables": false,
    "weakly_loss_w": 0.4,
    "pretrained": true,
    "random_seed": 42,

    "model":{
        "supervised": false,
        "semi": true,
        "supervised_w": 1,

        "sup_loss": "CE",

        "layers": 101,
        "downsample": true,
        "proj_final_dim": 128,
        "out_dim": 256,
        "backbone": "deeplab_v3+",
        "pos_thresh_value": 0.75,
        "weight_unsup": 0.1,
        "epoch_start_unsup": 5,
        "selected_num": 3200,
        "temp": 0.1,
        "step_save": 2,
        "stride": 8
    },


    "optimizer": {
        "type": "SGD",
        "args":{
            "lr": 0.01,
            "weight_decay": 1e-4,
            "momentum": 0.9
        }
    },

    "train_supervised": {
        "batch_size": 8,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_supervised",
        "num_workers": 8
    },

    "train_unsupervised": {
        "batch_size": 8,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_unsupervised",
        "num_workers": 8,
        "iou_bound": [0.1, 1.0],
        "stride": 8
    },

    "val_loader": {
        "batch_size": 4,
        "val": true,
        "split": "val",
        "shuffle": false,
        "num_workers": 4
    },

    "trainer": {
        "epochs": 80,
        "save_dir": "saved/",
        "save_period": 1,
  
        "monitor": "max Mean_IoU",
        "early_stop": 100,
        
        "tensorboardX": true,
        "log_dir": "saved/",
        "log_per_iter": 20,

        "val": true,
        "val_per_epochs": 1
    }
}

Could you give me some advice on how to correctly reproduce your results?
Thanks a lot.

@X-Lai
Collaborator

X-Lai commented Dec 24, 2021

Thanks for your interest in our work. I think the performance gap may come from n_gpu, batch_size, or selected_num. If you want to reproduce the results reported in our paper, I recommend using the given configs without any changes.
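A quick way to act on this advice is to diff your local config against the reference one and list every key you changed. The sketch below is a generic helper, not part of the CAC repo; the file names in the commented usage are hypothetical placeholders.

```python
import json

def diff_configs(ref, mine, prefix=""):
    """Recursively list keys whose values differ between two config dicts."""
    diffs = []
    for key in sorted(set(ref) | set(mine)):
        path = f"{prefix}{key}"
        a, b = ref.get(key), mine.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            # Descend into nested sections such as "model" or "optimizer".
            diffs += diff_configs(a, b, prefix=path + ".")
        elif a != b:
            diffs.append((path, a, b))
    return diffs

# Hypothetical usage: compare the repo's shipped config with a local copy.
# ref = json.load(open("configs/reference_config.json"))
# mine = json.load(open("my_config.json"))
# for path, ref_val, my_val in diff_configs(ref, mine):
#     print(f"{path}: reference={ref_val}, mine={my_val}")
```

Any key this prints (e.g. `n_gpu`, `train_supervised.batch_size`, `model.selected_num`) is a deviation that could explain the gap.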

@X-Lai
Collaborator

X-Lai commented Dec 24, 2021

Besides, note that the results reported in our paper are averaged from three runs on different datalists.
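In other words, a single run on one datalist can legitimately land below (or above) the paper's number, since that number is a mean. A minimal sketch of the comparison, using made-up placeholder mIoU values rather than real results:

```python
# Hypothetical mIoU from three runs on datalists 0, 1, 2 (placeholder values).
miou_per_datalist = [72.1, 71.4, 72.8]

# The paper reports the mean over the three datalists, not any single run.
mean_miou = sum(miou_per_datalist) / len(miou_per_datalist)
print(f"mean mIoU over {len(miou_per_datalist)} runs: {mean_miou:.2f}")
```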

@YanFangCS
Author

Thanks for your suggestions; I will try again.
