
FPENet is difficult to reproduce #3

Open
changwenkai101 opened this issue Jan 1, 2020 · 6 comments

Comments

@changwenkai101

changwenkai101 commented Jan 1, 2020

Hi, this is a good project.
I tried it, and the overall installation and training were very simple and straightforward. I experimented with FPENet, but the gap between my final result and the original one was large. Specifically:
I set the hyperparameters to match the original:

model = FPENet
dataset = cityscapes
input_size = 512, 1024
classes = 19
train_type = train
max_epochs = 400
lr_schedule = poly
The loss used CrossEntropyLoss2d:
That is, on line 133 of train.py, criteria = CrossEntropyLoss2dLabelSmooth(weight=weight, ignore_label=ignore_label) was changed to the CrossEntropyLoss2d function, while all other settings remained unchanged. Training on an RTX 2080 Ti GPU took 9h.
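
For reference, this is roughly what the swap looks like (a sketch: I'm assuming the repo's CrossEntropyLoss2d follows the usual log_softmax + NLLLoss pattern; check utils/losses for the actual signature):

    import torch.nn as nn
    import torch.nn.functional as F

    class CrossEntropyLoss2d(nn.Module):
        """Plain 2D cross-entropy, without label smoothing."""
        def __init__(self, weight=None, ignore_label=255):
            super().__init__()
            self.nll_loss = nn.NLLLoss(weight=weight, ignore_index=ignore_label)

        def forward(self, output, target):
            # output: (N, C, H, W) raw logits; target: (N, H, W) class indices
            return self.nll_loss(F.log_softmax(output, dim=1), target)

    # In train.py (around line 133) the swap is then:
    #   criteria = CrossEntropyLoss2d(weight=weight, ignore_label=ignore_label)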

But in the end, mIoU was only 46% on the val set, while the original reports 60–70% on the test set; I don't think there should be such a big gap between val and test. Checking the output, I noticed that the parameter count printed by train.py was 0.12M, whereas the original model is about 0.4M. At first I thought the model was wrong, but after checking the paper I felt that your implementation was correct. Then I ran torchsummary on the model and it reported 0.44M, so I don't know what went wrong.
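
For reference, here is a direct cross-check of the two counts (a sketch; I'm assuming the model lives at model/FPENet.py as in this repo, and that torchsummary agrees with a plain sum over model.parameters()):

    from model.FPENet import FPENet  # import path assumed from this repo's layout

    model = FPENet(classes=19)
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"total: {total / 1e6:.2f}M, trainable: {trainable / 1e6:.2f}M")
    # If this prints ~0.44M but train.py prints 0.12M, the counting helper
    # used by train.py is the first thing to inspect.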

Maybe FPENet itself is difficult to reproduce? (Although this is common in AI papers.)
Has anyone used this project to reproduce, even roughly, the results of one of the original models? Could you share the parameters and training strategy you used?

@xiaoyufenfei
Owner

I also encountered this problem; the FPENet result is not so good.

@changwenkai101
Author

@xiaoyufenfei Have the other models been tested? Are there any whose results are close to or consistent with those reported in the original articles?

@dcrmg

dcrmg commented Jan 4, 2020

It's better to regenerate "camvid_inform.pkl" or "cityscapes_inform.pkl" from your local copy of the dataset.
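
(A sketch of forcing that, assuming build_dataset_train in builders/dataset_builder.py recomputes the statistics whenever the pkl file is missing, as it appears to in this repo; the cache path is also an assumption, so adjust it to wherever your copy lives:)

    import os

    # Delete the cached statistics so the next training run recomputes
    # class weights and mean/std from the local dataset.
    pkl = "./dataset/inform/cityscapes_inform.pkl"  # path assumed
    if os.path.isfile(pkl):
        os.remove(pkl)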

@xiaoyufenfei
Owner

So, what is your detailed suggestion?

@dcrmg

dcrmg commented Jan 4, 2020

> So, what is your detailed suggestion?

I'm not sure whether the file "cityscapes_inform.pkl" shipped with the repo is correct. It gives:

data['classWeights']: [ 1.4705521 9.505282 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 10.492059 5.131664 ]

whereas regenerating it locally I got:

data['classWeights']: [ 2.5959933 6.7415504 3.5354059 9.8663225 9.690899 9.369352 10.289121 9.953208 4.3097677 9.490387 7.674431 9.396905 10.347791 6.3927646 10.226669 10.241062 10.280587 10.396974 10.055647 ]
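
(For context: assuming the inform script uses the ENet-style weighting w_k = 1 / ln(c + p_k) with c = 1.10, the value 10.492059 repeated above is exactly 1 / ln(1.10). In other words, those classes had frequency p_k ~ 0 when the shipped pkl was generated, which suggests it was built from an incomplete file list.)

    import numpy as np

    def enet_class_weights(pixel_counts, c=1.10):
        # ENet-style weighting: w_k = 1 / ln(c + p_k), where p_k is the
        # normalized pixel frequency of class k over the training set.
        p = np.asarray(pixel_counts, dtype=np.float64)
        p /= p.sum()
        return 1.0 / np.log(c + p)

    print(1.0 / np.log(1.10))  # 10.4920..., the repeated value above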
Also, the code on line 10 of dataset_builder.py:
dataset_list = os.path.join(dataset, '_trainval_list.txt')
should probably be:
dataset_list = dataset + '_trainval_list.txt'
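
The difference matters: os.path.join treats its second argument as a path component and inserts a separator, so the two expressions name different files:

    import os

    dataset = "cityscapes"
    print(os.path.join(dataset, "_trainval_list.txt"))  # cityscapes/_trainval_list.txt (a path)
    print(dataset + "_trainval_list.txt")               # cityscapes_trainval_list.txt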

@xiaoyufenfei
Owner

Is there any difference? You can try it; I'd like to know the result.
