
Validation accuracy is incorrect when using fully convolutional models #24

Open
nshaud opened this issue Nov 16, 2020 · 4 comments

@nshaud
Owner

nshaud commented Nov 16, 2020

No description provided.

@nshaud nshaud added the bug Something isn't working label Nov 16, 2020
@nshaud nshaud added this to the 0.1.0 milestone Nov 16, 2020
@nshaud nshaud self-assigned this Nov 16, 2020
@mengxue-rs

Hi, which one are you referring to? I would like to try it.

@nshaud
Owner Author

nshaud commented Nov 20, 2020

@snowzm you can try the lee model, which is fully convolutional IIRC. Validation accuracy is grossly incorrect in that case.

@mengxue-rs

mengxue-rs commented Nov 26, 2020

@nshaud I think this issue may be related to your choice of normalization method. For more details, see Experiment Reports.pdf

@mengxue-rs

mengxue-rs commented Nov 30, 2020

@nshaud there may be two reasons for this issue.

  1. On line 1225 of models.py, in the val function, `if out.item() in ignored_labels:` should be corrected to `if pred.item() in ignored_labels:`. The original check causes the validation accuracy to be calculated incorrectly (see the sketch after this list);
  2. A wrongly chosen normalization keeps the training loss high; you could try the SNB normalization (see my recent pull request).

There are three rows of experimental pictures below illustrating the statements above (10% samples per class on the Indian Pines data set): the first row uses the original settings, the second row applies fix 1), and the last row applies both 1) and 2).
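
Below is a minimal sketch of a validation loop with the corrected check, assuming a PyTorch setup like the one in models.py; the names net, data_loader, device, and ignored_labels are assumptions for illustration, not the file's exact code:

```python
import torch

def val(net, data_loader, device, ignored_labels):
    """Hypothetical reconstruction of the val loop; only the ignored-label
    check mirrors the fix proposed in point 1) above."""
    accuracy, total = 0.0, 0.0
    net.eval()
    with torch.no_grad():
        for data, target in data_loader:
            data, target = data.to(device), target.to(device)
            output = net(data)
            # For a fully convolutional model, output is (N, C, H, W);
            # argmax over the class dimension gives per-pixel predictions.
            _, output = torch.max(output, dim=1)
            for pred, gt in zip(output.view(-1), target.view(-1)):
                # The fix: test the prediction against ignored_labels
                # (the original line 1225 tested the wrong variable).
                if pred.item() in ignored_labels:
                    continue
                accuracy += float(pred.item() == gt.item())
                total += 1
    return accuracy / max(total, 1)
```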

[Three rows of result figures: original settings (row 1), fix 1) only (row 2), fixes 1) and 2) (row 3).]
