lq, hq data split #2
Aha, there must be some problems/misunderstandings. Firstly, the data split follows the ground-truth labels (the CSV files you mentioned). Moreover, you say you have not found any wrongly classified data. That seems impossible, since the accuracy of MCF-Net on the test set is not 100% (it is around 80%–90%, and the "usable" grade is the most challenging). Make sure you are testing on the EyeQ test set, and then calculate the accuracy (ACC) on that dataset.
Here are some suggestions for this problem:
1. First, calculate the ACC on the EyeQ test dataset and compare the results with the original paper. I suspect there may be a performance gap between your implementation and the official one.
2. Second, check the weights in the network; make sure you have loaded the correct weights from the pre-trained checkpoint.
3. Third, check the pre-processing (especially the normalization); all of it should match the original MCF-Net.
4. Lastly, FIQA only concerns Good/Not Good, and is not related to the "Reject" grade. Use `torch.argmax()` to get the predicted label and count the samples graded as "Good"; call this number X. With Y the total number of samples, FIQA is simply X/Y (see the sketch below).
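A minimal sketch of that calculation (the `(N, 3)` output shape and the EyeQ grade ordering with index 0 as "Good" are my assumptions, not confirmed in this thread):

```python
import torch

# Minimal sketch of the FIQA metric described above, not the authors'
# exact code. Assumes `logits` has shape (N, 3) and that class index 0
# is "Good" (EyeQ ordering: 0 = good, 1 = usable, 2 = reject); adjust
# `good_index` if your label mapping differs.
def fiqa(logits: torch.Tensor, good_index: int = 0) -> float:
    preds = torch.argmax(logits, dim=1)     # predicted grade per image
    x = (preds == good_index).sum().item()  # X: samples graded "Good"
    y = logits.shape[0]                     # Y: total number of samples
    return x / y                            # FIQA = X / Y
```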
Aha, so the quality labels in "EyeQ/data/Label_EyeQ_train.csv" and "EyeQ/data/Label_EyeQ_test.csv" are the ground truth, not the output of MCF-Net! Thanks a million. How did you split your data into train/val/test? Your GitHub README says "Split the dataset into train/val/test according to the EyePACS challenge," but in the Kaggle challenge I only found a train/test separation 😔
We followed the DR classification task dataset split in my teammate's work (https://arxiv.org/pdf/2110.14160.pdf). I think you can simply split off 20% of the training data as the validation set, which also seems fine.
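For example, a hypothetical stratified 20% split might look like the following (the file path and `quality` column name follow the EyeQ label CSVs but are assumptions here):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical sketch: carve a validation set out of the official EyeQ
# training split. Adjust the path and column names to your local setup.
train_df = pd.read_csv("EyeQ/data/Label_EyeQ_train.csv")
train_part, val_part = train_test_split(
    train_df,
    test_size=0.2,                 # hold out 20% as validation
    stratify=train_df["quality"],  # keep the grade distribution balanced
    random_state=42,               # fixed seed for a reproducible split
)
```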
Aha I see |
Hello again :) I have more questions about the data split you made. I read your teammate's work (https://arxiv.org/pdf/2110.14160.pdf), but the total count differs from the EyeQ data: EyePACS has 88,702 images in total, while EyeQ has 23,252 when using only "usable" for low quality. Could you share how you split your data (a file-name list for each of train/val/test)? If you can, I will give you my email! Thanks a lot, as always; you are a huge help to me.
We followed the train/test split provided by the official EyeQ dataset, which can be found at https://github.com/HzFu/EyeQ/tree/master/data. However, as no validation set was provided, we created one by splitting the training set. We have since updated the data split in the repository, which you can now access. :)
The label "1" is "good", while label "0" is "usable". |
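If it helps, a hypothetical remap from EyeQ's three-way grade to this binary label could look like the sketch below (the EyeQ column name and grade encoding are assumptions, not confirmed here):

```python
import pandas as pd

# Hypothetical remap from EyeQ's three-way quality grade
# (0 = good, 1 = usable, 2 = reject) to the binary label described above
# (1 = "good", 0 = "usable"). "Reject" images are assumed to be excluded,
# since FIQA only concerns Good vs. Not Good.
df = pd.read_csv("EyeQ/data/Label_EyeQ_train.csv")
df = df[df["quality"] != 2].copy()             # drop "reject" images
df["label"] = df["quality"].map({0: 1, 1: 0})  # good -> 1, usable -> 0
```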
Oh wow..! 🥺🥺
Hello, I found out that the labels in the CSV file you uploaded are slightly different from the EyeQ version (https://github.com/HzFu/EyeQ). Could you please tell me how you got the labels?
I used the labels from EyeQ V1, but EyeQ has since been updated to V2. You can check this branch https://github.com/HzFu/EyeQ/tree/95c63a743a68b1665d7ecb1e050a2d5b4f0f3408 for more details on V1.
Aha I see thanks! |
I apologize for any confusion. Upon reviewing my workspace, I discovered that the number of "good" images is 16,818, compared to 16,817 in EyeQ, and the number of "usable" images is 6,436, versus 6,435 in EyeQ. It appears there is only a one-image discrepancy between versions 1 and 2. As the CSV file I uploaded is only used for the public split, I will investigate whether there are any issues with these files.
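One way to pin down such a discrepancy is to join the two label files and look for disagreements; a rough sketch (the file names and `image`/`quality` columns are hypothetical):

```python
import pandas as pd

# Hypothetical check for the one-image discrepancy mentioned above:
# compare the V1 labels (used in this repo) against the current V2 labels.
v1 = pd.read_csv("Label_EyeQ_train_v1.csv")
v2 = pd.read_csv("Label_EyeQ_train_v2.csv")

merged = v1.merge(v2, on="image", suffixes=("_v1", "_v2"))
changed = merged[merged["quality_v1"] != merged["quality_v2"]]
print(changed[["image", "quality_v1", "quality_v2"]])
```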
The "bad" label shown in the bash represents the "usable" grades in EyeQ. |
I am sorry, it was my mistake.
Hello, I came again 😁
I was looking at your paper and came up with one question about how you split the LQ and HQ data. As mentioned in your paper and your GitHub, you split the HQ and LQ data by the EyeQ/MCF-Net quality grade: LQ for "usable" and HQ for "good".
But in the table in the paper, the original FIQA is not zero, which means some pictures are graded as "good".
How is this possible?
Did you use the quality levels from "EyeQ/data/Label_EyeQ_train.csv"? I did, and found that some images labeled "usable" are graded "good" or "reject" when I tested them with MCF-Net.