training process #3

Closed

jiejie22997 opened this issue Jul 11, 2022 · 7 comments

@jiejie22997

Hello,
I have a couple of questions about the training process.

  1. I just want to confirm that the provided baseline model is from the first training described in the paper, right?
  2. Can we run start.sh for the second training to reproduce the results in the paper?

Thanks in advance.

jiejie22997 changed the title from "questions" to "training process" on Jul 11, 2022
@haibo-qiu (Owner)

Hi there,

  1. The provided model_p4_baseline_9938_8205_3610.pth.tar is the Baseline model described in Sec. 4.2.
  2. Yes, running start.sh with model_p4_baseline_9938_8205_3610.pth.tar as the pretrained model (the default in that function) should reproduce our results.

@jiejie22997 (Author)

Thanks for the response!
Also, I'm wondering how I can access LFW-SM and O-LFW. I tried adding masks to LFW, but not all images could be masked successfully. As for MFR2 and RMF2 mentioned in the paper, do they refer to the same dataset?

@haibo-qiu (Owner)

Hi @jiejie22997

For LFW-SM, if the original paper does not provide a download link, you can use this repo to apply the masks. It may indeed fail on some images, but as far as I remember it works on most face images. Unfortunately, I can no longer find my copy of the dataset after moving to a new place :-(
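In case it helps, here is a minimal sketch of batch-masking a folder of images while tolerating failures. Note that apply_mask is a hypothetical placeholder, not the masking repo's actual API; substitute whatever call the tool you use actually exposes.

# Hypothetical sketch: mask every image in a folder, skipping the ones
# that fail. `apply_mask` is a placeholder for the masking tool's real
# entry point; its name and signature are assumptions.
import os

def mask_dataset(src_dir, dst_dir, apply_mask):
    os.makedirs(dst_dir, exist_ok=True)
    ok, failed = 0, []
    for name in sorted(os.listdir(src_dir)):
        try:
            masked = apply_mask(os.path.join(src_dir, name))  # assumed to return a PIL image
            masked.save(os.path.join(dst_dir, name))
            ok += 1
        except Exception:
            failed.append(name)  # some faces simply cannot be masked, as noted above
    print(f"masked {ok} images; {len(failed)} failed")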

As for O-LFW, I emailed the first author of BFL to get the dataset.

Sorry for the confusing typo: MFR2 and RMF2 refer to the same dataset, MFR2. I will fix this typo in the next version of our paper.

@jiejie22997 (Author)

Thanks for the reply!
I'm confused about the AR dataset evaluation. In the image below, why do we need fc2? Is it a typo? Thanks.
[screenshot of the evaluation code, showing fc = fc1 + fc2]

@haibo-qiu (Owner)

Hi @jiejie22997,

This is because the odd-numbered images are the flipped versions of the even-numbered ones; you can refer to the code below.

if index % 2 == 1:  # odd indices hold the horizontally flipped copies
    img = transforms.functional.hflip(img)

So fc = fc1 + fc2 can be regarded as an augmented representation that includes both original and flipped image features.
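For concreteness, here is a minimal sketch of this flip-augmented extraction, assuming a generic PyTorch embedding model. The model and function names are placeholders, not our actual evaluation code; only the fc = fc1 + fc2 idea comes from the repo.

# Minimal sketch of flip-augmented feature extraction (fc = fc1 + fc2).
# `model` is any embedding network taking a (1, C, H, W) tensor; it is an
# illustrative placeholder.
import torch
from torchvision import transforms

@torch.no_grad()
def flip_augmented_feature(model, img):
    """Sum the features of an image tensor (C, H, W) and its horizontal flip."""
    fc1 = model(img.unsqueeze(0))                               # original image
    fc2 = model(transforms.functional.hflip(img).unsqueeze(0))  # flipped image
    return fc1 + fc2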

@jiejie22997 (Author)

I see, thank you!

@haibo-qiu (Owner)

Feel free to reopen this issue if you have any further questions.
