Reverse Results in TESTING #3

Closed
zero-suger opened this issue May 13, 2024 · 5 comments

Comments

@zero-suger

zero-suger commented May 13, 2024

Hi ~

First of all, thank you for all your hard work.

Currently, I am trying to TRAIN the FAS code with my custom dataset. As you explained on the page, I extracted faces, made the .pkl files, then made a txt file with my image paths; as in the code, each line of my txt file is '{image_path} 0' for training, and I started TRAINING. During TRAINING the console showed 99% accuracy. But when I TESTED, for real faces it gives very low scores (0.00456, 0.023, etc.), while for spoof faces it gives high scores (0.98, 0.87, 1, etc.).

This is what I am not understanding. Did I label the txt file wrong, or am I missing something? Please help me solve this. Thank you !!!
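For reference, this is roughly how I generate the training label file (a minimal sketch; the directory name and output file name are just placeholders, and every real-face crop gets label 0 as described above):

```python
import os

# Minimal sketch of how I build the training label file: every line is
# "{image_path} 0", i.e. all of my real-face crops are labeled 0.
# "crops/train" and "train_label.txt" are placeholder names.
with open("train_label.txt", "w") as f:
    for name in sorted(os.listdir("crops/train")):
        if name.lower().endswith((".jpg", ".png")):
            f.write(f"{os.path.join('crops/train', name)} 0\n")
```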

@Xianhua-He
Owner

@zero-suger
Thank you very much for your interest in our work.

This code is designed for the competition, with the label convention live=0, fake=1. This may be the opposite of the labels in your dataset, so you need to check and modify your code. Note that during data augmentation, live samples are converted into spoof samples through the SDSC data augmentation.

I hope the above answers are helpful to you. If you have any other questions, you are welcome to raise an issue.
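For example, if your dataset uses the opposite convention (live=1, fake=0), you could flip the labels in your txt file before training (a minimal sketch, assuming the '{image_path} {label}' line format described above; the file names are placeholders, not files from this repo):

```python
# Minimal sketch: flip labels in a "{image_path} {label}" txt file so that
# live=0 and fake=1, matching the convention used by this code.
# "old_label.txt" / "new_label.txt" are placeholder names, not repo files.
with open("old_label.txt") as src, open("new_label.txt", "w") as dst:
    for line in src:
        path, label = line.rstrip("\n").rsplit(" ", 1)
        flipped = "1" if label == "0" else "0"
        dst.write(f"{path} {flipped}\n")
```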

@zero-suger
Author

Thank you for the reply~

I checked and TRAINED the model as you explained, and I have one more question: when I TESTED the model with Screen Attacks (without any artifacts) or new types of attacks like half faces, the model's prediction is too high (~0.91, i.e. real face). I read the paper and understand the model can catch artifacts and some jitter effects because of the two generative models inside. Is that right, or did I miss something?

Thank you :)

@Xianhua-He
Owner

@zero-suger
The method proposed in our paper is only effective at detecting specific "unseen" attack types: it works when we forge "unseen" attack samples through the specified data augmentation methods.

According to our company's past deployment experience on FAS tasks, being data-driven matters more than the model method. For Screen Attacks and other attack types, you should focus on collecting more high-quality data; that will be more effective. You can also forge more attack samples to train the model to detect Digital attacks.

By the way, we implemented the FAS model in actual business scenarios, and the training data reached 20 million. I hope the above answers are helpful to you.

@Xianhua-He
Owner

@zero-suger
The artifacts are what we obtain through data augmentation; they allow the model to more easily capture the small artifacts that digital attacks leave on the face. The method of constructing artifacts is similar to: https://arxiv.org/abs/2204.08376
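As a rough illustration only (this is not the exact SDSC augmentation used in this repo), one simplified way to construct such an artifact is to degrade a local patch of a live face crop and paste it back, then label the result as spoof; the function below and its parameters are hypothetical:

```python
import cv2
import numpy as np

def add_digital_artifact(img: np.ndarray, seed: int = 0) -> np.ndarray:
    """Rough illustration: paste a heavily degraded patch back into a live
    face crop to mimic the local artifacts of a digital attack.
    NOT the exact SDSC augmentation, only a simplified stand-in."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    # pick a random square region covering part of the face
    size = min(h, w) // 2
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    patch = img[y:y + size, x:x + size].copy()
    # degrade the patch: downscale/upscale plus strong JPEG recompression
    small = cv2.resize(patch, (size // 4, size // 4))
    patch = cv2.resize(small, (size, size))
    ok, enc = cv2.imencode(".jpg", patch, [cv2.IMWRITE_JPEG_QUALITY, 20])
    patch = cv2.imdecode(enc, cv2.IMREAD_COLOR)
    out = img.copy()
    out[y:y + size, x:x + size] = patch
    return out  # label this augmented sample as spoof (1)
```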

@zero-suger
Author

Thank you! I got it (:
