how to use the net without training #1
Comments
Hi, thanks for your interest in our work. To get started, you need to download all the checkpoints for StyleGAN2, the StyleGAN2 encoder, and our trained models; please check the README for the Google Drive links. I have added the code for single-image inference. Specifically, run the following commands with models trained on BP4D/DISFA:
Meanwhile, you can also check our other work (https://github.com/ihp-lab/LibreFace), an open-source tool for action unit detection and facial expression analysis. Let me know if you have any other questions.
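As a rough illustration of what single-image inference involves, here is a minimal Python sketch. The checkpoint path, image size, and the commented-out `load_model` call are hypothetical placeholders, not FG-Net's actual API; the authoritative commands and checkpoint names are the ones in the README.

```python
# Hypothetical sketch only: the checkpoint path, input size, and model-loading
# call are placeholders, not FG-Net's actual interface. See the README for the
# real inference commands and checkpoint links.
import torch
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Preprocess a single face image (256x256 input size is an assumption here).
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
image = preprocess(Image.open("example_face.jpg").convert("RGB")).unsqueeze(0).to(device)

# "load_model" stands in for however the repo restores the StyleGAN2 encoder
# and the trained AU detector from the downloaded checkpoints (e.g. BP4D/DISFA).
# model = load_model("checkpoints/bp4d_model.pth").to(device).eval()
# with torch.no_grad():
#     au_predictions = model(image)
# print(au_predictions)
```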
First of all, thank you very much for updating your project for my request; it is certainly helpful :) However, I sadly could not get it to run yet:
I checked the versions, and they should be correct, as mentioned in the README:
I will also ask my supervisor about it, since it might be due to running it as a Slurm job. But you might already know this issue and how to fix it. In any case, thank you again for responding earlier and adding the feature.
Hi SM-Jack, my feeling is that you have a broken PyTorch or CUDA install. Maybe print torch.cuda.is_available() to check whether your environment is fine. Otherwise, I would suggest asking your supervisor to solve the issue, since I am not sure what hardware you are using. Best,
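For example, a quick environment sanity check along the lines suggested above (the printed values are whatever your install reports):

```python
# Environment sanity check: if is_available() prints False, PyTorch cannot
# see the GPU and the CUDA/PyTorch install is likely the problem.
import torch

print(torch.__version__)             # installed PyTorch version
print(torch.version.cuda)            # CUDA version PyTorch was built with (None for CPU-only builds)
print(torch.cuda.is_available())     # True only if a usable GPU driver/runtime is found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible GPU
```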
Hi,
thank you for this contribution.
It might be a trivial question, but how can I apply FG-Net directly to any image/video?
For my bachelor thesis, I want to evaluate a model based on Action Units. However, I have little experience so far with running third-party code. I hope someone can help me with that :)
So far, I can assume
But now, which file can I use to get action units detected? Is there such a file yet, or do I have to build my own (based on eval_interpreter.py)?
I'm looking forward to some advice :)