It seems that the training code for the encoder network that transforms a face image into boundary maps is not included in this project. You have provided the pretrained model; could you also provide the code for producing the ground-truth boundary maps? Thanks!
I think you can get the boundary maps using the pretrained model v8_net_boundary_detection.pth. The code that turns an image into boundary maps is in transformer_model.py (in 'init_Bound') and in face2boundary2face_model.py ('self.netBoundary' does the job).
@rosebbb Sorry to bother you again! Could you provide the script that transforms the ground-truth landmarks into boundary maps, rather than a pretrained test model? Thank you!
I can't find such a script in this repository. The model here takes face images as input and outputs boundary maps, which is what you asked about in your first question. If you want to work from landmarks, maybe take a look at another paper by the author: https://wywu.github.io/projects/LAB/LAB.html
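For what it's worth, ground-truth boundary heatmaps of this kind are commonly rendered by densely interpolating the annotated landmark points along each facial boundary and then applying a Gaussian to each pixel's distance from the interpolated curve. Below is a minimal NumPy sketch of that idea for a single boundary; the function name, `sigma`, and sample count are my own choices and not taken from this repository:

```python
import numpy as np

def landmarks_to_boundary_map(landmarks, size, sigma=1.5, samples=200):
    """Render one boundary heatmap from ordered landmark points.

    landmarks: (N, 2) array of (x, y) points along one facial boundary.
    size: (H, W) of the output map.
    The polyline through the landmarks is linearly interpolated into a
    dense set of points; each output pixel gets exp(-d^2 / (2*sigma^2)),
    where d is its distance to the nearest interpolated boundary point.
    """
    pts = np.asarray(landmarks, dtype=np.float64)
    # Cumulative arc length along the polyline, used as the interpolation axis.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    dense_t = np.linspace(0.0, t[-1], samples)
    dense = np.stack([np.interp(dense_t, t, pts[:, 0]),
                      np.interp(dense_t, t, pts[:, 1])], axis=1)
    # Squared distance from every pixel to the nearest dense boundary point.
    H, W = size
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float64)
    d2 = ((grid - dense.reshape(1, -1, 2)) ** 2).sum(-1).min(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).reshape(H, W)
```

A full implementation would render one such map per boundary (eyebrows, eye contours, nose, lips, face outline, etc.) and stack them into the multi-channel ground truth; the LAB paper linked above describes the exact boundary definitions.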