
Cfg file for running the demo of MobileNetv2 and MobileNetV3 models #47

Open

DanielXu123 opened this issue May 9, 2023 · 6 comments

@DanielXu123
First of all, thanks for sharing these excellent models. I've tested the ResNet101-based models and they work perfectly. I also noticed that you provide the mobilenet-v3-large model (mobilenetv3-large-1cd25616.pth) in ~/PIPNet/lib.

I'd like to test this model, but I can't find the related cfg file; it seems that only ResNet-related cfg files have been provided.
Could you provide the cfg file, and also the 'data_name' that may be needed by the get_meanface function?

@jhb86253817
Owner

Hi, here is the cfg file for mobilenetv3 on 300W:

class Config():
    def __init__(self):
        self.det_head = 'pip'
        self.net_stride = 32
        self.batch_size = 16
        self.init_lr = 0.0001
        self.num_epochs = 120
        self.decay_steps = [60, 100]
        self.input_size = 256
        self.backbone = 'mobilenet_v3'
        self.pretrained = True
        self.criterion_cls = 'l2'
        self.criterion_reg = 'l1'
        self.cls_loss_weight = 10
        self.reg_loss_weight = 1
        self.num_lms = 68
        self.save_interval = self.num_epochs
        self.num_nb = 10
        self.use_gpu = True
        self.gpu_id = 0
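(Side note on how two of these values relate: with input_size 256 and net_stride 32, the detection head operates on an 8×8 feature grid. The arithmetic below is just an illustration of that relationship, not repository code.)

```python
input_size = 256   # matches self.input_size in the cfg above
net_stride = 32    # matches self.net_stride in the cfg above

grid = input_size // net_stride  # side length of the output feature map
print(grid, grid * grid)  # 8 64 -> 64 candidate cells per landmark
```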

For the get_meanface function, all backbones share the same file, and the data name corresponds to the training data, e.g., data_300W, WFLW, etc.
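For illustration, here is a minimal sketch of parsing a per-dataset mean-face definition. The helper name, the flat "x y x y ..." layout, and the toy input line are all assumptions for this example, not the repository's exact API:

```python
import numpy as np

# Hypothetical helper: the repo ships a mean-face file per dataset
# (e.g. data_300W, WFLW); this sketch just turns one whitespace-separated
# line of coordinates into an (num_lms, 2) array.
def parse_meanface(line, num_lms):
    vals = np.array(line.split(), dtype=np.float64)
    assert vals.size == 2 * num_lms, "expected 2 coordinates per landmark"
    return vals.reshape(num_lms, 2)  # one (x, y) row per landmark

# Toy line with 3 landmarks instead of the real 68 (300W) or 98 (WFLW):
mf = parse_meanface("0.1 0.2 0.5 0.5 0.9 0.2", 3)
print(mf.shape)  # (3, 2)
```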

@DanielXu123
Author

I may have been confused.
Have you released a facial landmark model trained on MobileNetV3 or MobileNetV2?
The model mentioned above, "mobilenet-v3-large model (mobilenetv3-large-1cd25616.pth) in ~/PIPNet/lib", is the pretrained backbone, not the final facial landmark model, is that right?

@DanielXu123
Author

Will you provide the trained models for MobileNetV3 and MobileNetV2? It would be greatly appreciated.

@jhb86253817
Owner

I did not provide trained models for the MobileNets previously; the one named "mobilenet-v3-large model (mobilenetv3-large-1cd25616.pth)" is an ImageNet-pretrained model used for initialization.
I have just updated the trained models by adding MobileNetV2 and MobileNetV3 trained on 300W and WFLW. You can check the shared Google Drive linked in the README.

@DanielXu123
Author

DanielXu123 commented May 10, 2023

Thanks!! I will test the model. Thanks for sharing.
For the model you trained on WFLW, the config file should look like this:


class Config():
    def __init__(self):
        self.det_head = 'pip'
        self.net_stride = 32
        self.batch_size = 16
        self.init_lr = 0.0001
        self.num_epochs = 120
        self.decay_steps = [60, 100]
        self.input_size = 256
        self.backbone = 'mobilenet_v3'
        self.pretrained = True
        self.criterion_cls = 'l2'
        self.criterion_reg = 'l1'
        self.cls_loss_weight = 10
        self.reg_loss_weight = 1
        self.num_lms = 98
        self.save_interval = self.num_epochs
        self.num_nb = 10
        self.use_gpu = True
        self.gpu_id = 0

Is that right?
Again, thanks for your awesome work.

@jhb86253817
Owner

Yes, I think so. The only difference is the number of landmarks.
You're welcome :)
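That claim can be checked mechanically; here is a toy comparison, with plain dicts standing in for the two Config classes and only the fields that matter for the comparison:

```python
# Subset of the two cfg files above; only num_lms differs
# (68 points for 300W annotations vs 98 points for WFLW).
cfg_300w = {"backbone": "mobilenet_v3", "input_size": 256, "num_lms": 68}
cfg_wflw = {"backbone": "mobilenet_v3", "input_size": 256, "num_lms": 98}

diff = {k for k in cfg_300w if cfg_300w[k] != cfg_wflw[k]}
print(diff)  # {'num_lms'}
```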
