
About LNet: I want to convert it to ncnn #17

Closed

zys1994 opened this issue May 22, 2019 · 25 comments
@zys1994

zys1994 commented May 22, 2019

I want to convert the LNet model to ncnn. Here is my code:

#include <iostream>
#include <stdio.h>
#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "platform.h"
#include "net.h"
#include "mat.h"

class Noop : public ncnn::Layer {};  // placeholder for layer types ncnn does not implement
DEFINE_LAYER_CREATOR(Noop)

int main() {
    ncnn::Net net;

    net.register_custom_layer("LinearRegressionOutput", Noop_layer_creator);
    net.register_custom_layer("Custom", Noop_layer_creator);
    net.load_param("../model/L106.param");
    net.load_model("../model/L106.bin");


    cv::Mat img = cv::imread("../test_img/AE_.jpg");

    cv::resize(img, img, cv::Size(96, 96));
    cv::Mat img_cp =img.clone();
    cv::cvtColor(img, img, CV_BGR2RGB);  // MXNet also expects RGB
    unsigned char *rgbdata = img.data;
    ncnn::Mat in = ncnn::Mat::from_pixels(rgbdata, ncnn::Mat::PIXEL_RGB, 96, 96);
    //ncnn::Mat in = ncnn::Mat::from_pixels(rgbdata, ncnn::Mat::PIXEL_BGR, 96, 96);
    const float mean_vals[3] = {127.5f, 127.5f, 127.5f};
    const float norm_vals[3] = {0.0078125f, 0.0078125f, 0.0078125f};  // 0.0078125f == 1.0f / 128
    in.substract_mean_normalize(mean_vals, norm_vals);


    ncnn::Mat out;

    ncnn::Extractor ex = net.create_extractor();
    //ex.set_light_mode(true);
    ex.input("data", in);


    double t_start = cv::getTickCount();
    ex.extract("conv6_3", out);
    double t_end = cv::getTickCount();
    float costTime = (t_end - t_start) / cv::getTickFrequency();
    std::cout << "cost Time :" << costTime << std::endl;


    std::cout << "Hello, World!" << std::endl;
    // flatten the 212 outputs; assume the (x, y) pairs are interleaved
    ncnn::Mat out_flatterned = out.reshape(out.w * out.h * out.c);
    for (int i = 0; i < 106; i++)
    {
        cv::Point pt = cv::Point(96 * out_flatterned[i * 2], 96 * out_flatterned[i * 2 + 1]);
        //cv::Point pt = cv::Point(96 * out_flatterned[i], 96 * out_flatterned[i + 106]);  // alternative layout: all x, then all y
        cv::circle(img_cp, pt, 2, cv::Scalar(0, 0, 250), 2);
    }
    cv::imshow("show",img_cp);
    cv::waitKey(0);
  
    return 0;
}

[screenshot: landmarks drawn at the wrong positions]

The result is not correct. Can you share your experience with me?

@aidlearning
Owner

You don't need these:
net.register_custom_layer("LinearRegressionOutput", Noop_layer_creator);
net.register_custom_layer("Custom", Noop_layer_creator);
because the forward pass doesn't need those layers!

@zys1994
Author

zys1994 commented May 22, 2019

If I comment them out, it fails with:

 layer LinearRegressionOutput not exists or registered

That aside: I converted the MXNet model to ncnn, and I wonder why I get the wrong result. Is the landmark index wrong, or are the means and scales wrong?
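For context (my reading, not stated explicitly in this thread): registering a pass-through custom layer, as the code at the top already does, is the usual alternative to trimming the param file. The registration only has to satisfy the param parser; because the extractor is asked for conv6_3, ncnn never actually runs the training-only LinearRegressionOutput layer:

// no-op layer: exists only so load_param accepts the unknown layer type
class Noop : public ncnn::Layer {};
DEFINE_LAYER_CREATOR(Noop)

// register under the layer-type names that appear in the .param file
net.register_custom_layer("LinearRegressionOutput", Noop_layer_creator);
net.register_custom_layer("Custom", Noop_layer_creator);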

@aidlearning
Owner

You must delete the layers after conv6_3, because the LinearRegressionOutput layer is only used for training and is not needed in the forward pass.

@aidlearning
Owner

I mean: open L106.param and delete the layers after conv6_3 in the file.

@aidlearning
Owner

If you like the project, please give it a star. Thank you!

@aidlearning
Owner

7767517
66 67
Input data 0 1 data
Convolution conv1 1 1 data conv1 0=32 1=3 11=3 5=1 6=864
BatchNorm bn1 1 1 conv1 bn1 0=32
PReLU prelu1 1 1 bn1 prelu1 0=32
ConvolutionDepthWise conv2_dw 1 1 prelu1 conv2_dw 0=32 1=2 11=2 5=1 6=128 7=32
BatchNorm bn2_dw 1 1 conv2_dw bn2_dw 0=32
PReLU prelu2_dw 1 1 bn2_dw prelu2_dw 0=32
Convolution conv2_sep 1 1 prelu2_dw conv2_sep 0=32 1=1 11=1 5=1 6=1024
BatchNorm bn2_sep 1 1 conv2_sep bn2_sep 0=32
PReLU prelu2_sep 1 1 bn2_sep prelu2_sep 0=32
ConvolutionDepthWise conv3_dw 1 1 prelu2_sep conv3_dw 0=32 1=3 11=3 3=2 13=2 5=1 6=288 7=32
BatchNorm bn3_dw 1 1 conv3_dw bn3_dw 0=32
PReLU prelu3_dw 1 1 bn3_dw prelu3_dw 0=32
Convolution conv3_sep 1 1 prelu3_dw conv3_sep 0=64 1=1 11=1 5=1 6=2048
BatchNorm bn3_sep 1 1 conv3_sep bn3_sep 0=64
PReLU prelu3_sep 1 1 bn3_sep prelu3_sep 0=64
ConvolutionDepthWise conv4_dw 1 1 prelu3_sep conv4_dw 0=64 1=2 11=2 5=1 6=256 7=64
BatchNorm bn4_dw 1 1 conv4_dw bn4_dw 0=64
PReLU prelu4_dw 1 1 bn4_dw prelu4_dw 0=64
Convolution conv4_sep 1 1 prelu4_dw conv4_sep 0=64 1=1 11=1 5=1 6=4096
BatchNorm bn4_sep 1 1 conv4_sep bn4_sep 0=64
PReLU prelu4_sep 1 1 bn4_sep prelu4_sep 0=64
ConvolutionDepthWise conv5_dw 1 1 prelu4_sep conv5_dw 0=64 1=3 11=3 3=2 13=2 5=1 6=576 7=64
BatchNorm bn5_dw 1 1 conv5_dw bn5_dw 0=64
PReLU prelu5_dw 1 1 bn5_dw prelu5_dw 0=64
Convolution conv5_sep 1 1 prelu5_dw conv5_sep 0=64 1=1 11=1 5=1 6=4096
BatchNorm bn5_sep 1 1 conv5_sep bn5_sep 0=64
PReLU prelu5_sep 1 1 bn5_sep prelu5_sep 0=64
ConvolutionDepthWise conv6_dw 1 1 prelu5_sep conv6_dw 0=64 1=2 11=2 5=1 6=256 7=64
BatchNorm bn6_dw 1 1 conv6_dw bn6_dw 0=64
PReLU prelu6_dw 1 1 bn6_dw prelu6_dw 0=64
Convolution conv6_sep 1 1 prelu6_dw conv6_sep 0=64 1=1 11=1 5=1 6=4096
BatchNorm bn6_sep 1 1 conv6_sep bn6_sep 0=64
PReLU prelu6_sep 1 1 bn6_sep prelu6_sep 0=64
ConvolutionDepthWise conv7_dw 1 1 prelu6_sep conv7_dw 0=64 1=3 11=3 3=2 13=2 5=1 6=576 7=64
BatchNorm bn7_dw 1 1 conv7_dw bn7_dw 0=64
PReLU prelu7_dw 1 1 bn7_dw prelu7_dw 0=64
Convolution conv7_sep 1 1 prelu7_dw conv7_sep 0=128 1=1 11=1 5=1 6=8192
BatchNorm bn7_sep 1 1 conv7_sep bn7_sep 0=128
PReLU prelu7_sep 1 1 bn7_sep prelu7_sep 0=128
ConvolutionDepthWise conv8_dw 1 1 prelu7_sep conv8_dw 0=128 1=2 11=2 5=1 6=512 7=128
BatchNorm bn8_dw 1 1 conv8_dw bn8_dw 0=128
PReLU prelu8_dw 1 1 bn8_dw prelu8_dw 0=128
Convolution conv8_sep 1 1 prelu8_dw conv8_sep 0=128 1=1 11=1 5=1 6=16384
BatchNorm bn8_sep 1 1 conv8_sep bn8_sep 0=128
PReLU prelu8_sep 1 1 bn8_sep prelu8_sep 0=128
ConvolutionDepthWise conv9_dw 1 1 prelu8_sep conv9_dw 0=128 1=3 11=3 3=2 13=2 5=1 6=1152 7=128
BatchNorm bn9_dw 1 1 conv9_dw bn9_dw 0=128
PReLU prelu9_dw 1 1 bn9_dw prelu9_dw 0=128
Convolution conv9_sep 1 1 prelu9_dw conv9_sep 0=256 1=1 11=1 5=1 6=32768
BatchNorm bn9_sep 1 1 conv9_sep bn9_sep 0=256
PReLU prelu9_sep 1 1 bn9_sep prelu9_sep 0=256
ConvolutionDepthWise conv10_dw 1 1 prelu9_sep conv10_dw 0=256 1=2 11=2 5=1 6=1024 7=256
BatchNorm bn10_dw 1 1 conv10_dw bn10_dw 0=256
PReLU prelu10_dw 1 1 bn10_dw prelu10_dw 0=256
Convolution conv10_sep 1 1 prelu10_dw conv10_sep 0=256 1=1 11=1 5=1 6=65536
BatchNorm bn10_sep 1 1 conv10_sep bn10_sep 0=256
PReLU prelu10_sep 1 1 bn10_sep prelu10_sep 0=256
ConvolutionDepthWise conv11_dw 1 1 prelu10_sep conv11_dw 0=256 1=3 11=3 5=1 6=2304 7=256
BatchNorm bn11_dw 1 1 conv11_dw bn11_dw 0=256
PReLU prelu11_dw 1 1 bn11_dw prelu11_dw 0=256
Convolution conv11_sep 1 1 prelu11_dw conv11_sep 0=256 1=1 11=1 5=1 6=65536
BatchNorm bn11_sep 1 1 conv11_sep bn11_sep 0=256
PReLU prelu11_sep 1 1 bn11_sep prelu11_sep 0=256
InnerProduct conv6_3 1 1 prelu11_sep conv6_3 0=212 1=1 2=54272
BatchNorm bn6_3

Like this!
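One detail worth double-checking when trimming by hand (an assumption based on the standard ncnn param format, not something stated in this thread) concerns the first two lines of the file:

7767517
66 67

Here 7767517 is the format magic number and "66 67" is layer_count blob_count; decrement both for every layer and blob you delete, otherwise load_param will misparse the file.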

@zys1994
Author

zys1994 commented May 22, 2019

The result is still incorrect. Are your means and scales [127.5, 127.5, 127.5] and [0.0078, 0.0078, 0.0078]?
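For what it's worth, those constants implement the usual (x - 127.5) / 128 normalization, since 0.0078125 = 1/128. A minimal standalone check (not code from this thread):

#include <cstdio>

int main() {
    const float mean = 127.5f;
    const float norm = 0.0078125f;               // == 1.0f / 128.0f
    // substract_mean_normalize computes (pixel - mean) * norm per channel,
    // mapping the input range [0, 255] to roughly [-1, 1]
    std::printf("%f\n", (0.0f - mean) * norm);   // -0.996094
    std::printf("%f\n", (255.0f - mean) * norm); //  0.996094
    return 0;
}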

@zys1994
Author

zys1994 commented May 22, 2019

Do you mind sharing your L106.bin with me, so I can check whether I converted the MXNet model to ncnn correctly? My email is zyslongjuanfeng@gmail.com.

@aidlearning
Owner

aidlearning commented May 22, 2019

Yes! Please give me a star.

@zys1994
Author

zys1994 commented May 22, 2019

I already gave you a star when I saw your excellent work!

@zys1994
Author

zys1994 commented May 22, 2019

I have not received the email. What could be the problem?

@aidlearning
Owner

aidlearning commented May 22, 2019

I sent it from postmaster@aidlearning.net. It may have been blocked by Gmail; I'll send it to you again right away!

@zys1994
Author

zys1994 commented May 22, 2019

Try 1158007644@qq.com, thanks.

@aidlearning
Owner

OK, I emailed you again. Please check it.

@zys1994
Author

zys1994 commented May 22, 2019

Thanks for your model. I tried it, but I still get an incorrect result.

@zys1994
Author

zys1994 commented May 22, 2019

It was some problem with Gmail; 1158007644@qq.com worked fine.

@aidlearning
Owner

My model is fine. We use it in our projects and it works well.

@zys1994
Author

zys1994 commented May 22, 2019

OK, I extracted "bn6_3" and got a good result. The SampleLnet106 code extracts "conv6_3", which led me in the wrong direction. Do you use "bn6_3"?
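For anyone landing here with the same symptom, the fix relative to the code at the top of the thread is a one-line change of the extract target (blob names as in the param dump above):

// take the batch-normalized landmark blob, not the raw InnerProduct output
ex.extract("bn6_3", out);    // instead of: ex.extract("conv6_3", out);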

@aidlearning
Owner

aidlearning commented May 22, 2019

Yes! I use bn6_3.

@Tomhouxin

A couple of questions, please:

  1. Before running the 106-landmark model, do you use MTCNN to detect the face?
  2. How did you convert the MTCNN model to ncnn?

@aidlearning
Owner

Yes, we use an MTCNN face model; ncnn can convert it directly.
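A sketch of the usual conversion step, assuming the mxnet2ncnn tool built from the ncnn repository (the file names here are placeholders, not the actual MTCNN model files):

./mxnet2ncnn det1-symbol.json det1-0000.params det1.param det1.bin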

@Tomhouxin

Is this MTCNN the one zuoqing trained with MXNet, then converted via mxnet2ncnn?

@qidiso
Collaborator

qidiso commented May 31, 2019 via email
