Wrong classification model output #29
Can you provide the model for debugging?
Model link: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing

Full C++ inference program:

```cpp
#include <algorithm>
#include <fstream>
#include <iostream>
#include <iterator>
#include <utility>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/ml.hpp>

#include "dirent.h"
#include "net.h"

using namespace cv::ml;
using namespace std;
using namespace cv;

/* Function is taken from one of NCNN examples */
static int print_topk(const std::vector<float>& cls_scores, int topk)
{
    // partial sort topk with index
    int size = cls_scores.size();
    std::vector<std::pair<float, int> > vec;
    vec.resize(size);
    for (int i = 0; i < size; i++)
    {
        vec[i] = std::make_pair(cls_scores[i], i);
    }
    std::partial_sort(vec.begin(), vec.begin() + topk, vec.end(),
                      std::greater<std::pair<float, int> >());

    // print topk and score
    for (int i = 0; i < topk; i++)
    {
        float score = vec[i].first;
        int index = vec[i].second;
        fprintf(stderr, "%d = %f\n", index, score);
    }
    return 0;
}

int main()
{
    // Read the image in cv::Mat (BGR) format
    Mat image = imread("eyes.jpg");

    // Convert the image from BGR to RGB
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);

    // Resize the image to 128x128
    Mat img;
    resize(image, img, Size(128, 128));

    // Convert the image to float and normalize by dividing by 255
    img.convertTo(img, CV_32F);
    img /= 255.0;

    // Load the ncnn model
    ncnn::Net smallCNN;
    smallCNN.load_param("cnn128.param");
    smallCNN.load_model("cnn128.bin");

    // Convert the cv::Mat image to ncnn::Mat
    ncnn::Mat in = ncnn::Mat::from_pixels(img.data, ncnn::Mat::PIXEL_RGB, img.cols, img.rows);

    // Run the ncnn model
    ncnn::Extractor ex = smallCNN.create_extractor();
    ex.input("conv2d_8_input_blob", in);
    ncnn::Mat out;
    ex.extract("dense_5_Softmax_blob", out);

    // Collect the class scores from the output blob
    std::vector<float> cls_scores;
    cls_scores.resize(out.w);
    for (int j = 0; j < out.w; j++)
    {
        cls_scores[j] = out[j];
        std::cout << "score: " << out[j] << std::endl;
    }
    print_topk(cls_scores, 3);
    return 0;
}
```
Thanks! Can you also provide the original Keras model for comparison?
Uploaded the Keras model to the same folder here: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing
Here is the keras2ncnn debug mode output (by feeding in random data). It seems like the network forward path is completely identical, so the problem may be in the pre-processing stage. Can you try to extract the conv2d_8_input and conv2d_8 layers and compare them to the image in the line pred = model.predict(image)[0]?
Something like this happened before... but I was just not able to find where the accuracy mismatched.
Yes, same for me too. But I will try to print the output of the first layer in both Keras and NCNN; then we might be able to solve it. Will let you know if I get any success! Thanks though.
Maybe you can also give me a test picture and the expected output.
The image is uploaded to the same drive link: https://drive.google.com/drive/folders/1--ZzTFtdDkgRxsezZfW65is5KVBfJh8I?usp=sharing Expected output: 0, where 0 is the label index.
Cool, let me have a look.
How did you print these values? If possible, can you please help me with the program that prints them?
It's keras2ncnn's built-in debug mode, run by: `python3 -mkeras2ncnn -i MODEL_FILE.h5 -d` However, it still contains a lot of bugs. I am thinking of using docker for it. You can refer to this code snippet for your program: keras2ncnn/keras2ncnn/keras_debugger.py Lines 32 to 51 in e3f90c4
and this for loading: keras2ncnn/keras2ncnn/keras_debugger.py Lines 308 to 309 in e3f90c4
One thing I noticed is that the input dimensions differ: in Keras the input is 4-D while in NCNN it is 3-D. Do you know how to convert the NCNN image to 4-D?
While using the debugger I am getting this error:

```
python3 -mkeras2ncnn -i cnn128.h5 -d
ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged']))
```
ncnn does not have 4-D tensors or a batch dim. The Keras input is 4-D because of the batch dimension; you don't need that in ncnn. So no worries, it's correct.
Can you paste the full output log? It does not seem like an issue from the debugger.
I converted the model from Keras to NCNN. The pre-processing used in Keras (Python) is as follows
I tried to replicate the same steps in C++ for the NCNN model.
When I run the code, I am getting this value as output,
where 8 is the label and 1.000000 is the probability. The correct label is 0, not 8.
Can anyone please help me understand what went wrong in the input processing?