
Trying to understand output of 25 head pose estimator #9

Closed
vpenades opened this issue Jun 8, 2020 · 4 comments

vpenades commented Jun 8, 2020

Hi again!

I am trying to use the 25_head_pose_estimation model; I've been able to load and run the frozen_inference_graph.pb model, but I don't know how to decode the tensor output.

The model takes a 128x128-pixel image as input and outputs an array of 136 float values.

The Python code seems to imply there are several output tensors (scores, boxes, etc.), but I only see this single 136-float output tensor.

So, how do I decode it?
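(For reference: a 136-float output is what a 68-point facial-landmark model produces, 68 points × 2 coordinates. Assuming that layout, and assuming coordinates normalized to the 128x128 input crop — neither of which is confirmed by the repo's docs — the vector could be decoded like this:)

```python
import numpy as np

def decode_landmarks(output, input_size=128):
    """Decode a flat 136-float tensor into 68 (x, y) landmark points.

    Assumption (not confirmed by the repo): the model follows the common
    68-point landmark convention, with coordinates normalized to [0, 1]
    relative to the 128x128 input crop.
    """
    pts = np.asarray(output, dtype=np.float32).reshape(68, 2)
    return pts * input_size  # scale back to pixel coordinates

# Example with a dummy output vector:
dummy = np.full(136, 0.5, dtype=np.float32)
points = decode_landmarks(dummy)
print(points.shape)  # (68, 2)
```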

As a side note: I am looking for a fast way of detecting faces in an image at distances of up to 5 metres, so BlazeFace is not suitable for me. I don't need landmarks or any other kind of information, just face rectangles. Given that you have a collection of face detection models, which one do you think is best for that task?

Thanks in advance!


PINTO0309 commented Jun 8, 2020


vpenades commented Jun 8, 2020

Okay, I can see now that I was trying to use the Pose_Estimator in place of the Face_Detector.

Now I have tried to use the face detection models with Emgu.TF (C#), and I am running into these problems:

If I load ssdlite_mobilenet_v2_face_300_integer_quant_with_postprocess.tflite in Emgu.TF.Lite, it seems to pick the wrong input tensor: instead of pointing to normalized_input_image_tensor, it points to BoxPredictor_4/BoxEncodingPredictor/weights, which is not usable as an input.
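(One workaround, sketched here in Python with the tf.lite.Interpreter conventions since the same tensor metadata is exposed by most bindings: look the input tensor up by name instead of trusting index 0. The `details` values below are hypothetical stand-ins for what `get_tensor_details()` would return for this model.)

```python
def find_tensor_index(tensor_details, name):
    """Return the index of the tensor whose name matches `name`.

    `tensor_details` has the shape returned by the Python
    tf.lite.Interpreter.get_tensor_details() API: a list of
    dicts with at least 'index' and 'name' keys.
    """
    for d in tensor_details:
        if d["name"] == name:
            return d["index"]
    raise KeyError(f"no tensor named {name!r}")

# Hypothetical stand-in for interpreter.get_tensor_details():
details = [
    {"index": 0, "name": "BoxPredictor_4/BoxEncodingPredictor/weights"},
    {"index": 175, "name": "normalized_input_image_tensor"},
]
idx = find_tensor_index(details, "normalized_input_image_tensor")
print(idx)  # 175
```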

Then I tried to use the PB model at 25_head_pose_estimation\01_float32\01_face_detector, but when I try to load it, it gives me this error:

Op type not registered 'TFLite_Detection_PostProcess' in binary running on DESKTOP-XXXXXXXX. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.)

Also, keep in mind that my knowledge of Python is very limited; I'm trying to use some of your models from C#.


PINTO0309 commented Jun 8, 2020

TFLite_Detection_PostProcess is a custom op for TensorFlow Lite, so derived builds of TensorFlow Lite (such as third-party bindings) probably can't run it. If you don't need the custom op, use the Object Detection API to train, and generate the .pb file without specifying "--add_postprocessing_op=True":

https://github.com/tensorflow/models/tree/master/research/object_detection

$ python3 object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_prefix=model.ckpt-44548 \
    --output_directory=export #\
#    --add_postprocessing_op=True

https://qiita.com/PINTO/items/865250ee23a15339d556#4-2-7-1-generating-a-pb-file-with-post-process
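(Note that without the postprocess op the graph emits raw box encodings and class scores, so score filtering and non-max suppression must be done by the caller. A minimal greedy-NMS sketch — assuming boxes already decoded against the anchors to [ymin, xmin, ymax, xmax]; this approximates, not reproduces, what the op does internally:)

```python
import numpy as np

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Greedy non-max suppression over [ymin, xmin, ymax, xmax] boxes."""
    keep_mask = scores >= score_thresh
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against the remaining candidates
        yx1 = np.maximum(boxes[i, :2], boxes[order[1:], :2])
        yx2 = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
        inter = np.prod(np.clip(yx2 - yx1, 0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        areas = np.prod(boxes[order[1:], 2:] - boxes[order[1:], :2], axis=1)
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]  # drop overlapping boxes
    return boxes[keep], scores[keep]

# Two overlapping detections plus one distinct one:
b = np.array([[0.10, 0.10, 0.40, 0.40],
              [0.12, 0.12, 0.42, 0.42],
              [0.60, 0.60, 0.90, 0.90]])
s = np.array([0.9, 0.8, 0.7])
kept_boxes, kept_scores = nms(b, s)
print(len(kept_boxes))  # 2
```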

PINTO0309 commented

Closed due to lack of progress.
