Object Detection Bounding Box gone wrong #5772

Closed
arafattehsin opened this issue Apr 28, 2021 · 6 comments

Comments


arafattehsin commented Apr 28, 2021

System information

  • Windows 10
  • .NET Core 3.1

Issue

  • I created an object detection model with fantastic accuracy
  • Everything works well. However, when I try to map the bounding box coordinates, the coordinates returned by the model appear to be incorrect (or maybe I am completely wrong and just need to learn how to make it work)
  • The bounding box should have been drawn correctly around the detected items.

Source code / logs

This is the Preview I get (before training)

[screenshot]

This is what I get after training.

[screenshot]

This is what I get for a different image (after training).

[screenshot]

Note: I know they are delicious. 🍩

However, after passing the same image to the model, I get coordinates that are mapped like this:

[screenshot: trifle-5]

Nothing fancy in terms of code:

    // using System.Drawing; (System.Drawing.Common on .NET Core)
    var boundingBoxes = predictionResult.BoundingBoxes;

    // read the image file once
    using (Image oldImg = Image.FromFile(destinationPath))
    using (Graphics g = Graphics.FromImage(oldImg))
    using (Pen redPen = new Pen(Color.Red, 3))
    {
        foreach (BoundingBox box in boundingBoxes)
        {
            // draw a rectangle from the predicted coordinates
            Rectangle rectangle = Rectangle.FromLTRB(
                Convert.ToInt32(box.Left),
                Convert.ToInt32(box.Top),
                Convert.ToInt32(box.Right),
                Convert.ToInt32(box.Bottom));

            g.DrawRectangle(redPen, rectangle);
        }

        // ... (save / display the annotated image)
    }

Please tell me where I am going wrong, or point me to a correct solution. Thanks! 🙏

@JakeRadMSFT (Contributor) commented:

Hello!
It looks like you're using Model Builder. Would you be up for trying out our preview build? I believe this issue is resolved there and a public release is coming soon.

@arafattehsin (Author) commented:

Hey @JakeRadMSFT, thanks for your response. Has there been a recent update to Model Builder? I can try it in an hour and get back to you.

@arafattehsin (Author) commented:

Hey @JakeRadMSFT - I just checked, and I have the latest version:

[screenshot]

It matches the latest atom.xml:

[screenshot]

May I know if I am doing anything wrong?

@LittleLittleCloud (Contributor) commented:

Hi @arafattehsin

The coordinate values in BoundingBox are normalized to an 800 (width) × 600 (height) image, because in the training pipeline all images are resized to 800 × 600. So you will need to re-map the BoundingBox coordinate values so they match your custom image size. Specifically, divide BoundingBox.Left and BoundingBox.Right by 800 and multiply by the width of your image, and divide BoundingBox.Top and BoundingBox.Bottom by 600 and multiply by the height of your image.
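
For reference, here is a minimal sketch of that re-mapping in C#, assuming the BoundingBox type and System.Drawing usage from the code above (the helper name RescaleBoundingBox is illustrative, not part of Model Builder's generated code):

    // Re-map a bounding box from the 800x600 space the model predicts in
    // to the dimensions of the original image.
    static Rectangle RescaleBoundingBox(BoundingBox box, int imageWidth, int imageHeight)
    {
        // Scale horizontally by imageWidth / 800 and vertically by imageHeight / 600.
        float left   = box.Left   / 800f * imageWidth;
        float right  = box.Right  / 800f * imageWidth;
        float top    = box.Top    / 600f * imageHeight;
        float bottom = box.Bottom / 600f * imageHeight;

        return Rectangle.FromLTRB(
            Convert.ToInt32(left),
            Convert.ToInt32(top),
            Convert.ToInt32(right),
            Convert.ToInt32(bottom));
    }

With that helper, the drawing call above would become something like g.DrawRectangle(redPen, RescaleBoundingBox(box, oldImg.Width, oldImg.Height));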

@LittleLittleCloud (Contributor) commented:

@JakeRadMSFT @briacht
It looks like the normalized bounding boxes aren't documented anywhere, so maybe we should add a comment to the output class?

@arafattehsin (Author) commented:

Hey @LittleLittleCloud - you're the star. It works fine. Thanks a lot!
