They're actually completely different models, so I'm not surprised to see that the results don't line up perfectly. The online demo model was trained several years ago, and the model weights released alongside the project were obtained via fine-tuning with data collected from the online demo.
If the returned detection clips the extremities of your character, it can produce very bad segmentation maps. If you want to be certain to avoid this, I'd recommend padding the detected bounding box (e.g. add 15 pixels to each side) and using the padded box in the downstream segmentation and pose estimation steps.
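The padding suggestion above can be sketched as a small helper. This is a minimal illustration, not code from the repo: the `(x1, y1, x2, y2)` box format, the function name, and the clamping to image bounds are assumptions.

```python
def pad_bbox(bbox, pad=15, img_w=None, img_h=None):
    """Expand a detected bounding box by `pad` pixels on each side.

    bbox: (x1, y1, x2, y2) in pixel coordinates (assumed format).
    img_w, img_h: optional image dimensions used to clamp the
    padded box so it stays inside the image.
    """
    x1, y1, x2, y2 = bbox
    x1, y1 = x1 - pad, y1 - pad
    x2, y2 = x2 + pad, y2 + pad
    if img_w is not None:
        x1, x2 = max(0, x1), min(img_w, x2)
    if img_h is not None:
        y1, y2 = max(0, y1), min(img_h, y2)
    return (x1, y1, x2, y2)


# Example: a detection near the right edge of a 110x200 image.
print(pad_bbox((20, 30, 100, 120), pad=15, img_w=110, img_h=200))
# → (5, 15, 110, 135)
```

The clamp matters near image borders: without it, the padded box can index outside the image and crash (or silently wrap) in the cropping step.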
I ran the same image through `image_to_annotations.py` and through the web demo, but the result from the web demo is bigger and looks better. Did I miss something, or does the `/predictions/drawn_humanoid_detector` API require extra args?

- `image_to_annotations.py`
- Demo (https://sketch.metademolab.com)