Some confusion about GaitEdge forward #92
Thanks for your interest.
Thanks for your reply! Then during deployment, there is no need to supervise the training of the segmentation network, so is inputting only the RGB image and ratio enough?
GaitEdge's model does not include a detection process.
Sorry, I mean:
The synthetic silhouette is composed of the binary interior (untrainable) and float edge (trainable), where the former comes from the silhouette dataset during the inference stage.
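For illustration, here is a minimal sketch of that composition. The function name, the use of `scipy.ndimage`, and the one-pixel edge width are my own assumptions for demonstration, not the repository's actual code:

```python
import numpy as np
from scipy import ndimage

def synthesize_silhouette(silhouette, seg_prob, edge_width=1):
    """Compose a synthetic silhouette from a dataset silhouette and the
    segmentation network's probability map.

    silhouette: binary (H, W) mask from the silhouette dataset
    seg_prob:   float (H, W) probability map from the segmentation U-Net
    """
    struct = ndimage.generate_binary_structure(2, 2)  # 8-connectivity
    interior = ndimage.binary_erosion(silhouette, struct, iterations=edge_width)
    dilated = ndimage.binary_dilation(silhouette, struct, iterations=edge_width)
    edge = dilated & ~interior  # band around the boundary
    # The binary interior stays fixed (untrainable); only the edge band
    # carries the float segmentation probabilities, so gradients flow
    # through the edge alone.
    return interior.astype(np.float32) + edge * seg_prob

# toy example: a 4x4 square silhouette in a 6x6 frame
sil = np.zeros((6, 6), dtype=bool)
sil[1:5, 1:5] = True
prob = np.full((6, 6), 0.5, dtype=np.float32)
synth = synthesize_silhouette(sil, prob)
```

Here the interior pixels come out as exactly 1.0 while the edge band takes the segmentation probability (0.5 in this toy case), matching the binary-interior/float-edge split described above.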
But if I want to put it into practical use, I can only get RGB images from VideoCapture, or else I would have to run segmentation again.
If you want to use it for practical purposes, then you need a detection model to get the bbox of the human body, and another trained segmentation model to get the input silhouette.
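A hedged sketch of such a deployment pipeline follows. The function names `detect_person` and `segment_person` are placeholders I made up (they stand in for a real detector and a trained segmentation model), and I am assuming the ratio means the bounding-box aspect ratio used for alignment:

```python
import numpy as np

def detect_person(frame):
    """Placeholder for a real person detector; returns one bbox
    as (x, y, w, h). Here: bbox of the non-black pixels (toy logic)."""
    ys, xs = np.nonzero(frame.sum(axis=2) > 0)
    x, y = xs.min(), ys.min()
    return x, y, xs.max() - x + 1, ys.max() - y + 1

def segment_person(crop):
    """Placeholder for a trained segmentation model; returns a float
    probability map the same size as the crop (toy threshold here)."""
    return (crop.sum(axis=2) > 0).astype(np.float32)

def prepare_gaitedge_input(frame):
    """Detect -> crop -> segment, producing the silhouette probability
    map and the bbox aspect ratio (assumed meaning of 'ratio')."""
    x, y, w, h = detect_person(frame)
    crop = frame[y:y + h, x:x + w]
    prob = segment_person(crop)
    ratio = h / w
    return prob, ratio

# toy frame: a bright 40x20 blob standing in for a person
frame = np.zeros((64, 48, 3), dtype=np.uint8)
frame[10:50, 15:35] = 255
prob, ratio = prepare_gaitedge_input(frame)
```

The point is only the data flow: the RGB frame alone is not enough, because both a detector (for the bbox and its ratio) and a segmentation model (for the silhouette input) sit in front of GaitEdge.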
The edge is extracted by conducting erosion and dilation operations on the silhouette, meaning we need to segment the RGB image first and then feed it into GaitEdge.
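Concretely, the edge band can be obtained as the difference between a dilated and an eroded silhouette (a morphological gradient). Below is a minimal numpy-only sketch using one-pixel, 4-connected morphology; the actual code may use a different structuring element or edge width:

```python
import numpy as np

def dilate(mask):
    """One-pixel binary dilation with a 4-connected neighbourhood."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Erosion as the dual of dilation."""
    return ~dilate(~mask)

def edge_band(silhouette):
    """Edge = dilation minus erosion of the silhouette."""
    return dilate(silhouette) & ~erode(silhouette)

sil = np.zeros((7, 7), dtype=bool)
sil[2:5, 2:5] = True          # 3x3 square silhouette
edge = edge_band(sil)         # ring around the square's boundary
```

For this 3x3 square, erosion leaves only the single centre pixel, so the edge band is everything the dilation reaches except that centre, which is exactly the boundary ring the float edge is trained on.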
Thanks for your reply!
During inference, why does GaitEdge still need the silhouette as input? Can't it get the silhouette from the output of the segmentation U-Net? Also, I don't understand why it needs ratios as input. Isn't the end-to-end network supposed to take just an RGB image as input?