This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

Classification stage #15

Open
miguelml99 opened this issue Jul 9, 2021 · 0 comments

I am using this hand tracking samples project (actually a version forked for the Intel RealSense D400) for my Bachelor's thesis, but I have run into a problem.
My idea was to build a hand gesture recognition system that can tell which gesture it is receiving as input. I was hoping the output could provide a label with the gesture's name (or at least the name of the dataset it belongs to), just as in the "dsamples" project, but using the hand tracking system instead.

However, from what I have seen in your hand-tracking project, the output of the classification layer is a series of values mainly describing finger angles and hand orientation.

Is there a way the system could be trained so that the output of the classification stage provides the label of the gesture dataset (as if we wanted to find out which gesture category each input belongs to)? Maybe there is something I am missing and there is actually a way of doing it.

I have already generated several datasets covering different hand poses using realtime-annotator.cpp, and I have also tried training the CNN on those datasets simultaneously (with train-cnn.cpp); however, I haven't yet found a way to extract those dataset labels from the depth-image input of a hand gesture.

I would really appreciate any help on this topic; I'm stuck at this step and have been working on it for several months now.

Thanks in advance
