[Feature] Gesture recognition algorithm MTUT on NVGesture dataset #1380
Conversation
Codecov Report
Coverage diff, master vs. #1380:

|          | master | #1380  | +/-    |
|----------|--------|--------|--------|
| Coverage | 83.21% | 84.35% | +1.14% |
| Files    | 225    | 232    | +7     |
| Lines    | 19006  | 19308  | +302   |
| Branches | 3395   | 3472   | +77    |
| Hits     | 15815  | 16287  | +472   |
| Misses   | 2359   | 2154   | -205   |
| Partials | 832    | 867    | +35    |
Force-pushed from 0736019 to 8eb6198.
model = dict(
    type='GestureRecognizer',
    modality=['rgb'],
Is this config for RGB only? If so, the config file path `gesture_sview_rgbd_vid` is a little bit confusing.
Yes. As stated in /configs/hand/gesture_sview_rgbd_vid/mtut/nvgesture/i3d_nvgesture.md, MTUT supports multi-modal training and uni-modal inference. This config is mainly used for the demo and webcam tools with RGB videos.
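For reference, an RGB-only recognizer config along these lines might look like the following minimal sketch. Only `type='GestureRecognizer'` and `modality=['rgb']` come from the diff in this thread; every other field name and value is an illustrative placeholder, not taken from the actual config file:

```python
# Hypothetical, trimmed-down sketch of an RGB-only GestureRecognizer
# config. Only `type` and `modality` appear in the reviewed diff;
# the remaining fields are illustrative placeholders.
model = dict(
    type='GestureRecognizer',
    modality=['rgb'],  # uni-modal inference; MTUT training may also use depth
    backbone=dict(rgb=dict(type='I3D')),  # placeholder backbone choice
    cls_head=dict(type='MTUTHead', num_classes=25),  # placeholder head
)
```

Because MTUT decouples the training and inference modalities, a webcam/demo config like this can drop the depth branch entirely and keep only the RGB path.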
        dict: Evaluation results for evaluation metric.
    """
    metrics = metric if isinstance(metric, list) else [metric]
    allowed_metrics = ['AP']
AP or mAP?
AP. Is it less confusing if I change 'mAP' to 'AP_mean'?
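One way to make the naming unambiguous is to report the per-class AP values alongside their arithmetic mean under an explicit `AP_mean` key, so the summary value cannot be mistaken for a COCO-style mAP. A hedged sketch, since the actual evaluation code is not shown in this thread and the function and key names here are illustrative:

```python
import numpy as np

def summarize_ap(per_class_ap):
    """Collect per-class AP values plus their mean under an 'AP_mean' key.

    `per_class_ap` maps class name -> AP. The explicit 'AP_mean' key
    signals that the summary value is a plain average of per-class APs,
    not a COCO-style mAP over IoU thresholds. (Illustrative sketch only.)
    """
    results = {f'AP_{name}': ap for name, ap in per_class_ap.items()}
    results['AP_mean'] = float(np.mean(list(per_class_ap.values())))
    return results

summarize_ap({'thumb_up': 0.9, 'swipe_left': 0.7})
# AP_mean here is the arithmetic mean of the per-class values, i.e. 0.8
```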
Looks good to me in general. Unittests are required before merging.
@Ben-Louis CI failed.
I see. The unit test fails because of an incompatibility of …
…en-mmlab#1380)

* add nvgesture dataset
* fix nvgesture pipelines
* update gesture datasets
* add ModelSetEpochHook
* nvgesture dataset supports multi-GPU evaluation
* add i3d+mtut model
* add nvgesture i3d configs
* webcam: add hand detector
* gesture recognition with bbox
* add hand detector config
* fix gesture recognizer init bug
* webcam/gesture - recognizer runs successfully
* delete unnecessary comment
* fix lint error in gesture configs
* add nvgesture category info
* webcam/gesture - display gesture recognition result
* add gesture recognition related docs
* update gesture related comments
* update light hand det model in demo doc
* update gesture recognition configs and results
* auto modify model-index.yml
* stabilize ssa loss in mtut
* add multi-input node comment
* synchronize tools/webcam with master
* add gesture task-name mapping
* move gesture configs to configs/hand/
* fix a bug in demo (open-mmlab#1373)
* solve conflict in mmdet_modelzoo.md
* add gesture recognition into webcam
* hand gesture inference config explanation
* add gesture recognizer node in __init__.py
* add gesture webcam readme
* adjust inference tracking min keypoints (open-mmlab#1398)
* special case for min_keypoints <= 0 doesn't seem to be required
* remove unnecessary transformer utils (open-mmlab#1405)
* fix grammar errors in docs
* fix a lint error in doc
* update nvgesture evaluation
* add introduction and assertion to TemporalPooling
* generalize NVGestureRandomFlip
* add unittests for gesture pipelines
* delete duplicated config
* add gesture inference unittest
* add gesture dataset unittest
* fix gesture inference unittest error
* add backbone I3D unittest
* add mtut head unittest
* fix mtut head unittest error
* add gesture recognizer unittest

Co-authored-by: Yining Li <liyining0712@gmail.com>
Co-authored-by: Philipp Allgeuer <5592992+pallgeuer@users.noreply.github.com>
Motivation
Add a hand gesture recognition function to mmpose.
Modification
BC-breaking (Optional)
Use cases (Optional)
Checklist
Before PR:
After PR: