
about one paper for citation #43

Closed
MengHao666 opened this issue Sep 22, 2020 · 3 comments

Comments

@MengHao666
Contributor

Hi, thanks for making such a repo.
I have one question:
Why do you mark "HOT-Net: Non-Autoregressive Transformer for 3D Hand-Object Pose Estimation" as an MM'20 paper? I could not find its BibTeX citation on Google Scholar.
Could you explain? Thanks a lot.

@Janus-Shiau
Collaborator

Here is the main-track paper list of MM'2020. You can find HOT-Net in the list.
https://2020.acmmm.org/main-track-list.html

@MengHao666
Contributor Author


Hi, thanks for the fast reply.
Do you know how to cite it in BibTeX format? I cannot find one.

@Janus-Shiau
Collaborator

FYI.

@inproceedings{10.1145/3394171.3413555,
author = {Wu, Zhenyu and Hoang, Duc and Lin, Shih-Yao and Xie, Yusheng and Chen, Liangjian and Lin, Yen-Yu and Wang, Zhangyang and Fan, Wei},
title = {MM-Hand: 3D-Aware Multi-Modal Guided Hand Generation for 3D Hand Pose Synthesis},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413555},
doi = {10.1145/3394171.3413555},
abstract = {Estimating the 3D hand pose from a monocular RGB image is important but challenging. A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations. However, it is too expensive in practice. Instead, we develop a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images under the guidance of 3D pose information. We propose a 3D-aware multi-modal guided hand generative network (MM-Hand), together with a novel geometry-based curriculum learning strategy. Our extensive experimental results demonstrate that the 3D-annotated images generated by MM-Hand qualitatively and quantitatively outperform existing options. Moreover, the augmented data can consistently improve the quantitative performance of the state-of-the-art 3D hand pose estimators on two benchmark datasets. The code will be available at https://github.com/ScottHoang/mm-hand.},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {2508–2516},
numpages = {9},
keywords = {curriculum learning, 3d hand-pose, multi-modal, conditional generative adversarial nets},
location = {Seattle, WA, USA},
series = {MM '20}
}
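
A minimal sketch of how such an entry could then be cited from a LaTeX document, assuming the entry above is saved in a hypothetical references.bib next to the .tex file:

% hypothetical main.tex; assumes the @inproceedings entry above is stored in references.bib
\documentclass{article}
\begin{document}
Hand images can be synthesized for pose estimation~\cite{10.1145/3394171.3413555}.
\bibliographystyle{plain}   % any installed .bst works, e.g. ACM-Reference-Format from acmart
\bibliography{references}   % references.bib holds the BibTeX entry
\end{document}

Running pdflatex, then bibtex, then pdflatex twice resolves the citation.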
