In recent years, we have worked on Sign Language Recognition and proposed several Deep Learning-based models. We first proposed a generative model, based on the Restricted Boltzmann Machine (RBM), for static sign language recognition. Details of the proposed model can be found here:
- Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine (https://www.mdpi.com/1099-4300/20/11/809)
After that, we moved to word-level signs in video. To cope with the different challenges of video processing, we proposed the following models:
- Video-based isolated hand sign language recognition using a deep cascaded model (https://link.springer.com/article/10.1007%2Fs11042-020-09048-5)
- Hand sign language recognition using multi-view hand skeleton (https://www.sciencedirect.com/science/article/abs/pii/S0957417420301615)
- Hand pose aware multimodal isolated sign language recognition (https://link.springer.com/article/10.1007/s11042-020-09700-0)
- Real-time isolated hand sign language recognition using deep networks and SVD (https://link.springer.com/article/10.1007/s12652-021-02920-8)
Furthermore, we presented a taxonomy that categorizes models proposed for isolated and continuous sign language recognition, discussing applications, datasets, hybrid models, complexity, and future lines of research in the field:
- Sign language recognition: A deep survey (https://www.sciencedirect.com/science/article/abs/pii/S095741742030614X)
Now, to enable mutual communication between hearing and hearing-impaired people, we are working on Sign Language Production (SLP), which performs the reverse function of Sign Language Recognition. To this end, we presented a survey that briefly summarizes recent achievements in SLP, discussing their advantages, limitations, and future research directions:
- Sign Language Production: A Review (https://arxiv.org/abs/2103.15910)
For any comments or questions, please feel free to contact me (razirastgoo@gmail.com).