
Hi there 👋


I am a post-doctoral researcher at the Computer Research Institute of Montreal (CRIM). I completed my PhD in artificial intelligence (focused on computer vision and affective computing) in 2023 at the LIVIA lab, ETS Montreal, Canada, under the supervision of Prof. Eric Granger and Prof. Patrick Cardinal. In my thesis, I developed weakly supervised learning (multiple instance learning) models for facial expression recognition in videos and novel attention models for audio-visual fusion in dimensional emotion recognition.

Before my PhD, I had 5 years of industrial research experience in computer vision, working for large companies as well as start-ups, including Samsung Research India, Synechron India, and upGradCampus India. I also had the privilege of working with Prof. R. Venkatesh Babu at the Indian Institute of Science, Bangalore, on crowd flow analysis in videos. I completed my Master's at the Indian Institute of Technology Guwahati.

I'm interested in computer vision, affective computing, deep learning, and multimodal video understanding models. Most of my research revolves around video analytics, weakly supervised learning, facial behavior analysis, and audio-visual fusion.

Praveen's GitHub stats

Connect with me

Popular repositories

  1. JointCrossAttentional-AV-Fusion

     ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition

     Python · 31 stars · 9 forks

  2. Cross-Attentional-AV-Fusion

     FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition

     Python · 23 stars · 4 forks

  3. Joint-Cross-Attention-for-Audio-Visual-Fusion

     IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"

     Python · 23 stars · 4 forks

  4. RecurrentJointAttentionwithLSTMs

     ICASSP 2023: "Recursive Joint Attention for Audio-Visual Fusion in Regression Based Emotion Recognition"

     Python · 7 stars

  5. WSDAOR

     IVC: Deep domain adaptation with ordinal regression for pain assessment using weakly-labeled videos

     Python · 5 stars · 2 forks

  6. action-recognition-pytorch (forked from IBM/action-recognition-pytorch)

     This is the pytorch implementation of some representative action recognition approaches including I3D, S3D, TSN and TAM.

     Python · 2 stars · 1 fork