EmotionRecognition

An emotion recognition model for speech is a computational system that examines the acoustic characteristics and linguistic content of spoken language to determine the speaker's emotional state, such as happiness, sadness, anger, or neutrality.
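To make the acoustic side of this concrete, here is a minimal, self-contained sketch of the idea: extract simple acoustic features (short-time energy and zero-crossing rate) from a frame of audio samples and assign the nearest emotion centroid. The feature choice, centroid values, and emotion labels are illustrative assumptions, not the project's actual model, which would use richer features and a trained classifier.

```python
# Illustrative sketch: acoustic features -> nearest-centroid emotion label.
# The centroid values below are hypothetical, chosen only for demonstration.

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose sign differs."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / max(len(samples) - 1, 1)

def short_time_energy(samples):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in samples) / max(len(samples), 1)

def extract_features(samples):
    return (short_time_energy(samples), zero_crossing_rate(samples))

def classify(features, centroids):
    """Return the emotion whose (energy, zcr) centroid is closest."""
    energy, zcr = features
    def dist(label):
        c_energy, c_zcr = centroids[label]
        return (energy - c_energy) ** 2 + (zcr - c_zcr) ** 2
    return min(centroids, key=dist)

# Toy centroids standing in for values a real model would learn from data.
CENTROIDS = {"angry": (0.5, 0.30), "neutral": (0.05, 0.05)}

loud_frame = [0.8, -0.7, 0.9, -0.8, 0.7, -0.9]   # high energy, rapid sign flips
quiet_frame = [0.01, 0.02, 0.01]                  # low energy, no sign flips
print(classify(extract_features(loud_frame), CENTROIDS))   # angry
print(classify(extract_features(quiet_frame), CENTROIDS))  # neutral
```

In practice such hand-set thresholds would be replaced by features like MFCCs and a classifier fitted to labelled recordings, but the pipeline shape (features in, emotion label out) is the same.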

This project aims to develop a machine learning model that can identify emotions in everyday communication between people. Personalisation is now expected in almost everything we interact with daily.

So why not have an emotion detector that assesses your feelings and tailors future recommendations to your mood? Several sectors could utilise this to provide a variety of services. For example, a marketing organisation might recommend products based on how you are feeling, and the automotive industry might detect a driver's emotional state and adjust the speed of an autonomous vehicle to help prevent crashes.

The dataset contains recordings from 24 actors, each contributing multiple recordings. The recordings are waveform (WAV) audio files distributed across 4 dataset archives. An additional test file is included so the model can be tested manually to determine the emotion of a new speaker.
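A sketch of how WAV recordings like these can be loaded and grouped per actor using only the Python standard library. The filename convention (`actor01_take1.wav`) is a hypothetical placeholder; the real archives may encode actor and emotion differently, so the parsing line would need adapting.

```python
# Sketch: load 16-bit mono PCM WAV files and group them by actor id.
# The "actorNN_takeM.wav" naming scheme is assumed for illustration.
import os
import struct
import tempfile
import wave

def write_test_wav(path, samples, rate=16000):
    """Write 16-bit mono PCM samples (ints in [-32768, 32767]) to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

def load_wav(path):
    """Read a 16-bit mono WAV file, returning samples normalised to [-1, 1]."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return [s / 32768.0 for s in ints]

def load_dataset(root):
    """Map each actor id to a list of its recordings (one list per WAV file)."""
    recordings = {}
    for name in sorted(os.listdir(root)):
        if name.endswith(".wav"):
            actor = name.split("_")[0]  # e.g. "actor01" under the assumed scheme
            recordings.setdefault(actor, []).append(
                load_wav(os.path.join(root, name))
            )
    return recordings

# Demonstrate the round trip on a tiny synthetic recording.
with tempfile.TemporaryDirectory() as d:
    write_test_wav(os.path.join(d, "actor01_take1.wav"), [0, 1000, -1000, 0])
    data = load_dataset(d)
    print(sorted(data))             # ['actor01']
    print(len(data["actor01"][0]))  # 4 samples recovered
```

Grouping by actor makes it easy to hold out entire actors for evaluation, which avoids the model memorising individual voices instead of learning emotion cues.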
