- [VA]: Voice Assistant
- [HE]: Hi Emotion
- [MC]: MicCentral
- Added maintenance-related files.
- Applied the Aimybox framework (a framework for voice assistants).
- Added the Kaldi plugin for the voice trigger, but it is not working yet.
- Made the voice assistant speak first on activity start.
- Added NotificationListenerService for reading notifications.
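A minimal sketch of such a listener, assuming the standard Android `NotificationListenerService` API; the class name and logging are placeholders, not taken from the project. The service also has to be declared in the manifest with the `BIND_NOTIFICATION_LISTENER_SERVICE` permission and granted notification access by the user.

```kotlin
import android.app.Notification
import android.service.notification.NotificationListenerService
import android.service.notification.StatusBarNotification
import android.util.Log

// Hypothetical listener; the project's actual class may differ.
class NotificationListener : NotificationListenerService() {

    override fun onNotificationPosted(sbn: StatusBarNotification) {
        // Read the title/text of the incoming notification.
        val extras = sbn.notification.extras
        val title = extras.getCharSequence(Notification.EXTRA_TITLE)
        val text = extras.getCharSequence(Notification.EXTRA_TEXT)
        Log.d("NotificationListener", "From ${sbn.packageName}: $title - $text")
    }

    override fun onNotificationRemoved(sbn: StatusBarNotification) {
        Log.d("NotificationListener", "Removed: ${sbn.packageName}")
    }
}
```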
- Added utility functions for pushing notifications.
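A hedged sketch of what such a utility could look like, assuming AndroidX `NotificationCompat`; the channel id, icon, and notification id are placeholders.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat

// Hypothetical helper; channel id, icon, and notification id are placeholders.
fun pushNotification(context: Context, title: String, text: String, id: Int = 1) {
    val manager = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager

    // Channels are required on Android 8.0+.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val channel = NotificationChannel("va_channel", "VA", NotificationManager.IMPORTANCE_DEFAULT)
        manager.createNotificationChannel(channel)
    }

    val notification = NotificationCompat.Builder(context, "va_channel")
        .setSmallIcon(android.R.drawable.ic_dialog_info)
        .setContentTitle(title)
        .setContentText(text)
        .build()
    manager.notify(id, notification)
}
```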
- Moved notification-reading code from MainActivity to NotificationListener.
- Implemented a new VA without the Aimybox framework. It now uses SpeechRecognizer and TextToSpeech.
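A minimal sketch of wiring the two platform APIs together, assuming a single activity; the activity name, greeting text, and listener wiring are assumptions rather than the project's actual code.

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech
import androidx.appcompat.app.AppCompatActivity
import java.util.Locale

// Hypothetical activity; the project's actual wiring may differ.
class VaActivity : AppCompatActivity(), TextToSpeech.OnInitListener {

    private lateinit var tts: TextToSpeech
    private lateinit var recognizer: SpeechRecognizer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        tts = TextToSpeech(this, this)
        recognizer = SpeechRecognizer.createSpeechRecognizer(this)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                val spoken = results
                    ?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()
                // Handle the recognized utterance here.
            }
            // Remaining callbacks left empty for brevity.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            tts.language = Locale.US
            // "Speak first" on activity start; in practice, wait for the TTS to finish
            // (e.g. via an UtteranceProgressListener) before starting to listen.
            tts.speak("Hello, how can I help?", TextToSpeech.QUEUE_FLUSH, null, "greeting")
            val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
                .putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
            recognizer.startListening(intent)
        }
    }
}
```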
- Made the phone vibrate before alerting the user about an unreplied notification.
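One way the pre-alert vibration might be done with the platform Vibrator API; the 500 ms duration is an arbitrary placeholder.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator

// Hypothetical helper; 500 ms is an arbitrary placeholder duration.
fun vibrateBeforeAlert(context: Context) {
    val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        vibrator.vibrate(VibrationEffect.createOneShot(500, VibrationEffect.DEFAULT_AMPLITUDE))
    } else {
        @Suppress("DEPRECATION")
        vibrator.vibrate(500)
    }
}
```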
- Roughly implemented the application's state machine.
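A rough sketch of what such a state machine could look like; the state names and transitions are assumptions, not the project's actual states.

```kotlin
// Hypothetical states; the real application states may differ.
sealed class VaState {
    object Idle : VaState()
    object Speaking : VaState()
    object Listening : VaState()
    object Alerting : VaState()
}

class VaStateMachine {
    var state: VaState = VaState.Idle
        private set

    // In practice, transitions would be driven by TTS and recognizer callbacks.
    fun transitionTo(next: VaState) {
        state = next
    }
}
```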
- Initialized project HiEmotion.
- Added WavRecorder class that records and (will) detect human voice.
  - Roughly uses ZCR (Zero Crossing Rate) and Energy (sum of data[i]**2) for detection; see the sketch below.
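A sketch of the two features as described (ZCR and energy as the sum of squared samples), assuming 16-bit PCM samples in a `ShortArray`; the combined threshold rule and its values are placeholders, not taken from the project.

```kotlin
// Zero Crossing Rate: fraction of adjacent sample pairs that change sign.
fun zeroCrossingRate(data: ShortArray): Double {
    if (data.size < 2) return 0.0
    var crossings = 0
    for (i in 1 until data.size) {
        if ((data[i - 1] >= 0) != (data[i] >= 0)) crossings++
    }
    return crossings.toDouble() / (data.size - 1)
}

// Energy: sum of squared samples, as in the log (sum of data[i]**2).
fun energy(data: ShortArray): Double {
    var sum = 0.0
    for (s in data) sum += s.toDouble() * s.toDouble()
    return sum
}

// Hypothetical detection rule combining the two features; thresholds are placeholders.
fun looksLikeVoice(data: ShortArray, zcrMax: Double = 0.3, energyMin: Double = 1e7): Boolean =
    zeroCrossingRate(data) < zcrMax && energy(data) > energyMin
```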
- Applied the JTransform (FFT) library to reduce noise in the audio for better detection.
- Needs more investigation of the library & FFT.
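A minimal sketch of one way the library could be used here, assuming the `org.jtransforms` real FFT; the cutoff and the idea of zeroing high-frequency bins as "noise elimination" are assumptions.

```kotlin
import org.jtransforms.fft.DoubleFFT_1D

// Hypothetical denoising step: forward FFT, zero high-frequency bins, inverse FFT.
// The cutoff is a placeholder; this is only one possible use of the library.
fun lowPass(samples: DoubleArray, cutoffBin: Int): DoubleArray {
    val fft = DoubleFFT_1D(samples.size.toLong())
    val buffer = samples.copyOf()

    fft.realForward(buffer)           // in-place real-to-complex FFT (packed format)
    for (i in 2 * cutoffBin until buffer.size) {
        buffer[i] = 0.0               // zero bins above the cutoff
    }
    fft.realInverse(buffer, true)     // back to the time domain, scaled

    return buffer
}
```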
- Removed JTransform and applied TarsosDSP, which has pitch detection.
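A sketch of TarsosDSP pitch detection from the microphone, assuming the Android build of the library (`be.tarsos.dsp.io.android`); sample rate, buffer size, and the choice of the YIN algorithm are assumptions.

```kotlin
import be.tarsos.dsp.AudioDispatcher
import be.tarsos.dsp.io.android.AudioDispatcherFactory
import be.tarsos.dsp.pitch.PitchDetectionHandler
import be.tarsos.dsp.pitch.PitchProcessor
import be.tarsos.dsp.pitch.PitchProcessor.PitchEstimationAlgorithm

// Hypothetical setup; sample rate and buffer size are placeholders.
fun startPitchDetection(): AudioDispatcher {
    val sampleRate = 22050
    val bufferSize = 1024

    val dispatcher = AudioDispatcherFactory.fromDefaultMicrophone(sampleRate, bufferSize, 0)
    val handler = PitchDetectionHandler { result, _ ->
        val pitchHz = result.pitch   // -1 when no pitch is detected
        if (pitchHz > 0) {
            // A detected pitch is a strong hint that a human voice is present.
        }
    }
    dispatcher.addAudioProcessor(
        PitchProcessor(PitchEstimationAlgorithm.YIN, sampleRate.toFloat(), bufferSize, handler)
    )
    Thread(dispatcher, "pitch-detection").start()
    return dispatcher
}
```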
- Added a Python socket server & an MLP model where input = audio features and output = emotion (softmax).
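The wire format is not documented in the log; a hypothetical Android-side client that sends extracted features to the server and reads back the predicted emotion might look like the sketch below. The host (10.0.2.2 is the emulator's alias for the host machine), port, and newline-delimited CSV protocol are all assumptions.

```kotlin
import java.io.BufferedReader
import java.io.InputStreamReader
import java.io.PrintWriter
import java.net.Socket

// Hypothetical client; must be called off the main thread on Android.
fun classifyEmotion(features: DoubleArray, host: String = "10.0.2.2", port: Int = 5000): String {
    Socket(host, port).use { socket ->
        val out = PrintWriter(socket.getOutputStream(), true)
        val reader = BufferedReader(InputStreamReader(socket.getInputStream()))

        // Send the features as one comma-separated line.
        out.println(features.joinToString(","))

        // The server is assumed to reply with the argmax label of its softmax output.
        return reader.readLine() ?: "unknown"
    }
}
```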
- Added the MicCentral application.
- Reasons for adding MC (see the sketch below):
  - Android allows only one application to access the mic at a time.
  - Under this constraint, Hi Emotion would not work while VA is active.
  - Thus, MC was added to propagate the recorded mic buffer (file) to other applications.
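The exact propagation mechanism is not described in the log; one plausible sketch is MC broadcasting the path of each recorded chunk so that VA and HE can pick it up. The action name and extra key are hypothetical, and in practice the other apps would likely need a `FileProvider` content URI (or shared storage) to actually read the file.

```kotlin
import android.content.Context
import android.content.Intent
import java.io.File

// Hypothetical broadcast from MicCentral; the action and extra names are assumptions.
const val ACTION_MIC_BUFFER = "com.example.miccentral.MIC_BUFFER"
const val EXTRA_WAV_PATH = "wav_path"

fun propagateRecordedBuffer(context: Context, wavFile: File) {
    val intent = Intent(ACTION_MIC_BUFFER).apply {
        putExtra(EXTRA_WAV_PATH, wavFile.absolutePath)
    }
    // VA and HE would register receivers for this action and read the file.
    context.sendBroadcast(intent)
}
```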