Rooms can get very loud at large events like these, making it hard to hear our music. Manually adjusting the volume before/after every disturbance can get aggravating, so we made an app that does it automatically.
When you open the web app, it starts scanning and analyzing the ambient noise using our machine learning model, which determines the surrounding noise profile and adjusts the music volume accordingly.
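The core idea above (measure ambient loudness, then nudge the music volume to match) can be sketched in a few lines. This is a minimal illustration, not our production code: the quiet/loud thresholds, the gain range, and the linear mapping are all illustrative assumptions.

```python
import math

def rms(samples):
    """Root-mean-square level of one block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def target_gain(ambient_rms, quiet=0.01, loud=0.5,
                min_gain=0.3, max_gain=1.0):
    """Map ambient loudness onto a playback gain.

    Thresholds and the linear mapping are illustrative assumptions,
    not the app's calibrated values: louder rooms push the gain up
    so the music stays audible.
    """
    level = min(max(ambient_rms, quiet), loud)  # clamp into range
    t = (level - quiet) / (loud - quiet)        # normalize to 0..1
    return min_gain + (max_gain - min_gain) * t
```

In the browser the sample blocks would come from the Web Audio API; here the mapping alone is shown because that is the part the model's noise profile drives.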
We built the front end with React and the back end with Node, and trained our ML model in Python with scikit-learn. We deployed the app with Firebase and signed up for a .tech domain that we never ended up receiving.
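Since Flask is part of our stack, the Python side can expose the model over HTTP for the Node back end to call. The sketch below is an assumption about the shape of that service: the `/profile` route name is hypothetical, and a fixed-threshold check stands in for the real scikit-learn classifier.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/profile", methods=["POST"])
def profile():
    # Placeholder for the scikit-learn model (an assumption):
    # classify the block's mean absolute level against a threshold.
    samples = request.get_json()["samples"]
    level = sum(abs(s) for s in samples) / len(samples)
    label = "loud" if level > 0.2 else "quiet"
    return jsonify({"profile": label, "level": level})
```

The Node server would POST audio features here and use the returned profile to pick a volume.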
Relevant data sets for training our model were scarce, and analyzing the ambient audio in real time was also pretty difficult.
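One concrete piece of the real-time difficulty: per-block noise estimates are jittery, so feeding them straight into the volume control would make the music pump. A common remedy is exponential smoothing; this sketch is illustrative, and the choice of `alpha` is an assumption rather than our tuned value.

```python
def smooth(levels, alpha=0.2):
    """Exponentially smooth a stream of per-block noise levels.

    alpha is illustrative: smaller values react more slowly but
    keep the volume from jittering on short spikes.
    """
    smoothed = []
    current = None
    for level in levels:
        if current is None:
            current = level
        else:
            current = alpha * level + (1 - alpha) * current
        smoothed.append(current)
    return smoothed
```

A one-block spike barely moves the smoothed estimate, while a sustained change in room noise still comes through within a few blocks.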
We finished an app with several complex interacting components, which is a feat in itself, and we managed to build a unique yet broadly useful feature in a brief span of time.
We learned how to develop an application with multiple back-end languages, and picked up modern browser APIs, efficient data filtering, Node streams, and React along the way.
We want to add visual analysis to optimize equalizer settings for the room size. We also want to develop emotion analysis of songs, both to optimize settings based on song type and to create playlists automatically.
- scikit-learn
- Python
- React
- Firebase
- Node
- Flask