Hi! HearSitter is an app that assists deaf parents raising young children by helping them address problems they might face during childcare. The app listens to the surroundings and alerts parents to potential risks while parenting.
This is the introduction page. You can click the links below to check out our project code.
Visit Our Website>> https://hearsitter.site
ML>> https://github.com/kimdj98/hearsitter-ML
Mobile>> https://github.com/gdsc-ys/hearsitter-flutter
Server>> https://github.com/jimmy0006/hearsitter-server-main
Google Solution Challenge is an annual contest that invites students from GDSC communities to create solutions for local community problems using Google technologies.
We use Flutter for the mobile application, Go Fiber for the main server, and Python with TensorFlow for the ML server. Both servers run on Google Cloud Platform.
The app's inspiration came from the Seoul Nong School, a public school for students with hearing disabilities, where a deaf teacher informed us about the hardships of raising a young child while experiencing hearing difficulties.
Compared to people with other disabilities, deaf people are more likely to marry and raise children, and their children are more likely not to have a hearing disability themselves. This means deaf parents often cannot hear the warning sounds, such as their child's cries, that hearing parents rely on, which makes parenting more difficult. Providing assistance tools such as HearSitter is therefore extremely important.
- You can select the sounds you want the app to recognize.
- You can consult a table that describes in relative, everyday terms how loud certain decibel levels are, since deaf people may struggle to understand what a sound means from the decibel number alone.
- The app can recognize several types of sounds, such as an infant crying, glass breaking, a car horn, fire alarms, and more.
- You can also receive notifications on a smartwatch.
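The decibel-description table above can be sketched as a simple range lookup. This is a hypothetical illustration: the thresholds and wording below are our own assumptions, not the values used in the actual app.

```python
# Hypothetical decibel-description table: map a decibel reading to a
# relative, everyday description so the number is meaningful without
# relying on hearing experience. Thresholds and wording are illustrative.
DECIBEL_SCALE = [
    (30, "whisper-quiet, like a library"),
    (60, "normal conversation"),
    (80, "busy street traffic"),
    (100, "power tools or a subway train"),
    (float("inf"), "painfully loud, e.g. a siren up close"),
]

def describe_decibels(db):
    """Return a relative description for a decibel reading."""
    for upper_bound, description in DECIBEL_SCALE:
        if db <= upper_bound:
            return description

print(describe_decibels(55))  # 'normal conversation'
```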
| Preview | Home Screen | History Screen | Decibel Scale Screen |
|---|---|---|---|
| ![]() | ![]() | ![]() | ![]() |
| Goal 4 | Goal 10 | Goal 11 |
|---|---|---|
| Quality Education | Reduced Inequalities | Sustainable Cities |
| ![]() | ![]() | ![]() |
The mobile app captures a real-time audio stream and sends it to the main server in short chunks of a few seconds. The main server distributes these requests across a pool of ML servers, balancing the load appropriately. The main server and ML servers communicate via gRPC, and each ML server is packaged as a Docker image, so scaling out the number of ML servers is easy. Each ML server analyzes the audio and returns the classification results to the main server.
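The request-balancing step above can be sketched as round-robin selection over interchangeable ML servers. This is a minimal illustration, not the actual Go server code: the class names and the `classify` interface are assumptions, and the real system forwards requests over gRPC rather than in-process calls.

```python
from itertools import cycle

class FakeMLServer:
    """Stand-in for one gRPC-connected ML server instance (assumed API)."""
    def __init__(self, name):
        self.name = name

    def classify(self, audio_chunk):
        # The real server would run the model; here we return a dummy label.
        return {"server": self.name, "label": "infant_cry"}

class RoundRobinBalancer:
    """Distribute classification requests across a pool of ML servers."""
    def __init__(self, ml_servers):
        self._servers = cycle(ml_servers)

    def classify(self, audio_chunk):
        # Pick the next ML server in rotation and forward the chunk.
        server = next(self._servers)
        return server.classify(audio_chunk)

balancer = RoundRobinBalancer([FakeMLServer("ml-1"), FakeMLServer("ml-2")])
handled_by = [balancer.classify(b"chunk")["server"] for _ in range(4)]
print(handled_by)  # requests alternate between the two servers
```

Because the ML servers are stateless Docker containers, adding capacity is just a matter of appending more entries to the pool.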
For the classification task we used the EfficientAT model. When choosing a model, the aspects we focused on were speed and performance. Transformers achieve strong audio-tagging performance but fall short on inference time. EfficientAT instead uses knowledge distillation from Transformers into lightweight CNNs, achieving fast inference together with high performance.
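The knowledge-distillation idea behind EfficientAT can be illustrated with its core loss term: the small CNN student is trained to match the temperature-softened output distribution of the large Transformer teacher. The sketch below uses pure Python and illustrative logits, not the actual model or training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions --
    the core term of a distillation objective (lower = closer match)."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]   # confident Transformer teacher (illustrative)
aligned = [3.9, 1.1, 0.4]   # student close to the teacher
off     = [0.5, 4.0, 1.0]   # student far from the teacher

# A student that mimics the teacher incurs a much smaller loss.
print(kd_loss(aligned, teacher) < kd_loss(off, teacher))
```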
The app stores the received result data in a local database using SQLite, then displays the data and raises notifications.
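The local history store can be sketched as a small SQLite table of detections. The real app is written in Flutter/Dart; this Python version only illustrates the idea, and the table and column names are assumed, not the app's actual schema.

```python
import sqlite3

# Assumed schema for detection history (hypothetical names/columns).
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE detections (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           label TEXT NOT NULL,   -- e.g. 'infant_cry', 'glass_breaking'
           decibel REAL,          -- measured loudness of the event
           detected_at TEXT       -- ISO-8601 timestamp
       )"""
)
conn.execute(
    "INSERT INTO detections (label, decibel, detected_at) VALUES (?, ?, ?)",
    ("infant_cry", 72.5, "2023-03-01T10:15:00"),
)
conn.commit()

# The history screen would read recent detections back like this:
rows = conn.execute(
    "SELECT label, decibel FROM detections ORDER BY detected_at DESC"
).fetchall()
print(rows)  # [('infant_cry', 72.5)]
```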
| Name | Role |
|---|---|
| DongJae Kim | ML |
| Juii Kim | Mobile |
| YoungMin Jin | Server |
| HyoJeong Park | Web Frontend |
For any inquiries, please email gdsc.yonsei.hearsitter@gmail.com