This repository has been archived by the owner on Oct 16, 2023. It is now read-only.

Add support for audio ops (Spectrogram & MFCC) in Coreml and add trained coreml model #103

Merged
merged 21 commits into develop from coreml on Apr 5, 2019

Conversation

timorohner
Contributor

This PR contains the following additions to the app:

  • Added a NodeCapture class to extract raw audio from the microphone in the form of PCM buffers
  • Added Coreml support for the operations needed to run the same neural network as on Android
  • Added the trained model
  • Changed CryingDetectionService to use the aforementioned neural network instead of MicrophoneTracker
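The PR's actual NodeCapture code isn't shown here, but a class like it would typically tap the microphone via AVAudioEngine. The sketch below is illustrative only; the class name, buffer size, and callback shape are assumptions, not the PR's implementation:

```swift
import AVFoundation

/// Illustrative sketch (not the PR's actual code): capturing raw PCM
/// buffers from the microphone with AVAudioEngine, roughly what a
/// NodeCapture-style class would do.
final class MicrophoneCaptureSketch {
    private let engine = AVAudioEngine()

    /// Invoked with each captured buffer; a crying-detection model
    /// would consume these buffers downstream.
    var onBuffer: ((AVAudioPCMBuffer) -> Void)?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        // Install a tap so we receive PCM buffers as audio arrives.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            self?.onBuffer?(buffer)
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```

The buffer size of 4096 frames is a placeholder; the right value depends on the window length the model was trained with.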

As indicated, this PR is a work in progress. Specifically, the following still has to be done:

  • Clean up code in various places
  • Extract Prediction from CryingDetectionService and create a proper CoreMLModel Service
  • Extract some magic numbers from code and add them to a configuration file
  • Write/Verify tests for all those changes.
  • Fix a bug that was potentially introduced by the changes in this PR (see PeerConnectionFactoryProtocol.swift:30 for the temporary fix I found)
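Core ML has no built-in Spectrogram or MFCC operations, so audio ops like those named in the title are normally supplied as custom layers conforming to `MLCustomLayer`. This is a hedged sketch of that mechanism, not the PR's code; the layer name and shapes are placeholders:

```swift
import CoreML

/// Illustrative sketch: a Core ML custom layer is an NSObject
/// conforming to MLCustomLayer, registered under the class name
/// referenced by the model (here the hypothetical "SpectrogramLayer").
@objc(SpectrogramLayer)
final class SpectrogramLayer: NSObject, MLCustomLayer {

    required init(parameters: [String: Any]) throws {
        // A real layer would read FFT size, hop length, etc. from `parameters`.
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {
        // Spectrogram/MFCC layers typically carry no trainable weights.
    }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        // Placeholder: a real layer computes frame and frequency-bin
        // counts from the input length and its FFT parameters.
        return inputShapes
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        // A real implementation would compute the STFT magnitude here,
        // e.g. with Accelerate's vDSP FFT routines.
    }
}
```

At load time the app would pass the layer to the runtime, e.g. via `MLModelConfiguration` and `MLModel(contentsOf:configuration:)`, so the model can resolve the custom op by name.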

@timorohner timorohner added help wanted Extra attention is needed WIP labels Mar 13, 2019
Contributor

@PrzemyslawCholewaDev PrzemyslawCholewaDev left a comment


The code looks efficient, good job!
My two main issues are short variable names and a lack of documentation. Coming from scripting languages like Python, I understand the urge to implement things quickly and care only that they work. However, this code is now becoming part of a bigger project with interconnected parts. It's very likely that somebody will come back to this code in a couple of months to change something, so it's crucial to make it as understandable as possible.
Because of that, apart from the inline comments, I would like to ask you to add doc comments for the classes and methods. Commenting individual lines inside a method is great too, but documenting the methods themselves tells the reader whether they need to look into a method at all. Try to give as much information as you can there: not just what the method does (if it's named correctly, it should convey that by itself), but also what it means, how it connects to other things, and more.
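As a concrete illustration of this request, Swift's triple-slash doc comments (which Xcode surfaces in Quick Help) might look like the following. CryingDetectionService is from this PR, but the method and its signature are hypothetical:

```swift
/// Detects infant crying in captured audio.
///
/// The service consumes PCM buffers from the microphone capture layer,
/// feeds them through the Core ML model, and publishes a boolean result.
/// It replaces the earlier MicrophoneTracker-based approach.
final class CryingDetectionService {

    /// Runs the model on a single audio window.
    ///
    /// - Parameter samples: One window of mono PCM samples, expected at
    ///   the sample rate the model was trained on.
    /// - Returns: `true` if the model classifies the window as crying.
    func detectCrying(in samples: [Float]) -> Bool {
        // Model invocation would go here; stubbed for illustration.
        return false
    }
}
```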
Let me know what you think :)

@akashivskyy akashivskyy removed the help wanted Extra attention is needed label Apr 4, 2019
@akashivskyy akashivskyy removed the WIP label Apr 5, 2019
@akashivskyy akashivskyy changed the title [WIP] Add support for audio ops (Spectrogram & MFCC) in Coreml and add trained coreml model Add support for audio ops (Spectrogram & MFCC) in Coreml and add trained coreml model Apr 5, 2019
@akashivskyy akashivskyy dismissed PrzemyslawCholewaDev’s stale review April 5, 2019 12:42

Coup d'état, I'm taking over this PR. 😆

@akashivskyy akashivskyy merged commit d0fdae0 into develop Apr 5, 2019
@akashivskyy akashivskyy deleted the coreml branch April 5, 2019 12:43
@PrzemyslawCholewaDev
Contributor

Woah, that's illegal!

3 participants