< Exermote Preprocessing and Training | Exermote Sgan >

Exermote Inference

The model is already built, so let's put it to work!

Google Cloud ML

Before WWDC 2017 and CoreML I couldn't find a proper way to do inference directly on my iPhone. That's why I implemented my model on Google Cloud ML. Acceleration data was sent to the cloud 10 times per second, and inference results were received at the same frequency. This worked surprisingly well! At least for one minute; then it appeared that the iPhone was blocking any further network requests. What a lucky coincidence that Apple introduced CoreML a short time later :)
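The cloud setup can be sketched roughly as follows. This is not the actual Exermote client code; the function names and the JSON shape are assumptions (Cloud ML Engine's online prediction API expects an {"instances": [...]} body), and the endpoint URL is a placeholder:

```swift
import Foundation

// Hypothetical payload builder: every 100 ms the latest acceleration
// samples would be serialized and POSTed to the model's predict endpoint.
func makePredictionBody(accelerations: [Double]) throws -> Data {
    // Cloud ML Engine online prediction expects {"instances": [...]}.
    let payload: [String: [[Double]]] = ["instances": [accelerations]]
    return try JSONSerialization.data(withJSONObject: payload)
}

// Builds one prediction request; the caller would fire this on a timer
// and read the scores out of the JSON response.
func buildCloudPredictionRequest(to url: URL, accelerations: [Double]) throws -> URLRequest {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try makePredictionBody(accelerations: accelerations)
    return request
}
```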


Since I had already exported the model as a .mlmodel file, implementing it was quite easy. The interesting line below is let predictionOutput = try _predictionModel.prediction(input: input), because that is where the calculation happens. Actually, initializing the model inputs was the hardest part, and as you can see below it is done in a not very swifty way. Let's hope that this is due to the beta status of CoreML.

func makePredictionRequest(evaluationStep: EvaluationStep) {
    let data = _currentScaledMotionArrays.reduce([], +) // result is of type [Double] with 480 elements
    do {
        let accelerationsMultiArray = try MLMultiArray(shape: [40, 1, 12], dataType: MLMultiArrayDataType.double)
        for (index, element) in data.enumerated() {
            accelerationsMultiArray[index] = NSNumber(value: element)
        }
        let hiddenStatesMultiArray = try MLMultiArray(shape: [32], dataType: MLMultiArrayDataType.double)
        for index in 0..<32 {
            hiddenStatesMultiArray[index] = NSNumber(integerLiteral: 0)
        }
        let input = PredictionModelInput(accelerations: accelerationsMultiArray, lstm_1_h_in: hiddenStatesMultiArray, lstm_1_c_in: hiddenStatesMultiArray, lstm_2_h_in: hiddenStatesMultiArray, lstm_2_c_in: hiddenStatesMultiArray)
        let predictionOutput = try _predictionModel.prediction(input: input)
        if let scores = [predictionOutput.scores[0], predictionOutput.scores[1], predictionOutput.scores[2], predictionOutput.scores[3]] as? [Double] {
            evaluationStep.exercise = decodePredictionRequest(scores: scores)
        } else {
            print("Could not cast predictionOutput.scores to [Double].")
        }
    } catch {
        print("Error making prediction request: \(error)")
    }
}

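The decodePredictionRequest(scores:) helper used above is not shown in the snippet. A minimal sketch could look like the following; the label names and their order are assumptions here and would have to match the encoding used during training:

```swift
import Foundation

// Hypothetical decoder: maps the four class scores returned by the model
// to an exercise label by picking the highest-scoring class (argmax).
func decodePredictionRequest(scores: [Double]) -> String {
    let exercises = ["burpee", "squat", "situp", "setBreak"] // assumed label order
    guard let maxIndex = scores.indices.max(by: { scores[$0] < scores[$1] }) else {
        return "unknown"
    }
    return exercises[maxIndex]
}
```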
The result of my project is a pretty stable exercise recognizer! :)

It will take some time until the .gif loads. Have a look on YouTube for the raw video with sound.