ObjectDetection_iOS

Object detection on an iOS mobile device with Vision and Core ML (mlmodel).

Demo: Shibuya Scramble Crossing live camera

Tested on iPhone 8

Core ML Framework


Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio. Core ML itself builds on top of low-level primitives such as Accelerate, BNNS, and Metal Performance Shaders, and it optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing memory footprint and power consumption.
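As a minimal sketch of how Core ML's use of the CPU, GPU, and Neural Engine can be steered from code: `MLModelConfiguration.computeUnits` selects which processors Core ML may schedule work on. The model class name `YOLOv3` below is a placeholder for whatever class Xcode generates from your `.mlmodel` file.

```swift
import CoreML

// Configure which compute units Core ML may use.
let config = MLModelConfiguration()
// .all lets Core ML schedule across CPU, GPU, and Neural Engine;
// .cpuOnly or .cpuAndGPU are also available for debugging or comparison.
config.computeUnits = .all

// "YOLOv3" is a placeholder for the Xcode-generated class of your .mlmodel.
let model = try YOLOv3(configuration: config)
```

Leaving `computeUnits` at its default (`.all`) is usually best; restricting it is mainly useful for profiling.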

Core ML Models

Coding Process (Detection Reference)

  1. Set up live capture
  2. Initialize the request (make a request)
  3. VNImageRequestHandler (handle the request)
  4. CompletionHandler (process the results)
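The steps above can be sketched as a single delegate class, assuming an `AVCaptureSession` configured elsewhere (step 1) delivers frames to it, and again using `YOLOv3` as a placeholder for the Xcode-generated model class:

```swift
import Vision
import CoreML
import AVFoundation

final class ObjectDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Step 2: initialize the request once and reuse it for every frame.
    lazy var detectionRequest: VNCoreMLRequest = {
        // "YOLOv3" is a placeholder for your Xcode-generated model class.
        let model = try! VNCoreMLModel(for: YOLOv3(configuration: MLModelConfiguration()).model)
        return VNCoreMLRequest(model: model) { request, error in
            // Step 4: process the results in the completion handler.
            guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
            for observation in observations {
                let label = observation.labels.first?.identifier ?? "unknown"
                print(label, observation.boundingBox, observation.confidence)
            }
        }
    }()

    // Step 1's capture output delivers camera frames here.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Step 3: hand the frame to a VNImageRequestHandler and perform the request.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([detectionRequest])
    }
}
```

The bounding boxes in `VNRecognizedObjectObservation` are normalized (0–1) with a lower-left origin, so they need to be converted to view coordinates before drawing overlays.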
