Deep learning in Unity

Barracuda

Barracuda (documentation) is a lightweight cross-platform library for neural network inference.

Unity_Detection2AR is a great example (and one of the very few) of using Barracuda. You can use Detection2AR_apk.apk to run the demo.

They make use of arfoundation-samples, which demonstrates the functionality of the ARFoundation package and enables the AR features.

Here is how deep learning networks are added to Unity through Barracuda:

The model needs to be converted into the ONNX or the Barracuda format; the two formats support different sets of operators.

Follow the instructions here to convert models from PyTorch, TensorFlow, or Keras to ONNX.
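
For example, converting from PyTorch is a single call to torch.onnx.export. A minimal sketch; the MobileNetV2 stand-in, the input size, and the opset value are illustrative choices, not taken from the repository:

```python
# PyTorch -> ONNX: trace the model with a dummy input and export.
import torch
import torchvision

model = torchvision.models.mobilenet_v2().eval()  # stand-in model
dummy_input = torch.randn(1, 3, 224, 224)  # the example input defines the graph

torch.onnx.export(
    model, dummy_input, "model.onnx",
    opset_version=11,       # keep the opset modest: Barracuda only supports
    input_names=["input"],  # a subset of ONNX operators
    output_names=["output"],
)
```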

Follow the instructions here to convert TensorFlow models to the Barracuda format.
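
An alternative route, since Barracuda also imports ONNX, is tf2onnx from Python. A sketch assuming a Keras model; the model and the opset here are again only illustrative:

```python
# TensorFlow/Keras -> ONNX via tf2onnx; Barracuda can import the resulting file.
import tensorflow as tf
import tf2onnx

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in model
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)

tf2onnx.convert.from_keras(model, input_signature=spec,
                           opset=11, output_path="model.onnx")
```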

If the network is in the right format, it will appear as follows in the editor: [screenshot of the imported model]

The detection happens in the PhoneARCamera script: [screenshot]

Plugins

With the following plugins, I was able to run the examples, but I was not able to integrate ARFoundation in order to add AR augmentations to the scenes.

MediaPipe plugin

MediaPipeUnityPlugin implements the following MediaPipe functionalities (a quick Python sanity check is sketched after the list):

  • Face Detection
  • Face Mesh
  • Iris
  • Hands
  • Pose
  • Holistic (with iris)
  • Hair Segmentation
  • Object Detection
  • Box Tracking
  • Instant Motion Tracking
  • Objectron
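
The plugin wraps the same solutions MediaPipe ships for Python, so it can help to check a model's expected output from Python before debugging the Unity side. A minimal hands sketch; the image path is a placeholder:

```python
# Sanity-check MediaPipe Hands outside Unity on a still image.
import cv2
import mediapipe as mp

image = cv2.imread("hand.jpg")  # placeholder path
with mp.solutions.hands.Hands(static_image_mode=True) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    # 21 landmarks per detected hand, in normalized image coordinates
    print(results.multi_hand_landmarks[0].landmark[0])  # wrist
```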

The plugin did not recognize the camera on my phone, and I was not able to add any AR functionality.

TensorFlow Plugin

tf-lite-unity-sample brings the following functionalities into Unity (a way to sanity-check a .tflite model outside Unity is sketched after the list):

  • TensorFlow
    • MNIST
    • SSD Object Detection
    • DeepLab
    • PoseNet
    • Style Transfer
    • Text Classification
    • Bert Question and Answer
    • Super Resolution
  • MediaPipe
    • Hand Tracking
    • Blaze Face
    • Face Mesh
    • Blaze Pose (Upper body)
  • MLKit
    • Blaze Pose (Full body)
  • Meet Segmentation
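
Before dropping a .tflite model into the plugin, the stock TensorFlow Lite interpreter can verify it from Python. A sketch; the model path and the random input are placeholders:

```python
# Run a .tflite model with the reference TF Lite interpreter as a sanity check.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```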

Debug

Follow https://answers.unity.com/questions/1320966/android-debug-usb.html to set up USB debugging.

List the connected devices:

adb devices -l

Show only Unity messages from the device log:

adb logcat -s Unity DEBUG

Deep learning with Android Studio

ARCore Android Studio

Documentation

ARCore API

TensorFlow Lite for Android

Depth estimation

We evaluated two real-time monocular depth estimation networks: MobilePyDnet and FastDepth. We chose monocular networks because we cannot expect every smartphone to have more sensors than a single camera. These networks work particularly well indoors.

Please use view3Ddepth.py to create the point cloud, which can later be visualized in MeshLab.
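
The script itself lives in the repository; the underlying idea is standard pinhole back-projection of each pixel using the camera intrinsics. A minimal sketch of that idea, with illustrative intrinsics (not values from view3Ddepth.py):

```python
# Back-project a depth map to a 3D point cloud and save it as ASCII PLY,
# which MeshLab opens directly. fx, fy and the principal point are
# illustrative intrinsics.
import numpy as np

def depth_to_ply(depth, path, fx=500.0, fy=500.0):
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    pts = np.stack([(u - cx) * depth / fx,
                    (v - cy) * depth / fy,
                    depth], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]  # drop pixels with no valid depth
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n"
                f"element vertex {len(pts)}\n"
                "property float x\nproperty float y\nproperty float z\n"
                "end_header\n")
        np.savetxt(f, pts, fmt="%.5f")

# depth_to_ply(np.load("depth.npy"), "cloud.ply")  # open cloud.ply in MeshLab
```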

MobilePyDnet

MobilePyDnet is based on the PyDnet architecture and trained by knowledge distillation from MiDaS.
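
To make the distillation idea concrete, here is a toy sketch (not the authors' training code): the student regresses the teacher's relative-depth map after both are normalized for shift and scale, in the spirit of the MiDaS loss.

```python
# Toy depth-distillation loss: normalize both predictions, then L1-regress.
import torch
import torch.nn.functional as F

def normalize_depth(d, eps=1e-6):
    # Per-image shift/scale normalization: subtract the median, divide by
    # the mean absolute deviation (MiDaS-style, since depth is relative).
    d = d - d.flatten(1).median(dim=1).values.view(-1, 1, 1)
    return d / (d.flatten(1).abs().mean(dim=1).view(-1, 1, 1) + eps)

def distillation_loss(student_depth, teacher_depth):
    # Both tensors are (batch, height, width) relative-depth maps.
    return F.l1_loss(normalize_depth(student_depth),
                     normalize_depth(teacher_depth))
```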

To run a single inference, use their single_inference folder. The code is written in TensorFlow 1; you do not need to migrate it to TensorFlow 2, just create a dedicated virtual environment.
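
Inside that environment, a TF1 frozen graph is typically run through a session. A generic sketch; the .pb file name, tensor names, and input shape are placeholders, not the actual values used by single_inference:

```python
# Generic TensorFlow 1 frozen-graph inference (run with tensorflow 1.x).
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with open("model.pb", "rb") as f:  # placeholder file name
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    x = graph.get_tensor_by_name("input:0")   # placeholder tensor names
    y = graph.get_tensor_by_name("output:0")
    with tf.Session(graph=graph) as sess:
        depth = sess.run(y, feed_dict={x: np.zeros((1, 320, 640, 3),
                                                   np.float32)})
        print(depth.shape)
```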


FastDepth

To use FastDepth, also create a virtual environment.

Distortions

Data_augmentation/Distortions.ipynb gathers distortions from CollabAR (paper) and imgaug.augmenters.
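
As an illustration of the kind of pipeline the notebook assembles; the specific augmenters and parameter values below are made up, not taken from the notebook:

```python
# CollabAR-style distortions with imgaug: blur, sensor noise, compression.
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.MotionBlur(k=9),                          # camera-shake blur
    iaa.AdditiveGaussianNoise(scale=0.05 * 255),  # sensor noise
    iaa.JpegCompression(compression=(70, 95)),    # compression artifacts
], random_order=True)

images = np.random.randint(0, 255, (4, 224, 224, 3), dtype=np.uint8)
distorted = seq(images=images)  # same shape, distorted copies
```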
