The library is currently in alpha and the API might change.
A wrapper library for the new CameraX API and Firebase MLKit, with built-in Material Design barcode and object detection, written fully in Kotlin.
Set up your app with your Firebase project, but DO NOT ADD THE google-services.json FILE TO YOUR APP. See https://firebase.google.com/docs/android/setup for how to do that.
Instead, set up Firebase in your app manually. See https://firebase.google.com/docs/projects/multiprojects#use_multiple_projects_in_your_application for additional details.
val firebaseOptions = FirebaseOptions.Builder()
    .setApiKey("apiKey") // Found in your google-services.json file
    .setApplicationId("applicationId") // Found in your google-services.json file as mobilesdk_app_id
    .setProjectId("projectId") // Found in your google-services.json file
    .build()

FirebaseApp.initializeApp(this, firebaseOptions, "myApp")
val firebaseApp = FirebaseApp.getInstance("myApp")
Getting started with MLCamera is simple. First, add these dependencies to your app's build.gradle:
implementation "androidx.camera:camera-camera2:1.0.0-beta01"
implementation "androidx.camera:camera-view:1.0.0-alpha08"
implementation "com.google.firebase:firebase-ml-vision-barcode-model:16.0.2"
Next, in your app's Application class, add the CameraX provider:
class App : Application(), CameraXConfig.Provider {
    override fun getCameraXConfig(): CameraXConfig {
        return Camera2Config.defaultConfig()
    }
}
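For Android to use this custom Application class, it also has to be registered in your AndroidManifest.xml. A minimal sketch, assuming the class is named App as in the example above:

```xml
<application
    android:name=".App"
    android:label="@string/app_name">
    <!-- activities, etc. -->
</application>
```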
Create your Activity layout by adding the camera PreviewView and the GraphicOverlay:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <androidx.camera.view.PreviewView
        android:id="@+id/preview_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <com.tycz.mlcamera.GraphicOverlay
        android:id="@+id/overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
In your Activity you set up MLCamera with just a few lines (don't forget that you need to request camera permissions at runtime and declare them in your Manifest):
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
    _mlCamera = MLCamera.Builder(this)
        .setLifecycleOwner(this)
        .setImageAnalyzer(analyzer)
        .build()

    _mlCamera.addFutureListener(Runnable {
        _mlCamera.setupCamera(windowManager, preview_view)
    }, ContextCompat.getMainExecutor(this))
}
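If the camera permission has not been granted yet, you can request it before building MLCamera. This is standard Android permission handling, not part of the MLCamera API; the request code and the `startCamera()` helper are placeholder names for this sketch, and the permission must also be declared in the manifest with `<uses-permission android:name="android.permission.CAMERA" />`.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    private val cameraRequestCode = 10

    private fun startCameraIfAllowed() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED
        ) {
            startCamera()
        } else {
            // Ask the user for the camera permission; the result arrives in
            // onRequestPermissionsResult below
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), cameraRequestCode
            )
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == cameraRequestCode &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) {
            startCamera()
        }
    }

    private fun startCamera() {
        // MLCamera.Builder setup from the snippet above goes here
    }
}
```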
MLCamera has a few built-in image processors for detecting barcodes and objects.

The Material barcode analyzer is based on the suggested Material Design guidelines (https://material.io/collections/machine-learning/barcode-scanning.html) and adapted from the Firebase example (https://github.com/firebase/mlkit-material-android), migrated to use CameraX. The barcode scanner scans the first barcode it finds in the scan area and returns it:
val analyzer = MaterialBarcodeAnalyzer(overlay, firebaseApp).apply {
    // Optional: receive callbacks from the image analyzer at different steps along the way
    barcodeResultListener = this@MainActivity
}
Then add the analyzer to the MLCamera builder:
.setImageAnalyzer(analyzer)
The Material object analyzer is based on the suggested Material Design guidelines (https://material.io/collections/machine-learning/object-detection-live-camera.html) and adapted from the Firebase example (https://github.com/firebase/mlkit-material-android), migrated to use CameraX. By default the analyzer displays up to 5 objects on the screen, each marked with a white dot. When you place the camera reticle over one of the dots, the object becomes selected.
val analyzer = MaterialObjectAnalyzer(overlay, true, firebaseApp).apply {
    objectDetectionListener = this@MainActivity
}
This analyzer detects all barcodes visible on the screen and draws a box around each detected barcode.
val analyzer = BasicBarcodeAnalyzer(overlay, firebaseApp)
You can subscribe to one of the callbacks in the BarcodeListener interface:

fun onBarcodesDetected(barcodes: List<FirebaseVisionBarcode>)

which gives you the raw barcode data the Firebase detector returned.
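As a sketch of consuming that callback, assuming your Activity implements the library's BarcodeListener interface (only onBarcodesDetected is shown in this README; rawValue comes from the Firebase FirebaseVisionBarcode API and holds the decoded payload for most formats):

```kotlin
import android.util.Log
import androidx.appcompat.app.AppCompatActivity
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode

class MainActivity : AppCompatActivity(), BarcodeListener {

    override fun onBarcodesDetected(barcodes: List<FirebaseVisionBarcode>) {
        // Log the decoded value of the first detected barcode, if any
        barcodes.firstOrNull()?.rawValue?.let { value ->
            Log.d("MLCamera", "Detected barcode: $value")
        }
    }
}
```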
This analyzer is similar to the Material Object Analyzer in that it can detect either up to 5 objects or the single most prominent object. The only difference is that it just draws a box around each detected object on the screen.
val analyzer = BasicObjectAnalyzer(overlay, true, firebaseApp)
You can subscribe to one of the callbacks in the ObjectDetectionListener interface:

fun multipleObjectsDetected(objects: List<FirebaseVisionObject>)

which returns all the raw data returned by the Firebase detector.
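A minimal sketch of handling that callback, assuming your Activity implements the library's ObjectDetectionListener interface (the interface may contain additional callbacks not shown in this README; boundingBox and trackingId come from the Firebase FirebaseVisionObject API):

```kotlin
import android.util.Log
import androidx.appcompat.app.AppCompatActivity
import com.google.firebase.ml.vision.objects.FirebaseVisionObject

class MainActivity : AppCompatActivity(), ObjectDetectionListener {

    override fun multipleObjectsDetected(objects: List<FirebaseVisionObject>) {
        // Each FirebaseVisionObject carries a bounding box and an optional tracking id
        for (obj in objects) {
            Log.d("MLCamera", "Object ${obj.trackingId} at ${obj.boundingBox}")
        }
    }
}
```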
MLCamera can also support custom analyzers if needed. You just need a class that implements ImageAnalysis.Analyzer and pass it to the MLCamera builder:
.setImageAnalyzer(analyzer)
class MyCustomAnalyzer(private val graphicOverlay: GraphicOverlay) : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        // Run your detection here
        // Always close the ImageProxy when finished, otherwise CameraX stops delivering frames
        image.close()
    }
}