SDK for Android

The SDK for Android enables you to perform scans of various barcodes in your app. You can integrate the SDK into your app by following the instructions below, and your app will be able to scan and process data from the following barcode standards:

Using the SDK in your app requires a valid license. You can obtain a trial license by registering on the Microblink dashboard. After registering, you will be able to generate a license for your app. The license is bound to the package name of your app, so make sure you enter the correct package name when asked.

For more information on how to integrate the SDK into your app, read the instructions below. Make sure you read the latest Release notes for the most recent changes and improvements.

Table of contents

Android integration instructions

The package contains an Android Archive (AAR) that contains everything you need to use the library. Besides the AAR, the package also contains a demo project with the following modules:

  • Pdf417MobiSample shows how to use simple Intent-based API to scan a single barcode.
  • Pdf417MobiCustomUISample demonstrates advanced SDK integration within a custom scan activity and shows how RecognizerRunnerFragment can be used to embed default UI into your activity.
  • Pdf417MobiDirectAPISample demonstrates how to perform scanning of Android Bitmaps.

Source code of all demo apps is given to you to show you how to integrate the SDK into your app. This source code and all of the resources are at your disposal. You can use these demo apps as a basis for creating your own app, or you can copy/paste code and/or resources from the demo apps into your app and use them as you wish, without even asking us for permission. The SDK is supported on Android SDK version 16 (Android 4.1) or later.

The library contains one activity: BarcodeScanActivity. It is responsible for camera control and recognition. You can also create your own scanning UI - you just need to embed RecognizerRunnerView into your activity and pass activity's lifecycle events to it and it will control the camera and recognition process. For more information, see Embedding RecognizerRunnerView into custom scan activity.

Quick Start

Quick start with demo app

  1. Open Android Studio.
  2. In Quick Start dialog choose Import project (Eclipse ADT, Gradle, etc.).
  3. In File dialog select Pdf417MobiDemo folder.
  4. Wait for the project to load. If Android Studio asks you to reload the project on startup, select Yes.

Integrating into your project using Maven

The SDK is distributed via Microblink's Maven repository. If you do not want to perform integration via Maven, simply skip to Android Studio integration instructions or Eclipse integration instructions.

Using gradle or Android Studio

In your build.gradle you first need to add maven repository to repositories list:

repositories {
    maven { url '' }
}

After that, you just need to add the SDK as a dependency to your application (make sure transitive is set to true):

dependencies {
    implementation('') {
        transitive = true
    }
}

Import Javadoc to Android Studio

Android Studio 3.0 should automatically import the Javadoc from the Maven dependency. If that doesn't happen, you can do it manually by following these steps:

  1. In Android Studio project sidebar, ensure project view is enabled
  2. Expand External Libraries entry (usually this is the last entry in project view)
  3. Locate the SDK library entry, right click on it and select Library Properties...
  4. A Library Properties pop-up window will appear
  5. Click the second + button in bottom left corner of the window (the one that contains + with little globe)
  6. A window for defining the documentation URL will appear
  7. Enter the following address:
  8. Click OK

Using android-maven-plugin

Android Maven Plugin v4.0.0 or newer is required.

Open your pom.xml file and add these directives as appropriate:
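The XML directives themselves are not shown here; the following is a hedged sketch of what a typical pom.xml entry for an AAR dependency from a remote repository looks like. The repository URL and the dependency coordinates below are placeholders, not the SDK's actual coordinates:

```xml
<!-- repository URL and dependency coordinates are placeholders;
     substitute the values from the Maven repository mentioned above -->
<repositories>
    <repository>
        <id>MicroblinkRepo</id>
        <url><!-- Maven repository URL --></url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId><!-- SDK groupId --></groupId>
        <artifactId><!-- SDK artifactId --></artifactId>
        <version><!-- SDK version --></version>
        <type>aar</type>
    </dependency>
</dependencies>
```

Note the `<type>aar</type>` element: android-maven-plugin needs it to resolve the artifact as an Android Archive rather than a plain JAR.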



Android studio integration instructions

  1. In Android Studio menu, click File, select New and then select Module.

  2. In new window, select Import .JAR or .AAR Package, and click Next.

  3. In File name field, enter the path to LibPdf417Mobi.aar and click Finish.

  4. In your app's build.gradle, add dependency to LibPdf417Mobi and appcompat-v7:

    dependencies {
        implementation project(':LibPdf417Mobi')
        implementation ""
    }

Import Javadoc to Android Studio

  1. In Android Studio project sidebar, ensure project view is enabled
  2. Expand External Libraries entry (usually this is the last entry in project view)
  3. Locate LibPdf417Mobi-unspecified entry, right click on it and select Library Properties...
  4. A Library Properties pop-up window will appear
  5. Click the + button in bottom left corner of the window
  6. Window for choosing JAR file will appear
  7. Find and select LibPdf417Mobi-javadoc.jar file which is located in root folder of the SDK distribution
  8. Click OK

Eclipse integration instructions

We do not provide Eclipse integration demo apps. We encourage you to use Android Studio. We also do not test integrating with Eclipse. If you are having problems with the SDK, make sure you have tried integrating it with Android Studio prior to contacting us.

However, if you still want to use Eclipse, you will need to convert AAR archive to Eclipse library project format. You can do this by doing the following:

  1. In Eclipse, create a new Android library project in your workspace.
  2. Clear the src and res folders.
  3. Unzip the LibPdf417Mobi.aar file. You can rename it to zip and then unzip it using any tool.
  4. Copy the classes.jar to libs folder of your Eclipse library project. If libs folder does not exist, create it.
  5. Copy the contents of jni folder to libs folder of your Eclipse library project.
  6. Replace the res folder on library project with the res folder of the LibPdf417Mobi.aar file.

You’ve already created the project that contains almost everything you need. Now let’s see how to configure your project to reference this library project.

  1. In the project where you want to use the library (henceforth, "target project"), add the library project as a dependency.
  2. Open the AndroidManifest.xml file inside LibPdf417Mobi.aar file and make sure to copy all permissions, features and activities to the AndroidManifest.xml file of the target project.
  3. Copy the contents of assets folder from LibPdf417Mobi.aar into assets folder of target project. If assets folder in target project does not exist, create it.
  4. Clean and Rebuild your target project
  5. Add appcompat-v7 library to your workspace and reference it by target project (modern ADT plugin for Eclipse does this automatically for all new android projects).

Performing your first scan

  1. Before starting a recognition process, you need to obtain a license from the Microblink dashboard. After registering, you will be able to generate a trial license for your app. The license is bound to the package name of your app, so make sure you enter the correct package name when asked.

    After creating a license, you will have the option to download the license as a file that you must place within your application's assets folder. You must ensure that license key is set before instantiating any other classes from the SDK, otherwise you will get an exception at runtime. Therefore, we recommend that you extend Android Application class and set the license in its onCreate callback in the following way:

    public class MyApplication extends Application {
        public void onCreate() {
            super.onCreate();
            MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this);
        }
    }
  2. In your main activity, create recognizer objects that will perform image recognition, configure them and store them into RecognizerBundle object. You can see more information about available recognizers and about RecognizerBundle in chapter RecognizerBundle and available recognizers. For example, to scan PDF417 2D barcode, you can configure your recognizer object in the following way:

    public class MyActivity extends Activity {
        private Pdf417Recognizer mRecognizer;
        private RecognizerBundle mRecognizerBundle;

        @Override
        protected void onCreate(Bundle bundle) {
            super.onCreate(bundle);
            // setup views, as you would normally do in onCreate callback

            // create Pdf417Recognizer
            mRecognizer = new Pdf417Recognizer();
            // bundle recognizers into RecognizerBundle
            mRecognizerBundle = new RecognizerBundle(mRecognizer);
        }
    }
  3. You can start the recognition process by creating BarcodeUISettings and starting BarcodeScanActivity via the ActivityRunner.startActivityForResult method:

    // method within MyActivity from previous step
    public void startScanning() {
        // Settings for BarcodeScanActivity
        BarcodeUISettings settings = new BarcodeUISettings(mRecognizerBundle);
        // tweak settings as you wish

        // Start activity
        ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings);
    }
  4. After BarcodeScanActivity finishes the scan, it will return to the calling activity or fragment and call its onActivityResult method. You can obtain the scanning results in that method.

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == MY_REQUEST_CODE) {
            if (resultCode == BarcodeScanActivity.RESULT_OK && data != null) {
                // load the data into all recognizers bundled within your RecognizerBundle
                mRecognizerBundle.loadFromIntent(data);
                // now every recognizer object that was bundled within RecognizerBundle
                // has been updated with results obtained during scanning session
                // you can get the result by invoking getResult on recognizer
                Pdf417Recognizer.Result result = mRecognizer.getResult();
                if (result.getResultState() == Recognizer.Result.State.Valid) {
                    // result is valid, you can use it however you wish
                }
            }
        }
    }

    For more information about available recognizers and RecognizerBundle, see RecognizerBundle and available recognizers.

Advanced integration instructions

This section covers more advanced details of integration.

  1. The first part discusses the methods for checking whether the SDK is supported on the current device.
  2. The second part covers the possible customizations when using the UI provided by the SDK.
  3. The third part describes how to embed RecognizerRunnerView into your activity with the goal of creating a custom UI for scanning, while still using the camera management capabilities of the SDK.
  4. The fourth part describes how to use the RecognizerRunner singleton (Direct API) for recognition directly from Android Bitmaps without the need for a camera, or to recognize camera frames obtained by custom camera management.
  5. The fifth part describes how to subscribe to and handle processing events when using either RecognizerRunnerView or RecognizerRunner.

Checking if the SDK is supported

Even before setting the license key, you should check whether the SDK is supported on the current device. This is required because the SDK is a native library that needs to be loaded by the JVM, and it is possible that it does not support the CPU architecture of the current device. Attempting to call any SDK method that relies on native code, such as the license check, on a device with an unsupported CPU architecture will crash your app. The SDK requires Android 4.1 as the minimum Android version. For best performance and compatibility, we recommend Android 5.0 or newer.

Camera video preview resolution also matters. In order to perform successful scans, the camera preview resolution cannot be too low. The minimum camera preview resolution required to perform a scan is 480p. Note that camera preview resolution is not the same as the video recording resolution, although on most devices they are the same. However, some devices allow recording of HD video (720p resolution) but do not allow a high enough camera preview resolution (for example, the Sony Xperia Go supports video recording at 720p, but its camera preview resolution is only 320p, so the SDK does not work on that device).

The SDK is a native library, written in C++ and available for multiple platforms. Because of this, it cannot work on devices with obscure hardware architectures. We have compiled the native code only for the most popular Android ABIs. See Processor architecture considerations for more information about native libraries in the SDK and for instructions on how to disable certain architectures in order to reduce the size of the final app.
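Excluding architectures is typically done with ABI filters in build.gradle. The following is a sketch, not an SDK-specific configuration; the ABI names are standard Android ABIs and which ones you keep is your own choice:

```gradle
android {
    defaultConfig {
        ndk {
            // ship only the ABIs you need; dropping the others
            // reduces the size of the final APK
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}
```

Be aware that a device whose ABI you filtered out will then be unable to load the native library at all, so pair this with the support check described below.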

Checking for support in your app

To check whether the SDK is supported on the device, you can do the following:

// check if the SDK is supported on the device
RecognizerCompatibilityStatus status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this);
if (status == RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED) {
    Toast.makeText(this, "The SDK is supported!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.NO_CAMERA) {
    Toast.makeText(this, "The SDK is supported only via Direct API!", Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(this, "The SDK is not supported! Reason: " + status.name(), Toast.LENGTH_LONG).show();
}

However, some recognizers require camera with autofocus. If you try to start recognition with those recognizers on a device that does not have a camera with autofocus, you will get an error. To prevent that, you can check whether certain recognizer requires autofocus by calling its requiresAutofocus method.

If you already have an array of recognizers, you can easily filter out all recognizers that require autofocus from array using the following code snippet:

Recognizer[] recArray = ...;
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray);
}

This utility method iterates over the given array of recognizers and removes each recognizer that returns true from its requiresAutofocus method.
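The described filtering behaviour can be sketched in plain Java. This is not the SDK's actual implementation; the Recognizer interface below is a hypothetical stand-in that exposes only the requiresAutofocus method the text refers to:

```java
import java.util.ArrayList;
import java.util.List;

public class AutofocusFilterDemo {
    // hypothetical stand-in for the SDK's Recognizer; only
    // requiresAutofocus() matters for this sketch
    interface Recognizer {
        boolean requiresAutofocus();
    }

    // keep only recognizers that do NOT require autofocus,
    // mirroring what filterOutRecognizersThatRequireAutofocus is described to do
    static Recognizer[] filterOutRecognizersThatRequireAutofocus(Recognizer[] recognizers) {
        List<Recognizer> kept = new ArrayList<>();
        for (Recognizer r : recognizers) {
            if (!r.requiresAutofocus()) {
                kept.add(r);
            }
        }
        return kept.toArray(new Recognizer[0]);
    }

    public static void main(String[] args) {
        Recognizer needsFocus = () -> true;
        Recognizer noFocus = () -> false;
        Recognizer[] filtered = filterOutRecognizersThatRequireAutofocus(
                new Recognizer[]{needsFocus, noFocus});
        System.out.println(filtered.length); // prints 1
    }
}
```

Running the filter on a device without autofocus before starting recognition avoids the runtime error described above.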

UI customizations of built-in activities and fragments

This section will discuss supported appearance and behaviour customizations of built-in activities and will show how to use RecognizerRunnerFragment with provided built-in scanning overlays to get the built-in UI experience within any part of your app.

Using built-in scan activity for performing the scan

As shown in the first scan example, you need to create a settings object associated with the activity you wish to use. Attempting to start a built-in activity directly via a custom-crafted Intent will either crash the app or produce undefined behaviour in the scanning procedure.

The available built-in scan activities are listed in section Built-in activities and fragments.

Using RecognizerRunnerFragment within your activity

If you want to integrate the UI provided by our built-in activity somewhere within your activity, you can do so by using RecognizerRunnerFragment. Any activity that hosts the RecognizerRunnerFragment must implement the ScanningOverlayBinder interface. Attempting to add RecognizerRunnerFragment to an activity that does not implement the aforementioned interface will result in a ClassCastException. This design is in accordance with the recommendation for communication between fragments.

The ScanningOverlayBinder is responsible for returning a non-null implementation of ScanningOverlay - the class that will manage the UI on top of RecognizerRunnerFragment. We do not recommend creating your own implementation of ScanningOverlay, as the effort to do so might be equal to or even greater than creating your custom UI implementation in the recommended way.

Here is the minimum example for activity that hosts the RecognizerRunnerFragment:

public class MyActivity extends Activity implements RecognizerRunnerFragment.ScanningOverlayBinder, ScanResultListener {
    private Pdf417Recognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;
    private BarcodeOverlayController mScanOverlay = createOverlay();
    private RecognizerRunnerFragment mRecognizerRunnerFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my);

        if (null == savedInstanceState) {
            // create fragment transaction to replace with RecognizerRunnerFragment
            // R.id.recognizer_runner_view_container is a hypothetical id of the
            // container view within your layout
            mRecognizerRunnerFragment = new RecognizerRunnerFragment();
            FragmentTransaction fragmentTransaction = getFragmentManager().beginTransaction();
            fragmentTransaction.replace(R.id.recognizer_runner_view_container, mRecognizerRunnerFragment);
            fragmentTransaction.commit();
        } else {
            // obtain reference to fragment restored by Android within super.onCreate() call
            mRecognizerRunnerFragment = (RecognizerRunnerFragment) getFragmentManager().findFragmentById(R.id.recognizer_runner_view_container);
        }
    }

    @Override
    @NonNull
    public ScanningOverlay getScanningOverlay() {
        return mScanOverlay;
    }

    @Override
    public void onScanningDone(@NonNull RecognitionSuccessType successType) {
        // pause scanning to prevent new results while fragment is being removed
        mRecognizerRunnerFragment.getRecognizerRunnerView().pauseScanning();
        // now you can remove the RecognizerRunnerFragment with a new fragment transaction
        // and use the result within mRecognizer safely, without the need for making a copy of it.
        // If not paused, as soon as this method ends, RecognizerRunnerFragment continues
        // scanning. Note that this can happen even if you created a fragment transaction for
        // removal of RecognizerRunnerFragment - in the time between the end of this method
        // and the beginning of execution of the transaction. So to ensure that the result within
        // mRecognizer does not get mutated, call pauseScanning() as shown above.
    }

    private BarcodeOverlayController createOverlay() {
        // create Pdf417Recognizer
        mRecognizer = new Pdf417Recognizer();
        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);
        // Settings for BarcodeOverlayController overlay
        BarcodeUISettings settings = new BarcodeUISettings(mRecognizerBundle);
        return new BarcodeOverlayController(settings, this);
    }
}

Also, please refer to the demo apps provided with the SDK for a more detailed example, and make sure your host activity's orientation is set to nosensor or that it handles configuration changes itself (i.e. it is not restarted when a configuration change happens). For more information, check this section.

Built-in activities and overlays

Within SDK there are several built-in activities and scanning overlays that you can use to perform scanning.

BarcodeScanActivity and BarcodeOverlayController

BarcodeOverlayController is overlay for RecognizerRunnerFragment best suited for performing scanning of various barcodes. BarcodeScanActivity contains RecognizerRunnerFragment with BarcodeOverlayController, which can be used out of the box to perform scanning using the default UI.

Changing appearance of built-in activities and scanning overlays

Built-in activities and overlays use resources from the res folder within LibPdf417Mobi.aar to display their contents. If you need a fully customised UI, we recommend creating a completely custom scanning procedure (either activity or fragment), as described here. If you just want to slightly change the appearance of a built-in activity or overlay, you can do so by overriding the appropriate resource values; however, this is strictly not recommended, as it can have unknown effects on the appearance of the UI component. If you think some part of our built-in UI should be configurable in a way that it currently is not, please let us know and we will consider adding that configurability to the appropriate settings object.

Translation and localization

Strings used within built-in activities and overlays can be localized to any language. If you are using RecognizerRunnerView (see this chapter for more information) in your custom scan activity or fragment, you should handle localization as in any other Android app. RecognizerRunnerView does not use strings nor drawables, it only uses assets from assets/microblink folder. Those assets must not be touched as they are required for recognition to work correctly.

However, if you use our built-in activities or overlays, they will use resources packed within LibPdf417Mobi.aar to display strings and images on top of the camera view. We have already prepared strings for several languages which you can use out of the box. You can also modify those strings, or you can add your own language.

To use a language, you have to enable it from the code:

  • To use a certain language, you should call method LanguageUtils.setLanguageAndCountry(language, country, context). For example, you can set language to Croatian like this:

     // define language
     LanguageUtils.setLanguageAndCountry("hr", "", this);

Adding new language

The SDK can easily be translated into other languages. The res folder in the LibPdf417Mobi.aar archive has a values folder which contains strings.xml - this file contains the English strings. To make e.g. a Croatian translation, create a folder values-hr in your project and put a copy of strings.xml inside it (you might need to extract the LibPdf417Mobi.aar archive to access those files). Then open that file and translate the strings from English into Croatian.

Changing strings in the existing language

To modify an existing string, the best approach would be to:

  1. Choose a language you want to modify. For example Croatian ('hr').
  2. Find strings.xml in folder res/values-hr of the LibPdf417Mobi.aar archive
  3. Choose a string key which you want to change. For example: <string name="MBBack">Back</string>
  4. In your project create a file strings.xml in the folder res/values-hr, if it doesn't already exist
  5. Create an entry in the file with the value for the string which you want. For example: <string name="MBBack">Natrag</string>
  6. Repeat for all the strings you wish to change
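Putting the steps above together, the overriding file in your project could look like the following sketch, using the MBBack example from the steps:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/values-hr/strings.xml in your project: entries here override
     the SDK's strings of the same name for the Croatian locale -->
<resources>
    <string name="MBBack">Natrag</string>
    <!-- add further overridden strings here -->
</resources>
```

Only the keys you list are overridden; all other strings keep the SDK's built-in values.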

Embedding RecognizerRunnerView into custom scan activity

This section discusses how to embed RecognizerRunnerView into your scan activity and perform scan.

  1. First make sure that RecognizerRunnerView is a member field in your activity. This is required because you will need to pass all activity's lifecycle events to RecognizerRunnerView.
  2. It is recommended to keep your scan activity in one orientation, such as portrait or landscape. Setting sensor as scan activity's orientation will trigger full restart of activity whenever device orientation changes. This will provide very poor user experience because both camera and native library will have to be restarted every time. There are measures against this behaviour that are discussed later.
  3. In your activity's onCreate method, create a new RecognizerRunnerView, set the RecognizerBundle containing the recognizers that will be used by the view, define a CameraEventsListener that will handle mandatory camera events, define a ScanResultListener that will receive a call when recognition completes, and then call its create method. After that, add the views that should be laid out on top of the camera view.
  4. Override your activity's onStart, onResume, onPause, onStop and onDestroy methods and call RecognizerRunnerView's lifecycle methods start, resume, pause, stop and destroy. This will ensure correct camera and native resource management. If you plan to manage RecognizerRunnerView's lifecycle independently of host's lifecycle, make sure the order of calls to lifecycle methods is the same as is with activities (i.e. you should not call resume method if create and start were not called first).

Here is the minimum example of integration of RecognizerRunnerView as the only view in your activity:

public class MyScanActivity extends Activity implements ScanResultListener, CameraEventsListener {
    private static final int PERMISSION_CAMERA_REQUEST_CODE = 69;
    private RecognizerRunnerView mRecognizerRunnerView;
    private Pdf417Recognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // create Pdf417Recognizer
        mRecognizer = new Pdf417Recognizer();
        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);
        // create RecognizerRunnerView
        mRecognizerRunnerView = new RecognizerRunnerView(this);

        // associate RecognizerBundle with RecognizerRunnerView
        mRecognizerRunnerView.setRecognizerBundle(mRecognizerBundle);
        // scan result listener will be notified when scanning is complete
        mRecognizerRunnerView.setScanResultListener(this);
        // camera events listener will be notified about camera lifecycle and errors
        mRecognizerRunnerView.setCameraEventsListener(this);

        mRecognizerRunnerView.create();
        setContentView(mRecognizerRunnerView);
    }

    @Override
    protected void onStart() {
        super.onStart();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.start();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.resume();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.pause();
    }

    @Override
    protected void onStop() {
        super.onStop();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.stop();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.destroy();
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.changeConfiguration(newConfig);
    }

    @Override
    public void onScanningDone(@NonNull RecognitionSuccessType successType) {
        // this method is from ScanResultListener and will be called when scanning completes
        // you can obtain the scanning result by calling getResult on each
        // recognizer that you bundled into RecognizerBundle, for example:
        Pdf417Recognizer.Result result = mRecognizer.getResult();
        if (result.getResultState() == Recognizer.Result.State.Valid) {
            // result is valid, you can use it however you wish
        }
        // Note that mRecognizer is a stateful object: as soon as
        // scanning either resumes or its state is reset,
        // the result object within mRecognizer will be changed. If you
        // need an immutable copy of the result, you can create one
        // by calling clone() on it, for example:
        Pdf417Recognizer.Result immutableCopy = result.clone();

        // After this method ends, scanning will be resumed and the recognition
        // state will be retained. If you want to prevent that, call:
        // mRecognizerRunnerView.resetRecognitionState();
        // Note that resetting the recognition state will clear the internal result
        // objects of all recognizers bundled in the RecognizerBundle
        // associated with RecognizerRunnerView.

        // If you want to pause scanning to prevent receiving recognition
        // results or mutating the result, call:
        // mRecognizerRunnerView.pauseScanning();
        // If scanning is paused at the end of this method, it is guaranteed
        // that the result within mRecognizer will not be mutated, so you
        // can avoid creating a copy as described above.
        // After scanning is paused, you will have to resume it with:
        // mRecognizerRunnerView.resumeScanning(true);
        // The boolean in the resumeScanning method indicates whether the recognition
        // state should be automatically reset when resuming scanning - this
        // includes clearing the result of mRecognizer.
    }

    @Override
    public void onCameraPreviewStarted() {
        // this method is from CameraEventsListener and will be called when camera preview starts
    }

    @Override
    public void onCameraPreviewStopped() {
        // this method is from CameraEventsListener and will be called when camera preview stops
    }

    @Override
    public void onError(Throwable exc) {
        // This method is from CameraEventsListener and will be called when
        // opening the camera resulted in an exception or the recognition process
        // encountered an error. The error details are given in the exc parameter.
    }

    @Override
    public void onCameraPermissionDenied() {
        // Called on Android 6.0 and newer if camera permission was not given
        // by the user. You should request permission from the user to access the camera.
        requestPermissions(new String[]{Manifest.permission.CAMERA}, PERMISSION_CAMERA_REQUEST_CODE);
        // Please note that the user might not grant permission to use the
        // camera. In that case, you have to explain to the user that
        // scanning will not work without camera permission.
        // For more information about requesting permissions at runtime, check
        // the official Android documentation.
    }

    @Override
    public void onAutofocusFailed() {
        // This method is from CameraEventsListener and will be called when camera focusing has failed.
        // The camera manager usually tries different focusing strategies; this method is called when all
        // of those strategies fail, indicating that either the object being focused on is too
        // close or the ambient light conditions are poor.
    }

    @Override
    public void onAutofocusStarted(Rect[] areas) {
        // This method is from CameraEventsListener and will be called when camera focusing has started.
        // You can use this method to draw a focusing animation in your UI.
        // The areas parameter is an array of rectangles where focus is being measured.
        // It can be null on devices that do not support fine-grained camera control.
    }

    @Override
    public void onAutofocusStopped(Rect[] areas) {
        // This method is from CameraEventsListener and will be called when camera focusing has stopped.
        // You can use this method to remove the focusing animation from your UI.
        // The areas parameter is an array of rectangles where focus is being measured.
        // It can be null on devices that do not support fine-grained camera control.
    }
}

Scan activity's orientation

If the activity's screenOrientation property in AndroidManifest.xml is set to sensor, fullSensor or similar, the activity will be restarted every time the device changes orientation between portrait and landscape. While restarting, its onPause, onStop and onDestroy methods will be called, and then the activity will be created anew. This is a potential problem for a scan activity because it controls both the camera and the native library in its lifecycle, so restarting the activity triggers a restart of both. Changing orientation will therefore be very slow, degrading the user experience. We do not recommend this setting.

For that matter, we recommend setting your scan activity to either portrait or landscape mode and handle device orientation changes manually. To help you with this, RecognizerRunnerView supports adding child views to it that will be rotated regardless of activity's screenOrientation. You add a view you wish to be rotated (such as view that contains buttons, status messages, etc.) to RecognizerRunnerView with addChildView method. The second parameter of the method is a boolean that defines whether the view you are adding will be rotated with device. To define allowed orientations, implement OrientationAllowedListener interface and add it to RecognizerRunnerView with method setOrientationAllowedListener. This is the recommended way of rotating camera overlay.

However, if you really want to set the screenOrientation property to sensor or similar and want Android to handle orientation changes of your scan activity, then we recommend setting the configChanges property of your activity to orientation|screenSize. This tells Android not to restart your activity when the device orientation changes. Instead, the activity's onConfigurationChanged method will be called so that the activity can be notified of the configuration change. In your implementation of this method, you should call the changeConfiguration method of RecognizerRunnerView so it can adapt its camera surface and child views to the new configuration.
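In manifest form, the setting described above could look like the following sketch; the activity name is a hypothetical placeholder for your own scan activity:

```xml
<!-- AndroidManifest.xml: .MyScanActivity is a placeholder name -->
<activity
    android:name=".MyScanActivity"
    android:configChanges="orientation|screenSize" />
```

With this attribute in place, Android delivers orientation changes to onConfigurationChanged instead of recreating the activity.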

Using Direct API for recognition of Android Bitmaps and custom camera frames

This section describes how to use the Direct API to recognize Android Bitmaps without the need for a camera. You can use the Direct API anywhere in your application, not just from activities.

  1. First, obtain a reference to the RecognizerRunner singleton using getSingletonInstance.
  2. Second, initialize the recognizer runner.
  3. After initialization, use the singleton to process Android Bitmaps or images built from custom camera frames. Currently, it is not possible to process multiple images in parallel.
  4. Do not forget to terminate the recognizer runner singleton after usage (it is a shared resource).

Here is a minimal example of using the Direct API to recognize an Android Bitmap:

public class DirectAPIActivity extends Activity implements ScanResultListener {
    private RecognizerRunner mRecognizerRunner;
    private Pdf417Recognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // initialize your activity here
        // create Pdf417Recognizer
        mRecognizer = new Pdf417Recognizer();
        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);
        try {
            mRecognizerRunner = RecognizerRunner.getSingletonInstance();
        } catch (FeatureNotSupportedException e) {
            Toast.makeText(this, "Feature not supported! Reason: " + e.getReason().getDescription(), Toast.LENGTH_LONG).show();
            return;
        }
        mRecognizerRunner.initialize(this, mRecognizerBundle, new DirectApiErrorListener() {
            @Override
            public void onRecognizerError(Throwable t) {
                Toast.makeText(DirectAPIActivity.this, "There was an error in initialization of Recognizer: " + t.getMessage(), Toast.LENGTH_SHORT).show();
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        // start recognition
        Bitmap bitmap = BitmapFactory.decodeFile("/path/to/some/file.jpg");
        mRecognizerRunner.recognize(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, this);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // terminate the shared RecognizerRunner singleton
        mRecognizerRunner.terminate();
    }

    @Override
    public void onScanningDone(@NonNull RecognitionSuccessType successType) {
        // this method is from ScanResultListener and will be called
        // when scanning completes; you can obtain the scanning result
        // by calling getResult on each recognizer that you bundled
        // into RecognizerBundle, for example:
        Pdf417Recognizer.Result result = mRecognizer.getResult();
        if (result.getResultState() == Recognizer.Result.State.Valid) {
            // result is valid, you can use it however you wish
        }
    }
}

Understanding DirectAPI's state machine

DirectAPI's RecognizerRunner singleton is actually a state machine which can be in one of 3 states: OFFLINE, READY and WORKING.

  • When you obtain the reference to RecognizerRunner singleton, it will be in OFFLINE state.
  • You can initialize RecognizerRunner by calling initialize method. If you call initialize method while RecognizerRunner is not in OFFLINE state, you will get IllegalStateException.
  • After successful initialization, RecognizerRunner will move to READY state. Now you can call any of the recognize* methods.
  • When starting recognition with any of the recognize* methods, RecognizerRunner will move to WORKING state. If you attempt to call these methods while RecognizerRunner is not in READY state, you will get IllegalStateException.
  • Recognition is performed on a background thread, so it is safe to call all RecognizerRunner's methods from the UI thread.
  • When recognition is finished, RecognizerRunner first moves back to READY state and then calls the onScanningDone method of the provided ScanResultListener.
  • Please note that ScanResultListener's onScanningDone method will be called on the background processing thread, so make sure you do not perform UI operations in this callback. Also note that until the onScanningDone method completes, RecognizerRunner will not perform recognition of another image, even if any of the recognize* methods have been called just after transitioning to READY state. This ensures that results of the recognizers bundled within the RecognizerBundle associated with RecognizerRunner are not modified while possibly being used within onScanningDone.
  • By calling terminate method, RecognizerRunner singleton will release all its internal resources. Note that even after calling terminate you might receive onScanningDone event if there was work in progress when terminate was called.
  • The terminate method can be called from any RecognizerRunner singleton's state.
  • You can observe the RecognizerRunner singleton's state with the getCurrentState method.
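For example, a defensive call site can check the state before requesting recognition (a sketch; the State constant names follow the states described above):

```java
// avoid IllegalStateException by starting recognition only from READY state
if (mRecognizerRunner.getCurrentState() == RecognizerRunner.State.READY) {
    mRecognizerRunner.recognize(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, this);
}
```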

Using DirectAPI while RecognizerRunnerView is active

Both RecognizerRunnerView and RecognizerRunner use the same internal singleton that manages the native code. This singleton handles initialization and termination of the native library and propagates recognizers to it. It is possible to use RecognizerRunnerView and RecognizerRunner together, as the internal singleton ensures correct synchronization and that the correct recognition settings are used. If you run into problems while using RecognizerRunner in combination with RecognizerRunnerView, let us know!

Handling processing events with RecognizerRunner and RecognizerRunnerView

This section describes how you can subscribe to and handle processing events when using RecognizerRunner or RecognizerRunnerView. Processing events, also known as Metadata callbacks, are purely intended for giving processing feedback on the UI or for capturing debug information during development of your app. For that reason, built-in activities and fragments do not support subscribing to and handling those events from third parties - they handle them internally. If you need to handle those events yourself, you need to use either RecognizerRunnerView or RecognizerRunner.

Callbacks for all events are bundled together into the MetadataCallbacks object. Both RecognizerRunner and RecognizerRunnerView have methods which allow you to set all your callbacks.

We suggest that you check the javadoc of the MetadataCallbacks class for more information about the available callbacks and the events you can handle.

Note about setMetadataCallbacks method

Please note that both those methods need to pass information about the available callbacks to the native code, and for efficiency reasons this is done at the time setMetadataCallbacks is called, not every time a change occurs within the MetadataCallbacks object. This means that if you, for example, set a QuadDetectionCallback on MetadataCallbacks after you have already called setMetadataCallbacks, the QuadDetectionCallback will not be registered with the native code and you will not receive its events.

Similarly, if you remove the QuadDetectionCallback from the MetadataCallbacks object after you have already called setMetadataCallbacks, your app will crash with a NullPointerException when our processing code attempts to invoke the method on the removed callback (which is now null). We deliberately do not perform a null check here for two reasons:

  • it is inefficient
  • having a null callback while still being registered with the native code is an illegal state of your program, and it should therefore crash

Remember, each time you make changes to the MetadataCallbacks object, you need to apply them to your RecognizerRunner or RecognizerRunnerView by calling its setMetadataCallbacks method.
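For illustration, a sketch of registering a quad detection callback and then re-applying the callbacks object after a change (QuadDetectionCallback and DisplayableQuadDetection are the callback types referenced above; check the javadoc for exact signatures):

```java
MetadataCallbacks metadataCallbacks = new MetadataCallbacks();
metadataCallbacks.setQuadDetectionCallback(new QuadDetectionCallback() {
    @Override
    public void onQuadDetection(@NonNull DisplayableQuadDetection displayableQuadDetection) {
        // draw the detected quad on your UI
    }
});
mRecognizerRunnerView.setMetadataCallbacks(metadataCallbacks);

// later, after ANY change to the object, re-apply it so the
// native code is informed of the new set of callbacks:
metadataCallbacks.setQuadDetectionCallback(null);
mRecognizerRunnerView.setMetadataCallbacks(metadataCallbacks);
```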

RecognizerBundle and available recognizers

RecognizerBundle is an object which wraps Recognizers and defines settings for how recognition should be performed. Besides that, RecognizerBundle makes it possible to transfer Recognizer objects between different activities, which is required when using built-in activities to perform scanning, as described in the first scan section, but is also handy when you need to pass Recognizer objects between your own activities.

This section first describes what a Recognizer is and how it should be used to perform recognition of images, videos and the camera stream. Next, we describe how RecognizerBundle can be used to tweak the recognition procedure and to transfer Recognizer objects between activities. Finally, we list all available Recognizer objects with a brief description of each, its purpose and recommendations for how it should be used to get the best performance and user experience.

The Recognizer concept

The Recognizer is the basic unit of processing within the SDK. Its main purpose is to process the image and extract meaningful information from it. As you will see later, the SDK has lots of different Recognizer objects that have various purposes.

Each Recognizer has a Result object, which contains the data that was extracted from the image. The Result object is a member of the corresponding Recognizer object, and its lifetime is bound to the lifetime of its parent Recognizer object. If you need your Result object to outlive its parent Recognizer object, you must make a copy of it by calling its clone() method.

Every Recognizer is a stateful object that can be in one of two states: idle and working. While in idle state, you can tweak the Recognizer object's properties via its getters and setters. After you bundle it into a RecognizerBundle and use either RecognizerRunner or RecognizerRunnerView to run processing with all Recognizer objects bundled within the RecognizerBundle, it changes to working state, in which the Recognizer object is being used for processing. While in working state, you cannot tweak the Recognizer object's properties. If you need to, you have to create a copy of the Recognizer object by calling its clone() method, tweak that copy, bundle it into a new RecognizerBundle and use reconfigureRecognizers to ensure the new bundle gets used on the processing thread.

While Recognizer object works, it changes its internal state and its result. The Recognizer object's Result always starts in Empty state. When corresponding Recognizer object performs the recognition of given image, its Result can either stay in Empty state (in case Recognizer failed to perform recognition), move to Uncertain state (in case Recognizer performed the recognition, but not all mandatory information was extracted) or move to Valid state (in case Recognizer performed recognition and all mandatory information was successfully extracted from the image).

As soon as one Recognizer object's Result within RecognizerBundle given to RecognizerRunner or RecognizerRunnerView changes to Valid state, the onScanningDone callback will be invoked on same thread that performs the background processing and you will have the opportunity to inspect each of your Recognizer objects' Results to see which one has moved to Valid state.

As already stated in the section about RecognizerRunnerView, as soon as the onScanningDone method ends, RecognizerRunnerView will continue processing new camera frames with the same Recognizer objects, unless paused. Continuing processing or resetting recognition will modify or reset all Recognizer objects' Results. When using built-in activities, as soon as onScanningDone is invoked, the built-in activity pauses the RecognizerRunnerView and starts finishing, while saving the RecognizerBundle with active Recognizer objects into the Intent so they can be transferred back to the calling activity.


The RecognizerBundle is a wrapper around Recognizer objects that can be used to transfer Recognizer objects between activities and to give Recognizer objects to RecognizerRunner or RecognizerRunnerView for processing.

The RecognizerBundle is always constructed with an array of Recognizer objects that need to be prepared for recognition (i.e. their properties must already be tweaked). The varargs constructor makes it easier to pass Recognizer objects to it without creating a temporary array.

The RecognizerBundle manages a chain of Recognizer objects within the recognition process. When a new image arrives, it is processed by the first Recognizer in chain, then by the second and so on, iterating until a Recognizer object's Result changes its state to Valid or all of the Recognizer objects in chain were invoked (none getting a Valid result state). If you want to invoke all Recognizers in the chain, regardless of whether some Recognizer object's Result in chain has changed its state to Valid or not, you can allow returning of multiple results on a single image.

You cannot change the order of the Recognizer objects within the chain - no matter the order in which you give Recognizer objects to RecognizerBundle, they are internally ordered in a way that provides the best possible performance and accuracy. Also, in order for the SDK to be able to order Recognizer objects in the recognition chain in the best way possible, it is not allowed to have multiple instances of Recognizer objects of the same type within the chain. Attempting to do so will crash your application.
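For example, bundling two recognizers and allowing multiple results on a single image might look like this (a sketch; setAllowMultipleScanResultsOnSingleImage is assumed to be the RecognizerBundle setter for that behaviour):

```java
Pdf417Recognizer pdf417Recognizer = new Pdf417Recognizer();
BarcodeRecognizer barcodeRecognizer = new BarcodeRecognizer();

// varargs constructor - no temporary array needed
RecognizerBundle recognizerBundle = new RecognizerBundle(pdf417Recognizer, barcodeRecognizer);
// invoke all recognizers in the chain, even after one Result becomes Valid
recognizerBundle.setAllowMultipleScanResultsOnSingleImage(true);
```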

Passing Recognizer objects between activities

Besides managing the chain of Recognizer objects, RecognizerBundle also manages transferring bundled Recognizer objects between different activities within your app. Although each Recognizer object, and each of its Result objects, implements the Parcelable interface, it is not so straightforward to put those objects into an Intent and pass them around between your activities and services, for two main reasons:

  • Result object is tied to its Recognizer object, which manages lifetime of the native Result object.
  • Result object often contains large data blocks, such as images, which cannot be transferred via Intent because of Android's Intent transaction data limit.

Although the first problem can easily be worked around by making a copy of the Result and transferring it independently, the second problem is much tougher to cope with. This is where RecognizerBundle's methods saveToIntent and loadFromIntent come to help, as they ensure safe passing of Recognizer objects bundled within RecognizerBundle between activities, according to the policy defined with the method setIntentDataTransferMode:

  • if set to STANDARD, the Recognizer objects will be passed via Intent using the normal Intent transaction mechanism, which is limited by Android's Intent transaction data limit. This is the same as manually putting Recognizer objects into the Intent and is OK as long as you do not use Recognizer objects that produce images or other large objects in their Results.
  • if set to OPTIMISED, the Recognizer objects will be passed via an internal singleton object and no serialization will take place. This means that there is no limit to the size of the data being passed. This is also the fastest transfer method, but it has a serious drawback - if Android kills your app to save memory for other apps and later restarts it and redelivers the Intent that should contain the Recognizer objects, the internal singleton will be empty and the data being sent will be lost. You can easily provoke that condition by choosing No background processes under Limit background processes in your device's Developer options, then switching from your app to another app and back.
  • if set to PERSISTED_OPTIMISED, the Recognizer objects will be passed via the internal singleton object (just like in OPTIMISED mode) and will additionally be serialized into a file in your application's private folder. In case Android restarts your app and the internal singleton is empty after re-delivery of the Intent, the data will be loaded from the file and nothing will be lost. The files are automatically cleaned up when data reading takes place. Just like OPTIMISED, this mode has no limit on the size of the data being passed and does not have the drawback OPTIMISED mode has, but some users might be concerned about the files to which data is being written.
    • These files will contain the end-user's private data, such as the image of the object that was scanned and the extracted data. These files may remain saved in your application's private folder until the next successful reading of data from the file.
    • If your app gets restarted multiple times, only the first read after a restart will succeed and delete the file. If multiple restarts take place, you must implement onSaveInstanceState and save the bundle back to file by calling its saveState method. Also, after saving state, you should clear the saved state in your onResume, since onCreate may not be called if the activity is not restarted, while onSaveInstanceState may be called as soon as your activity goes to the background (before onStop), even though the activity may not be killed at a later time.
    • If saving data to a file in private storage is a concern, you should either use OPTIMISED mode to transfer large data and images between activities or create your own mechanism for data transfer. Note that your application's private folder is accessible only by your application, unless the end-user's device is rooted.
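A sketch of the round trip between two activities, assuming the transfer mode has been configured via setIntentDataTransferMode as described above; MyScanActivity and MY_REQUEST_CODE are hypothetical placeholders:

```java
// in the calling activity - save the Recognizer objects into the Intent
private void startScanning() {
    Intent intent = new Intent(this, MyScanActivity.class);
    mRecognizerBundle.saveToIntent(intent);
    startActivityForResult(intent, MY_REQUEST_CODE);
}

// when the scan activity finishes, load the Recognizer objects
// (and their Results) back from the returned Intent
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == MY_REQUEST_CODE && resultCode == RESULT_OK) {
        mRecognizerBundle.loadFromIntent(data);
        // mRecognizer is the same instance bundled into mRecognizerBundle
        Pdf417Recognizer.Result result = mRecognizer.getResult();
        // inspect result state and use the extracted data
    }
}
```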

List of available recognizers

This section lists all Recognizer objects available within the SDK, their purpose and recommendations for how they should be used to get the best performance and user experience.

Frame Grabber Recognizer

The FrameGrabberRecognizer is the simplest recognizer in the SDK: it does not perform any processing on the given image, it just returns that image back via its FrameCallback. Its Result never changes state from Empty.

This recognizer is best for easy capturing of camera frames with RecognizerRunnerView. Note that Image objects sent to onFrameAvailable are temporary and their internal buffers are valid only while the onFrameAvailable method is executing - as soon as the method ends, all internal buffers of the Image object are disposed. If you need to store the Image object for later use, you must create a copy of it by calling clone.

Also note that FrameCallback interface extends Parcelable interface, which means that when implementing FrameCallback interface, you must also implement Parcelable interface.

This is especially important if you plan to transfer FrameGrabberRecognizer between activities - in that case, keep in mind that the instance of your object may not be the same as the instance on which onFrameAvailable gets called - the instance that receives onFrameAvailable calls is the one created within the activity that is performing the scan.
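A hedged sketch of a FrameCallback implementation (the onFrameAvailable signature shown here is an assumption; consult the javadoc for the exact one). The Parcelable boilerplate is required because FrameCallback extends Parcelable:

```java
public class MyFrameCallback implements FrameCallback {
    @Override
    public void onFrameAvailable(Image image, boolean successFrame, double frameQuality) {
        // 'image' is valid only while this method executes;
        // clone it if you need it afterwards
        Image ownedCopy = image.clone();
        // ... store or process ownedCopy ...
    }

    // Parcelable boilerplate - required because the callback
    // may be transferred between activities
    @Override
    public int describeContents() { return 0; }

    @Override
    public void writeToParcel(Parcel dest, int flags) { }

    public static final Creator<MyFrameCallback> CREATOR = new Creator<MyFrameCallback>() {
        @Override
        public MyFrameCallback createFromParcel(Parcel in) { return new MyFrameCallback(); }
        @Override
        public MyFrameCallback[] newArray(int size) { return new MyFrameCallback[size]; }
    };
}
```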

Success Frame Grabber Recognizer

The SuccessFrameGrabberRecognizer is a special Recognizer that wraps another Recognizer and impersonates it while processing the image. However, when the wrapped Recognizer changes its Result to Valid state, the SuccessFrameGrabberRecognizer captures the image and saves it into its own Result object.

Since SuccessFrameGrabberRecognizer impersonates its slave Recognizer object, it is not possible to give both the concrete Recognizer object and the SuccessFrameGrabberRecognizer that wraps it to the same RecognizerBundle - doing so has the same result as giving two instances of the same Recognizer type to the RecognizerBundle - it will crash your application.

This recognizer is best for use cases when you need to capture the exact image that was being processed by some other Recognizer object at the time its Result became Valid. When that happens, SuccessFrameGrabber's Result will also become Valid and will contain the described image. That image can then be retrieved with the getSuccessFrame() method.
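A minimal sketch of wrapping a recognizer, under the assumption that SuccessFrameGrabberRecognizer takes the wrapped recognizer in its constructor:

```java
Pdf417Recognizer pdf417Recognizer = new Pdf417Recognizer();
// bundle ONLY the wrapper - never both the wrapper and the wrapped recognizer
SuccessFrameGrabberRecognizer successFrameGrabber = new SuccessFrameGrabberRecognizer(pdf417Recognizer);
RecognizerBundle recognizerBundle = new RecognizerBundle(successFrameGrabber);

// after scanning, if the wrapped recognizer's Result became Valid:
if (successFrameGrabber.getResult().getResultState() == Recognizer.Result.State.Valid) {
    Image successFrame = successFrameGrabber.getResult().getSuccessFrame();
    // the exact frame on which the wrapped recognizer succeeded
}
```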

PDF417 recognizer

The Pdf417Recognizer is a recognizer specialised for scanning PDF417 2D barcodes. It can recognize only PDF417 2D barcodes - for recognition of other barcodes, please refer to BarcodeRecognizer.

This recognizer can be used in any context, but it works best with the BarcodeScanActivity, which has UI best suited for barcode scanning.

Barcode recognizer

The BarcodeRecognizer is a recognizer specialised for scanning various types of barcodes. This recognizer should be your first choice when scanning barcodes, as it supports many barcode symbologies, including PDF417 2D barcodes, which makes the PDF417 recognizer largely redundant (it was kept only for its simplicity).

As you can see from the javadoc, you can enable multiple barcode symbologies within this recognizer; however, keep in mind that enabling more barcode symbologies affects scanning performance - the more symbologies are enabled, the slower the overall recognition. Also, keep in mind that some simple barcode symbologies that lack proper redundancy, such as Code 39, can be recognized within more complex barcodes, especially 2D barcodes like PDF417.

This recognizer can be used in any context, but it works best with the BarcodeScanActivity, which has UI best suited for barcode scanning.
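For illustration, enabling a couple of symbologies might look like this (the setter names setScanPdf417 and setScanQrCode are assumptions; check the javadoc for the exact ones):

```java
BarcodeRecognizer barcodeRecognizer = new BarcodeRecognizer();
// enable only the symbologies you actually need -
// each additional one slows down overall recognition
barcodeRecognizer.setScanPdf417(true);
barcodeRecognizer.setScanQrCode(true);
RecognizerBundle recognizerBundle = new RecognizerBundle(barcodeRecognizer);
```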

Embedding inside another SDK

When creating your own SDK which depends on this SDK, you should consider the following: the licensing model supports two types of licenses:

  • application licenses
  • library licenses.

Application licenses

Application licenses are bound to the application's package name. This means that each app must have its own license in order to use the SDK. This model is appropriate when integrating the SDK directly into an app; however, if you are creating an SDK that depends on it, you would need a separate license for each of your clients using your SDK. This is not practical, so you should contact us and we can provide you with a library license.

Library licenses

Library license keys are bound to the licensee name. You will provide your licensee name with your inquiry for a library license. Unlike application licenses, library licenses must be set together with the licensee name:

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", "licensee", this);
    }
}

Ensuring the final app gets all resources required by the SDK

At the time of writing, Android does not support combining multiple AAR libraries into a single fat AAR. The problem is that resource merging is done while building the application, not while building the AAR, so the application must be aware of all its dependencies. There is no official Android way of "hiding" a third-party AAR within your AAR.

This problem is usually solved with transitive Maven dependencies, i.e. when publishing your AAR to Maven you specify dependencies of your AAR so they are automatically referenced by app using your AAR. Besides this, there are also several other approaches you can try:

  • you can ask your clients to reference the SDK in their app when integrating your SDK
  • since the problem lies in the resource merging step, you can try avoiding it by ensuring your library does not use any component of the SDK that uses resources (i.e. built-in activities, fragments and views, except RecognizerRunnerView). You can perform custom UI integration while taking care that all resources (strings, layouts, images, ...) used are solely from your AAR, not from the SDK. Then, in your AAR, you should not reference LibPdf417Mobi.aar as a gradle dependency; instead, unzip it and copy its assets to your AAR's assets folder, its classes.jar to your AAR's lib folder (referenced by gradle as a jar dependency) and the contents of its jni folder to your AAR's src/main/jniLibs folder.
  • Another approach is to use a 3rd party unofficial gradle script that aims to combine multiple AARs into a single fat AAR. Use this script at your own risk and report issues to its developers - we do not offer support for it.
  • There is also a 3rd party unofficial gradle plugin which aims to do the same, but is more up to date with the latest Android gradle plugin. Use this plugin at your own risk and report issues to its developers - we do not offer support for it.

Processor architecture considerations

The SDK is distributed with ARMv7, ARM64, x86 and x86_64 native library binaries.

ARMv7 architecture gives the ability to take advantage of hardware-accelerated floating point operations and SIMD processing with NEON. This gives a huge performance boost on devices with ARMv7 processors. Most devices (all since 2012) have an ARMv7 processor, so it makes little sense not to take advantage of the performance boost those processors can give. Note that some devices with ARMv7 processors do not support NEON instruction sets, the most popular being those based on the NVIDIA Tegra 2. Since these devices are old by today's standards, the SDK does not support them. For the same reason, the SDK does not support devices with the ARMv5 (armeabi) architecture.

ARM64 is the processor architecture that most new devices use. ARM64 processors are very powerful and can also take advantage of the NEON64 SIMD instruction set to quickly process multiple pixels with a single instruction.

x86 architecture gives the ability to obtain native speed on x86 Android devices, like the Asus Zenfone 4. Without it, the SDK will not work on such devices, or it will run on top of the ARM emulator shipped with the device - incurring a huge performance penalty.

x86_64 architecture gives better performance than x86 on devices that use 64-bit Intel Atom processor.

However, there are some issues to be considered:

  • ARMv7 build of the native library cannot be run on devices that do not have an ARMv7-compatible processor (a list of those old devices can be found here)
  • ARMv7 processors do not understand the x86 instruction set
  • x86 processors understand neither the ARM64 nor the ARMv7 instruction set
  • however, some x86 Android devices ship with a built-in ARM emulator - such devices can run ARM binaries, but with a performance penalty. There is also a risk that the built-in ARM emulator will not understand some specific ARM instruction and will crash.
  • ARM64 processors understand the ARMv7 instruction set, but ARMv7 processors do not understand ARM64 instructions.
    • NOTE: as of 2018, some Android devices that ship with an ARM64 processor do not have full ARMv7 compatibility. This is mostly due to incorrect configuration of Android's 32-bit subsystem by the vendor; however, Google has announced that as of August 2019 all apps on the Play Store that contain native code will need native support for 64-bit processors (this includes ARM64 and x86_64) - this is in anticipation of future Android devices that will support 64-bit code only, i.e. that will have ARM64 processors that do not understand the ARMv7 instruction set.
  • if an ARM64 processor executes ARMv7 code, it does not take advantage of modern NEON64 SIMD operations or of its 64-bit registers - it runs in emulation mode
  • x86_64 processors understand the x86 instruction set, but x86 processors do not understand the x86_64 instruction set
  • if an x86_64 processor executes x86 code, it does not take advantage of 64-bit registers and uses two instructions instead of one for 64-bit operations

The LibPdf417Mobi.aar archive contains ARMv7, ARM64, x86 and x86_64 builds of the native library. By default, when you integrate the SDK into your app, your app will contain native builds for all processor architectures. Thus, the SDK will work on ARMv7, ARM64, x86 and x86_64 devices, using ARMv7 features on ARMv7 devices and ARM64 features on ARM64 devices. However, the size of your application will be rather large.

Reducing the final size of your app

If your final app is too large because of the SDK, you can create multiple flavors of your app - one flavor for each architecture. With gradle and Android Studio this is very easy - just add the following to the build.gradle file of your app:

android {
  splits {
    abi {
      enable true
      include 'x86', 'armeabi-v7a', 'arm64-v8a', 'x86_64'
      universalApk true
    }
  }
}

With these build instructions, gradle will build a separate APK for each listed processor architecture, plus one universal APK containing all of them. In order for Google Play to accept multiple APKs of the same app, you need to ensure that each APK has a different version code. This can easily be done by defining a version-code prefix that depends on the architecture and adding the real version code number to it in the following gradle script:

import com.android.build.OutputFile

// map of ABI to version code prefix
def abiVersionCodes = ['armeabi-v7a':1, 'arm64-v8a':2, 'x86':3, 'x86_64':4]

android.applicationVariants.all { variant ->
    // assign a different version code to each ABI-specific output
    variant.outputs.each { output ->
        def filter = output.getFilter(OutputFile.ABI)
        if (filter != null) {
            output.versionCodeOverride = abiVersionCodes.get(filter) * 1000000 + android.defaultConfig.versionCode
        }
    }
}

For more information about creating APK splits with gradle, check this article from Google.

After generating multiple APKs, you need to upload them to Google Play. For the tutorial and rules about uploading multiple APKs of the same app, please read the official Google article about multiple APKs.

Removing processor architecture support in gradle without using APK splits

If you will not be distributing your app via Google Play, or for some other reason you want a single APK of smaller size, you can completely remove support for a certain CPU architecture from your APK. This is not recommended due to the consequences described below.

To remove a certain CPU architecture, add the following statement to the android block inside build.gradle:

android {
	packagingOptions {
		exclude 'lib/<ABI>/'
	}
}

where <ABI> represents the CPU architecture you want to remove:

  • to remove ARMv7 support, use exclude 'lib/armeabi-v7a/'
  • to remove x86 support, use exclude 'lib/x86/'
  • to remove ARM64 support, use exclude 'lib/arm64-v8a/'
  • to remove x86_64 support, use exclude 'lib/x86_64/'

You can also remove multiple processor architectures by specifying the exclude directive multiple times. Just bear in mind that removing a processor architecture will have side effects on the performance and stability of your app. Please read this for more information.

Removing processor architecture support in Eclipse

This section assumes that you have set up and prepared your Eclipse project from LibPdf417Mobi.aar as described in chapter Eclipse integration instructions.

If you are using Eclipse, removing processor architecture support gets really complicated. Eclipse does not support APK splits and you will either need to remove support for some processors or create several different library projects from LibPdf417Mobi.aar - each one for specific processor architecture.

Native libraries in the Eclipse library project are located in the libs subfolder:

  • libs/armeabi-v7a contains native libraries for the ARMv7 processor architecture
  • libs/x86 contains native libraries for the x86 processor architecture
  • libs/arm64-v8a contains native libraries for the ARM64 processor architecture
  • libs/x86_64 contains native libraries for the x86_64 processor architecture

To remove support for a processor architecture, simply delete the appropriate folder inside the Eclipse library project:

  • to remove ARMv7 support, delete folder libs/armeabi-v7a
  • to remove x86 support, delete folder libs/x86
  • to remove ARM64 support, delete folder libs/arm64-v8a
  • to remove x86_64 support, delete folder libs/x86_64

Consequences of removing processor architecture

However, removing a processor architecture has some consequences:

  • by removing ARMv7 support, the SDK will not work on devices that have ARMv7 processors.
  • by removing ARM64 support, the SDK will not use ARM64 features on ARM64 devices
    • also, some future devices may ship with ARM64 processors that do not support the ARMv7 instruction set. Please see this note for more information.
  • by removing x86 support, the SDK will not work on devices that have x86 processors, except when the device has an ARM emulator - in that case the SDK will work, but will be slow and possibly unstable
  • by removing x86_64 support, the SDK will not use 64-bit optimizations on x86_64 processors, but if x86 support is not removed, it should work

Our recommendation is to include all architectures in your app - it will work on all devices and provide the best user experience. However, if you really need to reduce the size of your app, we recommend releasing a separate version of your app for each processor architecture. The easiest way to do that is with APK splits.
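As an illustration, ABI-based APK splits can be configured in your app's build.gradle (a sketch using the standard Android Gradle plugin syntax; adjust the ABI list to the architectures you want to ship):

```gradle
android {
    splits {
        abi {
            // build a separate APK for each listed processor architecture
            enable true
            reset()
            include 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
            // do not build an additional APK containing all architectures
            universalApk false
        }
    }
}
```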

Combining with other native libraries

If you are combining this library with other libraries that contain native code in your application, make sure the architectures of all native libraries match. For example, if a third-party library ships only ARMv7 and x86 versions, you must use exactly the ARMv7 and x86 versions of this SDK with that library, and not ARM64. Mismatched architectures will crash your app during initialization, because the JVM tries to load all native dependencies in the same preferred architecture and fails with UnsatisfiedLinkError.
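One way to restrict your app to the architectures that all native dependencies provide is an ndk.abiFilters block in your app's build.gradle (a sketch; the exact ABI list depends on what your third-party libraries ship):

```gradle
android {
    defaultConfig {
        ndk {
            // keep only the ABIs that every native dependency provides,
            // so the JVM never mixes architectures at load time
            abiFilters 'armeabi-v7a', 'x86'
        }
    }
}
```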


Integration problems

In case of problems with integrating the SDK, first make sure that you have tried integrating it into Android Studio by following the integration instructions. Although we provide Eclipse ADT integration instructions, we no longer officially support Eclipse ADT. For any other IDEs, unfortunately, you are on your own.

If you have followed Android Studio integration instructions and are still having integration problems, please contact us at

SDK problems

In case of problems with using the SDK, you should do as follows:

Licensing problems

If you are getting an "invalid licence key" error or having other licence-related problems (e.g. some feature that should be enabled is not, or there is a watermark on top of the camera), first check the ADB logcat. All licence-related problems are logged to the error log, so it is easy to determine what went wrong.

When you have determined what the licence-related problem is, or if you simply do not understand the log, you should contact us. When contacting us, please make sure you provide the following information:

  • exact package name of your app (from your AndroidManifest.xml and/or your build.gradle file)
  • licence key that is causing problems
  • please stress that you are reporting a problem related to the Android version of the SDK
  • if unsure about the problem, you should also provide an excerpt from ADB logcat containing the licence error

Other problems

If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside the SDK or anything else not mentioned here, please do the following:

  • enable logging to see what the library is doing. To enable logging, put this line in your application:
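The exact logging call is not shown in this excerpt; as an assumption based on other Microblink Android SDKs, verbose logging is enabled through com.microblink.util.Log (verify the exact class and method against the Javadoc of your SDK version):

```java
// Assumption: logging is controlled via com.microblink.util.Log,
// as in other Microblink Android SDKs - check your SDK version's Javadoc.
import com.microblink.util.Log;

public class MyApplication extends android.app.Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // make the library log as much information about its work as possible
        Log.setLogLevel(Log.LogLevel.LOG_VERBOSE);
    }
}
```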


    After this line, the library will log as much information about its work as possible. Please save the entire log of the scanning session to a file that you can send to us. It is important to send the entire log, not just the part where the crash occurred, because crashes are sometimes caused by unexpected behaviour in the early stages of library initialization.

  • Contact us at describing your problem and provide the following information:

    • log file obtained in previous step
    • high resolution scan/photo of the item that you are trying to scan
    • information about the device that you are using - we need the exact model name of the device. You can obtain that information with any app like this one
    • please stress that you are reporting a problem related to the Android version of the SDK

Frequently asked questions and known problems

Here is a list of frequently asked questions and their solutions, as well as a list of known problems in the SDK and how to work around them.

In demo everything worked, but after switching to production license I get InvalidLicenseKeyException as soon as I construct specific Recognizer object

Each licence key contains information about which features may be used and which may not. This exception indicates that your production licence does not allow the use of a specific Recognizer object. You should contact support to check whether the provided licence is OK and really contains all the features you have purchased.

I get InvalidLicenseKeyException with trial license key

Whenever you construct any Recognizer object, or any other object that derives from Entity, a check is performed whether the licence allows using that object. If the licence is not set prior to constructing that object, you will get InvalidLicenseKeyException. We recommend setting the licence as early as possible in your app, ideally in the onCreate callback of your Application singleton.
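For illustration, setting the licence in the Application singleton might look like this (a sketch; MicroblinkSDK.setLicenseKey is the entry point used by Microblink's Android SDKs, but verify the exact method against your SDK version's Javadoc, and replace the placeholder with your own licence key):

```java
import android.app.Application;
import com.microblink.MicroblinkSDK;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // set the licence before any Recognizer or other Entity is constructed
        MicroblinkSDK.setLicenseKey("YOUR_LICENSE_KEY", this);
    }
}
```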

When my app starts, I get exception telling me that some resource/class cannot be found or I get ClassNotFoundException

This usually happens when you perform integration into an Eclipse project and forget to add the resources or native libraries to the project. You must always take care that the same versions of resources, assets, the Java library and the native libraries are used in combination. Combining different versions will trigger a crash in the SDK. This problem can also occur when you have performed improper integration of the SDK into your own SDK. Please read how to embed the SDK inside another SDK.

When my app starts, I get UnsatisfiedLinkError

This error happens when the JVM fails to load some native method from a native library. If you are integrating into an Eclipse project, make sure you use the same version of all native libraries and the Java wrapper. If you are integrating into Android Studio and this error happens, make sure that you have correctly combined the SDK with third-party SDKs that contain native code. If this error also happens in our integration demo apps, it may indicate a bug in the SDK that manifests on a specific device. Please report that to our support team.

I've added my callback to MetadataCallbacks object, but it is not being called

Make sure that after adding your callback to MetadataCallbacks you have applied changes to RecognizerRunnerView or RecognizerRunner as described in this section.
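For example, re-applying the changes might look like this (a sketch assuming RecognizerRunnerView's setMetadataCallbacks method; the callback and field names are placeholders):

```java
// hypothetical example: register a quad detection callback, then
// re-apply the MetadataCallbacks object so the change takes effect
mMetadataCallbacks.setQuadDetectionCallback(myQuadDetectionCallback);
mRecognizerRunnerView.setMetadataCallbacks(mMetadataCallbacks);
```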

I've removed my callback from MetadataCallbacks object, and now app is crashing with NullPointerException

Make sure that after removing your callback from MetadataCallbacks you have applied changes to RecognizerRunnerView or RecognizerRunner as described in this section.

In my onScanningDone callback I have the result inside my Recognizer, but when scanning activity finishes, the result is gone

This usually happens when using RecognizerRunnerView and forgetting to pause the RecognizerRunnerView in your onScanningDone callback. Then, as soon as onScanningDone happens, the result is mutated or reset by additional processing that Recognizer performs in the time between end of your onScanningDone callback and actual finishing of the scanning activity. For more information about statefulness of the Recognizer objects, check this section.
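A sketch of a safe onScanningDone callback (assuming the RecognizerRunnerView API with pauseScanning(); the field name is a placeholder):

```java
@Override
public void onScanningDone(@NonNull RecognitionSuccessType successType) {
    // pause scanning first, so the Recognizer results are not mutated or
    // reset by frames processed after this callback returns
    mRecognizerRunnerView.pauseScanning();
    // ... read results from your Recognizer objects and finish the activity
}
```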

I am using built-in activity to perform scanning and after scanning finishes, my app crashes with IllegalStateException stating Data cannot be saved to intent because its size exceeds intent limit.

This usually happens when you use a Recognizer that produces an image or similar large object inside its Result, and that object exceeds the Android intent transaction limit. You should enable a different intent data transfer mode. For more information about this, check this section. Also, instead of using the built-in activity, you can use RecognizerRunnerFragment with the built-in scanning overlay.
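Switching the transfer mode might look like this (a sketch; the assumption, based on Microblink's Android SDKs, is that this is a global setting on MicroblinkSDK - verify against your SDK version's Javadoc):

```java
// transfer recognition results via a temporary file instead of intent
// extras, avoiding the Android intent transaction size limit
MicroblinkSDK.setIntentDataTransferMode(IntentDataTransferMode.PERSISTED_OPTIMISED);
```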

After scanning finishes, my app freezes

This usually happens when you attempt to transfer a standalone Result that contains images or similar large objects via Intent and the size of the object exceeds the Android intent transaction limit. Depending on the device, you will either get TransactionTooLargeException, or a simple message BINDER TRANSACTION FAILED in the log while your app freezes, or your app will get into a restart loop. We recommend that you use RecognizerBundle and its API for sending Recognizer objects via Intent in a safer manner (check this section for more information). However, if you really need to transfer a standalone Result object (e.g. a Result object obtained by cloning a Result object owned by a specific Recognizer object), you need to do that using global variables or singletons within your application. Sending large objects via Intent is not supported by Android.
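Sending results via RecognizerBundle instead might look like this (a sketch assuming the bundle's saveToIntent/loadFromIntent API; field names are placeholders):

```java
// in the scanning activity, before finishing:
// save the Recognizer objects (with their Results) into the result intent
Intent resultIntent = new Intent();
mRecognizerBundle.saveToIntent(resultIntent);
setResult(Activity.RESULT_OK, resultIntent);
finish();
```

On the receiving side, calling mRecognizerBundle.loadFromIntent(data) in onActivityResult would then update your Recognizer objects with the scanned results.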

Additional info

Complete API reference can be found in Javadoc.

For any other questions, feel free to contact us at