OCR SDK for iOS
OCR Framework

Aumentia

  • Real time OCR
  • arm64 support
  • BITCODE enabled
  • Compatible with Xcode 8, Swift 3 and iOS 10

********************** HOW TO Objective C **********************

Add OCR SDK framework and bundle:

  • OCRAumentia.framework
  • OCRAumentiaBundle.bundle

Add the following system frameworks and libraries:

  • AssetsLibrary.framework
  • AVFoundation.framework
  • Accelerate.framework
  • CoreMedia.framework
  • CoreVideo.framework
  • QuartzCore.framework
  • CoreGraphics.framework
  • Foundation.framework
  • UIKit.framework
  • libiconv.dylib
  • libstdc++.6.0.9.dylib

Disable Testability

Build Settings -> Enable Testability -> No
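If you manage build settings through an .xcconfig file rather than the Xcode UI, the equivalent entry (using the standard Xcode build-setting name) is:

```
ENABLE_TESTABILITY = NO
```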

Import the library

#import <OCRAumentia/OCRAumentia.h>

Objective C++

The .m file where you include the framework must be compiled as Objective-C++, so change its "File Type" to "Objective-C++ Source" or simply rename it with the .mm extension.

Init the framework.

// Tesseract path
NSString *resourcePath      = [[NSBundle mainBundle] resourcePath];
NSString *pathToTessData    = [NSString stringWithFormat:@"%@/%@", resourcePath, @"OCRAumentiaBundle.bundle"];

// Set your API Key
// Set language to English ("eng")
// Set chars whitelist
self.ocr = [[ocrAPI alloc] init:@"8a83bebc51535accf9c31abdb66efc5d60e7b2ad" path:pathToTessData lang:@"eng" chars:@"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"];

You can add more languages. Just add the one you like from https://github.com/tesseract-ocr/tessdata to OCRAumentiaBundle.bundle/tessdata
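As a sketch of that step (file names and paths are illustrative, and assume the bundle sits at your project root), adding Spanish support would look like:

```shell
# Place a downloaded traineddata file into the bundle's tessdata
# directory. The touch stands in for downloading spa.traineddata
# from the tesseract-ocr/tessdata repository.
touch spa.traineddata
mkdir -p OCRAumentiaBundle.bundle/tessdata
cp spa.traineddata OCRAumentiaBundle.bundle/tessdata/
ls OCRAumentiaBundle.bundle/tessdata
```

You would then pass "spa" as the lang parameter when initializing the framework. Whether this wrapper also accepts Tesseract's "eng+spa" multi-language convention is not documented here.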

Send frames (for real time) or a single UIImage to analyze

Frame:

// Start the process

// image is a CVImageBufferRef
[self.ocr processRGBFrame:image result:^(UIImage *resImage)
{
    dispatch_async(dispatch_get_main_queue(), ^{

        // Display the analysed frame with bounding boxes around the matched chars
        UIImageView *resView    = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 180, 240)];
        resView.image           = resImage;
        [self.view addSubview: resView];

        NSLog(@"Finished!");
    });
}
wordsBlock:^(NSMutableDictionary *wordsDistDict)
{
    // Get matched words and their confidence, a value between 0 and 100.
    for (NSString *key in [wordsDistDict allKeys])
    {
        NSLog(@"Matched word %@ with confidence %@", key, [wordsDistDict objectForKey:key]);
    }
} resSize:0];

UIImage:

    // Image to analyse
    UIImage *image = [UIImage imageNamed:@"pic1.jpg"];

    NSLog(@"Analysing...");

    // Start the process
    [self.ocr processUIImage:image result:^(UIImage *resImage)
    {
        dispatch_async(dispatch_get_main_queue(), ^{

            // Display the analysed frame with bounding boxes around the matched chars
            UIImageView *resView    = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 180, 240)];
            resView.image           = resImage;
            [self.view addSubview: resView];

            NSLog(@"Finished!");
        });
    }
    wordsBlock:^(NSMutableDictionary *wordsDistDict)
    {
        // Get matched words and their confidence, a value between 0 and 100.
        for(NSString *key in [wordsDistDict allKeys])
        {
            NSLog(@"Matched word %@ with confidence %@", key, [wordsDistDict objectForKey:key]);
        }
    } resSize:0];

********************** HOW TO Swift **********************

Add the following system frameworks and libraries:

  • AssetsLibrary.framework
  • AVFoundation.framework
  • Accelerate.framework
  • CoreMedia.framework
  • CoreVideo.framework
  • QuartzCore.framework
  • CoreGraphics.framework
  • Foundation.framework
  • UIKit.framework
  • libiconv.dylib
  • libstdc++.6.0.9.dylib

Create a bridging header

Import the OCRAumentia framework and its dependencies:

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <CoreVideo/CoreVideo.h>
#import <CoreGraphics/CoreGraphics.h>
#import <QuartzCore/QuartzCore.h>
#import <Accelerate/Accelerate.h>
#import <CoreMotion/CoreMotion.h>
#import <AVFoundation/AVFoundation.h>

#import <OCRAumentia/OCRAumentia.h>

Set Defines Module to Yes
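In an .xcconfig file, the equivalent entry (using the standard Xcode build-setting name) is:

```
DEFINES_MODULE = YES
```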

Init the framework.

// Set your API Key
// Set language to English ("eng")
// Set chars whitelist
// Set the path to tessdata, i.e. to the root of the OCRAumentiaBundle
let resourcePath = Bundle.main.resourcePath!

let pathToTessData = resourcePath + "/OCRAumentiaBundle.bundle"

_ocr = ocrAPI("80e899706458463676eb3b82decb95777ec698d0",
              path: pathToTessData,
              lang: "eng",
              chars: "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

Send frames (for real time) or a single UIImage to analyze

Frame:

// cameraFrame is a CVImageBufferRef
self._ocr.processRGBFrame(cameraFrame, result: { resImage in

    DispatchQueue.main.async {
        // Display the processed image: update on the main thread
        let resView = UIImageView(frame: CGRect(x: 0, y: 0, width: 180, height: 240))
        resView.image = resImage
        self.view.addSubview(resView)
    }

}, wordsBlock: { wordsDistDict in

    // Get matched words and their confidence, a value between 0 and 100.
    for (key, value) in wordsDistDict
    {
        print("Matched word \(key as! String) with confidence \(value)")
    }
}, resSize: 0)

UIImage:

// Image to analyse
let image = UIImage(named: "pic1.jpg")!

self._ocr.processUIImage(image, result: { resImage in

    DispatchQueue.main.async {
        // Display the processed image: update on the main thread
        let resView = UIImageView(frame: CGRect(x: 0, y: 0, width: 180, height: 240))
        resView.image = resImage
        self.view.addSubview(resView)
    }

}, wordsBlock: { wordsDistDict in

    // Get matched words and their confidence, a value between 0 and 100.
    for (key, value) in wordsDistDict
    {
        print("Matched word \(key as! String) with confidence \(value)")
    }
}, resSize: 0)

********************** Request an API Key **********************

To use the framework you will need an API Key. To request one, send an email to info@aumentia.com with the following details:

  • Bundle Id of the application where you want to use the framework.
  • Name and description of the app where you want to use the framework.
  • Your (or your company's) name.

****************** API ******************

[api.aumentia.com](http://api.aumentia.com/ocr_ios/)

****************** iOS Version ******************

7.0+

****************** OCR Framework version ******************

0.65

****************** Devices tested ******************

iPhone 4
iPhone 4s
iPhone 5
iPhone 5s
iPhone 6
iPhone 6+
iPad 2, 3, 4
iPad Air
iPad mini 1, 2, 3

****************** License ******************

[LICENSE](https://github.com/aumentia/OCR-Aumentia-iOS/blob/master/LICENSE)

****************** Bugs ******************

[Issues & Requests](https://github.com/aumentia/OCR-Aumentia-iOS/issues)