---
layout: default-layout
title: iOS User Guide - Dynamsoft Label Recognizer
description: This is the user guide page of Dynamsoft Label Recognizer for iOS SDK.
keywords: iOS, swift, objective-c, user guide
needAutoGenerateSidebar: true
needGenerateH3Content: true
permalink: /programming/objectivec-swift/user-guide.html
---

# Dynamsoft Label Recognizer - iOS User Guide

## Requirements

- Supported OS: iOS 11+ (iOS 13+ recommended).
- Supported ABI: arm64 and x86_64.
- Development Environment: Xcode 13+ (Xcode 14.1+ recommended).

## Add the xcframeworks

The Dynamsoft Label Recognizer (DLR) iOS SDK comes with eight libraries:

| File | Description | Mandatory/Optional |
|:-----|:------------|:-------------------|
| DynamsoftLabelRecognizer.xcframework | The Dynamsoft Label Recognizer module identifies and recognizes text labels such as passport MRZs, ID cards, and VIN numbers. | Mandatory |
| DynamsoftCore.xcframework | The Dynamsoft Core module lays the foundation for Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. It encapsulates the basic classes, interfaces, and enumerations shared by these SDKs. | Mandatory |
| DynamsoftCaptureVisionRouter.xcframework | The Dynamsoft Capture Vision Router module is the cornerstone of the Dynamsoft Capture Vision (DCV) architecture. It focuses on coordinating batch image processing and provides APIs for setting up image sources and result receivers, configuring workflows with parameters, and controlling processes. | Mandatory |
| DynamsoftImageProcessing.xcframework | The Dynamsoft Image Processing module facilitates digital image processing and supports operations for other modules, including the Barcode Reader, Label Recognizer, and Document Normalizer. | Mandatory |
| DynamsoftNeuralNetwork.xcframework | The Dynamsoft Neural Network module allows SDKs compliant with the DCV (Dynamsoft Capture Vision) architecture to leverage the power of deep learning when processing digital images. | Mandatory |
| DynamsoftLicense.xcframework | The Dynamsoft License module manages the licensing aspects of Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. | Mandatory |
| DynamsoftCameraEnhancer.xcframework | The [Dynamsoft Camera Enhancer]({{ site.dce_android }}){:target="_blank"} module controls the camera, transforming it into an image source for the DCV (Dynamsoft Capture Vision) architecture through ISA implementation. It also enhances image quality during acquisition and provides basic viewers for user interaction. | Optional |
| DynamsoftUtility.xcframework | The Dynamsoft Utility module defines auxiliary classes, including the ImageManager, and implementations of the CRF (Captured Result Filter) and ISA (Image Source Adapter). These are shared by all Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. | Optional |

There are two ways to add the libraries to your project: manually or via CocoaPods.

### Add the xcframeworks Manually

1. Download the SDK package from the Dynamsoft website. After unzipping, eight xcframework files can be found in the Dynamsoft/Frameworks directory:

   - DynamsoftCaptureVisionRouter.xcframework
   - DynamsoftLabelRecognizer.xcframework
   - DynamsoftCore.xcframework
   - DynamsoftImageProcessing.xcframework
   - DynamsoftNeuralNetwork.xcframework
   - DynamsoftLicense.xcframework
   - DynamsoftUtility.xcframework
   - DynamsoftCameraEnhancer.xcframework
2. Drag and drop the above xcframeworks into your Xcode project. Make sure to check **Copy items if needed** and select **Create groups** so that the xcframeworks are copied into your project's folder.

3. Click on the project settings, then go to **General > Frameworks, Libraries, and Embedded Content**. Set the **Embed** field to **Embed & Sign** for all the xcframeworks.

### Add the xcframeworks via CocoaPods

1. Add the frameworks to your Podfile, replacing `{Your project name}` with your actual target name:

   ```ruby
   target '{Your project name}' do
     use_frameworks!

     pod 'DynamsoftLabelRecognizerBundle', '3.2.3000'

   end
   ```
    
2. Execute the pod command to install the frameworks and generate the workspace ({Your project name}.xcworkspace):

   ```sh
   pod install
   ```

   From then on, open {Your project name}.xcworkspace rather than the .xcodeproj file to work on your project.
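Whichever way you add the libraries, import the modules you need in your source files. Below is a minimal sketch, assuming the module names match the framework names, as they do in Dynamsoft's published samples:

```swift
// Import only the modules your code uses. The optional modules
// (DynamsoftCameraEnhancer, DynamsoftUtility) are needed only if you
// added the corresponding xcframeworks.
import DynamsoftCore
import DynamsoftLicense
import DynamsoftCaptureVisionRouter
import DynamsoftLabelRecognizer
import DynamsoftCameraEnhancer
```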

## Build Your First Application

The following sample demonstrates how to create a HelloWorld app that recognizes text from camera video input.


### Create a New Project

1. Open Xcode and select **Create a new Xcode project**, or choose **File > New > Project**, to create a new project.

2. Select **iOS > App** for your application.

3. Input your product name (HelloWorld), interface (Storyboard) and language (Objective-C/Swift).

4. Click on **Next** and select the location to save the project.

5. Click on **Create** to finish creating the new project.

### Include the Library

To add the SDK to your new project, please refer to the Add the xcframeworks section above for details.

### Deploy the CharacterModels

A CharacterModel is a trained model file that supports text line recognition. Before implementing label recognition tasks, you have to include the required CharacterModel(s) in your project:

1. Create a DynamsoftResources folder in Finder. Under that folder, create a CharacterModel folder.

2. Copy your CharacterModel file(s) into the CharacterModel folder. In this guide, we put NumberLetter.data in the CharacterModel folder.

3. Rename the DynamsoftResources folder to DynamsoftResources.bundle.

4. Drag DynamsoftResources.bundle into your project in Xcode. A quick runtime check for this deployment is sketched after these steps.
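If recognition later fails because the model cannot be loaded, a quick way to confirm the bundle was deployed correctly is to look it up at runtime. This is a minimal sketch using Foundation's Bundle API; `characterModelIsDeployed` is a hypothetical helper (not part of the SDK), and the file names follow the example above:

```swift
import Foundation

// Hypothetical sanity check: confirms that DynamsoftResources.bundle and
// the CharacterModel file were copied into the app bundle. Adjust the
// names to match your own model files.
func characterModelIsDeployed() -> Bool {
    guard let bundlePath = Bundle.main.path(forResource: "DynamsoftResources", ofType: "bundle"),
          let resourceBundle = Bundle(path: bundlePath) else {
        print("DynamsoftResources.bundle is missing from the app bundle.")
        return false
    }
    // Look for NumberLetter.data inside the CharacterModel subdirectory.
    let modelPath = resourceBundle.path(forResource: "NumberLetter",
                                        ofType: "data",
                                        inDirectory: "CharacterModel")
    if modelPath == nil {
        print("NumberLetter.data was not found under CharacterModel.")
    }
    return modelPath != nil
}
```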

### Initialize the License

1. Use the LicenseManager class to initialize the license in AppDelegate.

>- Objective-C
>- Swift
>
>1. 
```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [DSLicenseManager initLicense:@"DLS2eyJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSJ9" verificationDelegate:self];
    return YES;
}
- (void)onLicenseVerified:(BOOL)isSuccess error:(NSError *)error {
    if (error != nil) {
        NSString *msg = error.localizedDescription;
        NSLog(@"Server license verify failed, error:%@", msg);
    }
}
```
2. 
```swift
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    LicenseManager.initLicense("DLS2eyJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSJ9", verificationDelegate: self)
    return true
}
func onLicenseVerified(_ isSuccess: Bool, error: Error?) {
    if error != nil, let msg = error?.localizedDescription {
        print("Server license verify failed, error:\(msg)")
    }
}
```

   Note:

   - A network connection is required for the license to work.
   - The license string here grants a time-limited trial license.
   - You can request a 30-day trial license via the Request a Trial License link. An offline trial license is also available by contacting us.

### Initialize the Camera Module

1. Create instances of CameraEnhancer and CameraView in the ViewController.
>- Objective-C
>- Swift
>
>1. 
```objc
@property (nonatomic, strong) DSCameraEnhancer *dce;
@property (nonatomic, strong) DSCameraView *dceView;

- (void)configureDCE {
    _dceView = [[DSCameraView alloc] initWithFrame:self.view.bounds];
    _dceView.scanLaserVisible = YES;
    [self.view addSubview:_dceView];
    DSDrawingLayer *dlrDrawingLayer = [_dceView getDrawingLayer:DSDrawingLayerIdDLR];
    dlrDrawingLayer.visible = YES;
    _dce = [[DSCameraEnhancer alloc] initWithView:_dceView];
    [_dce open];
    // Set a scan region to restrict the scan area.
    DSRect *region = [[DSRect alloc] init];
    region.top = 0.4;
    region.bottom = 0.6;
    region.left = 0.1;
    region.right = 0.9;
    region.measuredInPercentage = YES;
    [_dce setScanRegion:region error:nil];
}
```
2. 
```swift
class ViewController: BaseViewController {
    private var dce: CameraEnhancer!
    private var dceView: CameraView!

    private func configureDCE() {
        dceView = CameraView(frame: self.view.bounds)
        dceView.scanLaserVisible = true
        self.view.addSubview(dceView)
        let dlrDrawingLayer = dceView.getDrawingLayer(DrawingLayerId.DLR.rawValue)
        dlrDrawingLayer?.visible = true
        dce = CameraEnhancer(view: dceView)
        dce.open()
        // Set a scan region to restrict the scan area.
        let region = Rect()
        region.top = 0.4
        region.bottom = 0.6
        region.left = 0.1
        region.right = 0.9
        region.measuredInPercentage = true
        try? dce.setScanRegion(region)
    }
}
```

### Initialize the Capture Vision Router

Create an instance of CaptureVisionRouter and bind it to the previously created CameraEnhancer instance.

>- Objective-C
>- Swift
>
>1. 
```objc
@property (nonatomic, strong) DSCaptureVisionRouter *cvr;
...
- (void)configureCVR {
    _cvr = [[DSCaptureVisionRouter alloc] init];
    // Set DCE as the input.
    [_cvr setInput:_dce error:nil];
}
```
2. 
```swift
class ViewController: BaseViewController {
    private var cvr: CaptureVisionRouter!
    ...
    private func configureCVR() {
        cvr = CaptureVisionRouter()
        // Set DCE as the input.
        try? cvr.setInput(dce)
    }
}
```

### Receive the Text Line Recognition Results

Set up a result callback to receive the text line recognition results once the camera is opened and scanning starts.

>- Objective-C
>- Swift
>
>1. 
```objc
// Add DSCapturedResultReceiver to the class.
@interface ViewController () <DSCapturedResultReceiver>

- (void)configureCVR {
    ...
    // Add a CapturedResultReceiver to receive results.
    [_cvr addResultReceiver:self];
}

- (void)onRecognizedTextLinesReceived:(DSRecognizedTextLinesResult *)result {
    if (result.items != nil) {
        // Parse results.
        int index = 0;
        NSMutableString *resultText = [NSMutableString string];
        for (DSTextLineResultItem *dlrLineResults in result.items) {
            index++;
            [resultText appendString:[NSString stringWithFormat:@"Result %d:%@\n", index, dlrLineResults.text != nil ? dlrLineResults.text : @""]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.resultView.text = [NSString stringWithFormat:@"Results(%d)\n%@", (int)result.items.count, resultText];
        });
    }
}
```
2. 
```swift
// Add CapturedResultReceiver to the class.
class ViewController: UIViewController, CapturedResultReceiver {
    ...
    private func configureCVR() {
        ...
        // Add a CapturedResultReceiver to receive results.
        cvr.addResultReceiver(self)
    }

    func onRecognizedTextLinesReceived(_ result: RecognizedTextLinesResult) {
        guard let items = result.items else { return }
        // Parse results.
        var resultText = ""
        var index = 0
        for dlrLineResults in items {
            index += 1
            resultText += String(format: "Result %d:%@\n", index, dlrLineResults.text ?? "")
        }
        DispatchQueue.main.async {
            self.resultView.text = String(format: "Results(%d)\n", items.count) + resultText
        }
    }
}
```

### Display the Recognized Text Line Results

Now that the result receiver is in place, let's create a text view to display the text line recognition results referenced in the result callback.

>- Objective-C
>- Swift
>
>1. 
```objc
@property (nonatomic, strong) UITextView *resultView;
...
- (UITextView *)resultView {
    if (!_resultView) {
        CGFloat left = 0.0;
        CGFloat width = self.view.bounds.size.width;
        CGFloat height = self.view.bounds.size.height / 2.5;
        CGFloat top = self.view.bounds.size.height - height;
        _resultView = [[UITextView alloc] initWithFrame:CGRectMake(left, top, width, height)];
        _resultView.layer.backgroundColor = [UIColor clearColor].CGColor;
        _resultView.layoutManager.allowsNonContiguousLayout = NO;
        _resultView.font = [UIFont systemFontOfSize:14.0 weight:UIFontWeightMedium];
        _resultView.textColor = [UIColor whiteColor];
        _resultView.textAlignment = NSTextAlignmentCenter;
    }
    return _resultView;
}

- (void)setupUI {
    [self.view addSubview:self.resultView];
}
```
2. 
```swift
class ViewController: UIViewController, CapturedResultReceiver {
    ...
    lazy var resultView: UITextView = {
        let left: CGFloat = 0.0
        let width = self.view.bounds.size.width
        let height = self.view.bounds.size.height / 2.5
        let top = self.view.bounds.size.height - height
        // Use a local constant: the lazy property can't refer to itself
        // inside its own initializer closure.
        let resultView = UITextView(frame: CGRect(x: left, y: top, width: width, height: height))
        resultView.layer.backgroundColor = UIColor.clear.cgColor
        resultView.layoutManager.allowsNonContiguousLayout = false
        resultView.isEditable = false
        resultView.isSelectable = false
        resultView.font = UIFont.systemFont(ofSize: 14.0, weight: .medium)
        resultView.textColor = UIColor.white
        resultView.textAlignment = .center
        return resultView
    }()

    private func setupUI() {
        self.view.addSubview(resultView)
    }
}
```

### Configure viewWillAppear, viewDidLoad

Time to configure the core functions that connect everything together. All of the configuration code, such as configureCVR and configureDCE, goes in viewDidLoad, while viewWillAppear contains the call that opens the camera and starts scanning.

>- Objective-C
>- Swift
>
>1. 
```objc
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // weakSelfs is a helper macro from the sample that creates a weak
    // reference (weakSelf) to avoid a retain cycle in the block below.
    weakSelfs(self)
    // Start capturing when the view appears.
    [self.cvr startCapturing:DSPresetTemplateRecognizeTextLines completionHandler:^(BOOL isSuccess, NSError * _Nullable error) {
        if (error != nil) {
            [weakSelf displayError:error.localizedDescription completion:nil];
        }
    }];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor whiteColor];
    // Initialize CaptureVisionRouter, CameraEnhancer and the text view you created.
    [self configureCVR];
    [self configureDCE];
    [self setupUI];
}

// ConfirmCompletion is assumed to be a block typedef (void (^)(void))
// defined elsewhere in the sample.
- (void)displayError:(NSString *)msg completion:(nullable ConfirmCompletion)completion {
    dispatch_async(dispatch_get_main_queue(), ^{
        UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"" message:msg preferredStyle:UIAlertControllerStyleAlert];
        UIAlertAction *action = [UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
            if (completion) completion();
        }];
        [alert addAction:action];
        [self presentViewController:alert animated:YES completion:nil];
    });
}
```
2. 
```swift
class ViewController: UIViewController, CapturedResultReceiver {
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Start capturing when the view appears.
        cvr.startCapturing(PresetTemplate.recognizeTextLines.rawValue) { [unowned self] isSuccess, error in
            if let error = error {
                self.displayError(msg: error.localizedDescription)
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = .white
        // Initialize CaptureVisionRouter, CameraEnhancer and the text view you created.
        configureCVR()
        configureDCE()
        setupUI()
    }

    // ConfirmCompletion is assumed to be a typealias for () -> Void
    // defined elsewhere in the sample.
    private func displayError(_ title: String = "", msg: String, _ acTitle: String = "OK", completion: ConfirmCompletion? = nil) {
        DispatchQueue.main.async {
            let alert = UIAlertController(title: title, message: msg, preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: acTitle, style: .default, handler: { _ in
                completion?()
            }))
            self.present(alert, animated: true, completion: nil)
        }
    }
}
```
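Optionally, you can release the camera when the page is dismissed. Below is a minimal sketch, assuming stopCapturing and close pair with the startCapturing and open calls used above:

```swift
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop the capture loop and release the camera when leaving this page.
    cvr.stopCapturing()
    dce.close()
}
```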

### Build and Run the Project

1. Before deploying the project, select the device that you want to run your app on.
2. Run the project; the app will then be installed on your device.

Note: Since the app accesses the camera, add an NSCameraUsageDescription entry (Privacy - Camera Usage Description) to the project's Info.plist before running, or iOS will terminate the app when the camera opens.
