---
layout: default-layout
title: iOS User Guide - Dynamsoft Label Recognizer
description: This is the user guide page of Dynamsoft Label Recognizer for iOS SDK.
keywords: iOS, swift, objective-c, user guide
needAutoGenerateSidebar: true
needGenerateH3Content: true
permalink: /programming/objectivec-swift/user-guide.html
---
# Dynamsoft Label Recognizer - iOS User Guide
## Requirements

- Supported OS: iOS 11+ (iOS 13+ recommended).
- Supported ABI: arm64 and x86_64.
- Development Environment: Xcode 13+ (Xcode 14.1+ recommended).
## Add the xcframeworks

The Dynamsoft Label Recognizer (DLR) iOS SDK comes with eight libraries:
| File | Description | Mandatory/Optional |
|:-----|:------------|:-------------------|
| `DynamsoftLabelRecognizer.xcframework` | The Dynamsoft Label Recognizer module identifies and recognizes text labels such as passport MRZs, ID cards, and VIN numbers. | Mandatory |
| `DynamsoftCore.xcframework` | The Dynamsoft Core module lays the foundation for Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. It encapsulates the basic classes, interfaces, and enumerations shared by these SDKs. | Mandatory |
| `DynamsoftCaptureVisionRouter.xcframework` | The Dynamsoft Capture Vision Router module is the cornerstone of the DCV (Dynamsoft Capture Vision) architecture. It focuses on coordinating batch image processing and provides APIs for setting up image sources and result receivers, configuring workflows with parameters, and controlling processes. | Mandatory |
| `DynamsoftImageProcessing.xcframework` | The Dynamsoft Image Processing module facilitates digital image processing and supports operations for other modules, including the Barcode Reader, Label Recognizer, and Document Normalizer. | Mandatory |
| `DynamsoftNeuralNetwork.xcframework` | The Dynamsoft Neural Network module allows SDKs compliant with the DCV (Dynamsoft Capture Vision) architecture to leverage the power of deep learning when processing digital images. | Mandatory |
| `DynamsoftLicense.xcframework` | The Dynamsoft License module manages the licensing aspects of Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. | Mandatory |
| `DynamsoftCameraEnhancer.xcframework` | The [Dynamsoft Camera Enhancer]({{ site.dce_android }}){:target="_blank"} module controls the camera, transforming it into an image source for the DCV (Dynamsoft Capture Vision) architecture through ISA implementation. It also enhances image quality during acquisition and provides basic viewers for user interaction. | Optional |
| `DynamsoftUtility.xcframework` | The Dynamsoft Utility module defines auxiliary classes, including the ImageManager, and implementations of the CRF (Captured Result Filter) and ISA (Image Source Adapter). These are shared by all Dynamsoft SDKs based on the DCV (Dynamsoft Capture Vision) architecture. | Optional |
There are two ways to add the libraries to your project: manually or via CocoaPods.
**Option 1: Add the xcframeworks manually**

1. Download the SDK package from the Dynamsoft website. After unzipping, eight xcframework files can be found in the `Dynamsoft/Frameworks` directory:

   - DynamsoftCaptureVisionRouter.xcframework
   - DynamsoftLabelRecognizer.xcframework
   - DynamsoftCore.xcframework
   - DynamsoftImageProcessing.xcframework
   - DynamsoftNeuralNetwork.xcframework
   - DynamsoftLicense.xcframework
   - DynamsoftUtility.xcframework
   - DynamsoftCameraEnhancer.xcframework

2. Drag and drop the above xcframeworks into your Xcode project. Make sure to check **Copy items if needed** and **Create groups** to copy the xcframeworks into your project's folder.

3. Click on the project settings, then go to **General > Frameworks, Libraries, and Embedded Content**. Set the **Embed** field to **Embed & Sign** for all the xcframeworks.
**Option 2: Add the xcframeworks via CocoaPods**

1. Add the frameworks in your Podfile, replacing `{Your project name}` with your actual target name:

   ```ruby
   target '{Your project name}' do
     use_frameworks!
     pod 'DynamsoftLabelRecognizerBundle', '3.2.3000'
   end
   ```

2. Execute the pod command to install the frameworks and generate the workspace (`{Your project name}.xcworkspace`):

   ```sh
   pod install
   ```
## Build Your First Application

The following sample demonstrates how to create a HelloWorld app that recognizes text from camera video input.

Note:

- The following steps were completed in Xcode 14.2.
- View the entire Objective-C source code from the ReadTextLinesWithCameraEnhancerObjc sample.
- View the entire Swift source code from the ReadTextLinesWithCameraEnhancer sample.
### Create a New Project

1. Open Xcode and select **Create a new Xcode project**, or use the **File > New > Project** menu, to create a new project.

2. Select **iOS > App** for your application.

3. Input your product name (HelloWorld), interface (Storyboard), and language (Objective-C/Swift).

4. Click **Next** and select the location to save the project.

5. Click **Create** to finish creating the new project.
### Include the Library

To add the SDK to your new project, please read the Add the xcframeworks section above for more details.
### Deploy the CharacterModels

A CharacterModel is a file trained to support text line recognition. Before implementing label recognition tasks, you must first include the required CharacterModels in your project.
1. Create a DynamsoftResources folder in Finder. Under that folder, create a CharacterModel folder.

2. Copy your CharacterModel file(s) to the CharacterModel folder. In this guide, we put NumberLetter.data in the CharacterModel folder.

3. Rename the DynamsoftResources folder's extension to .bundle.

4. Drag the DynamsoftResources.bundle into your project in Xcode.
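After these steps, the resource bundle should have the following layout (assuming the single NumberLetter.data model used in this guide):

```text
DynamsoftResources.bundle/
└── CharacterModel/
    └── NumberLetter.data
```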
### Initialize the License

Use the LicenseManager class and initialize the license in AppDelegate.

**Objective-C:**

```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [DSLicenseManager initLicense:@"DLS2eyJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSJ9" verificationDelegate:self];
    return YES;
}

- (void)onLicenseVerified:(BOOL)isSuccess error:(NSError *)error {
    if (error != nil) {
        NSString *msg = error.localizedDescription;
        NSLog(@"Server license verify failed, error:%@", msg);
    }
}
```

**Swift:**

```swift
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    LicenseManager.initLicense("DLS2eyJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSJ9", verificationDelegate: self)
    return true
}

func onLicenseVerified(_ isSuccess: Bool, error: Error?) {
    if error != nil, let msg = error?.localizedDescription {
        print("Server license verify failed, error:\(msg)")
    }
}
```

Note:

- A network connection is required for the license to work.
- The license string here will grant you a time-limited trial license.
- You can request a 30-day trial license via the Request a Trial License link. An offline trial license is also available by contacting us.
### Initialize the Camera Module

Create the instances of CameraEnhancer and CameraView in ViewController.
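The later snippets in this guide call a configureDCE helper whose body is not shown on this page. The following is a minimal Swift sketch of that step, based on the DynamsoftCameraEnhancer API; the property names cvr, dce, and dceView are assumptions chosen to match the other snippets:

```swift
import UIKit
import DynamsoftCameraEnhancer
import DynamsoftCaptureVisionRouter

class ViewController: UIViewController {
    var cvr: CaptureVisionRouter!
    var dce: CameraEnhancer!
    var dceView: CameraView!

    private func configureDCE() {
        // Create a CameraView that fills the screen and add it to the view hierarchy.
        dceView = CameraView(frame: view.bounds)
        dceView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(dceView)
        // Bind the CameraEnhancer to the view so the video stream is displayed.
        dce = CameraEnhancer(view: dceView)
        // Open the camera so frames start flowing once capturing begins.
        dce.open()
    }
}
```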
### Initialize the Capture Vision Router

Create an instance of CaptureVisionRouter and bind it with the already created instance of CameraEnhancer.
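The body of the configureCVR helper referenced later is likewise not shown on this page. A minimal sketch, assuming the same cvr and dce properties on ViewController:

```swift
import DynamsoftCaptureVisionRouter
import DynamsoftCameraEnhancer

// Inside ViewController:
private func configureCVR() {
    // Create the router that coordinates the recognition workflow.
    cvr = CaptureVisionRouter()
    // Bind the already created CameraEnhancer instance as the image source.
    try? cvr.setInput(dce)
}
```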
### Receive the Text Line Recognition Results

Set up a result callback in order to receive the text line recognition results after the camera is opened and scanning starts.

**Objective-C:**

```objc
// Add DSCapturedResultReceiver to the class.
@interface ViewController () <DSCapturedResultReceiver>
...

- (void)configureCVR {
    ...
    // Add a result receiver to receive results.
    [_cvr addResultReceiver:self];
}

- (void)onRecognizedTextLinesReceived:(DSRecognizedTextLinesResult *)result {
    if (result.items != nil) {
        // Parse results.
        int index = 0;
        NSMutableString *resultText = [NSMutableString string];
        for (DSTextLineResultItem *dlrLineResults in result.items) {
            index++;
            [resultText appendString:[NSString stringWithFormat:@"Result %d:%@\n", index, dlrLineResults.text != nil ? dlrLineResults.text : @""]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.resultView.text = [NSString stringWithFormat:@"Results(%d)\n%@", (int)result.items.count, resultText];
        });
    }
}
```

**Swift:**

```swift
// Add CapturedResultReceiver to the class.
class ViewController: UIViewController, CapturedResultReceiver {
    ...
    private func configureCVR() {
        ...
        // Add a CapturedResultReceiver to receive results.
        cvr.addResultReceiver(self)
    }

    func onRecognizedTextLinesReceived(_ result: RecognizedTextLinesResult) {
        guard let items = result.items else { return }
        // Parse results.
        var resultText = ""
        var index = 0
        for dlrLineResults in items {
            index += 1
            resultText += String(format: "Result %d:%@\n", index, dlrLineResults.text ?? "")
        }
        DispatchQueue.main.async {
            self.resultView.text = String(format: "Results(%d)\n", items.count) + resultText
        }
    }
}
```

### Display the Recognized Textline Results

Now that the result receiver is created, create the text view referenced in the result callback to display the text line recognition results.
**Objective-C:**

```objc
@property (nonatomic, strong) UITextView *resultView;
...

- (UITextView *)resultView {
    if (!_resultView) {
        CGFloat left = 0.0;
        CGFloat width = self.view.bounds.size.width;
        CGFloat height = self.view.bounds.size.height / 2.5;
        CGFloat top = self.view.bounds.size.height - height;
        _resultView = [[UITextView alloc] initWithFrame:CGRectMake(left, top, width, height)];
        _resultView.layer.backgroundColor = [UIColor clearColor].CGColor;
        _resultView.layoutManager.allowsNonContiguousLayout = NO;
        _resultView.font = [UIFont systemFontOfSize:14.0 weight:UIFontWeightMedium];
        _resultView.textColor = [UIColor whiteColor];
        _resultView.textAlignment = NSTextAlignmentCenter;
    }
    return _resultView;
}

- (void)setupUI {
    [self.view addSubview:self.resultView];
}
```

**Swift:**

```swift
class ViewController: UIViewController, CapturedResultReceiver {
    ...
    lazy var resultView: UITextView = {
        let left = 0.0
        let width = self.view.bounds.size.width
        let height = self.view.bounds.size.height / 2.5
        let top = self.view.bounds.size.height - height
        let resultView = UITextView(frame: CGRect(x: left, y: top, width: width, height: height))
        resultView.layer.backgroundColor = UIColor.clear.cgColor
        resultView.layoutManager.allowsNonContiguousLayout = false
        resultView.isEditable = false
        resultView.isSelectable = false
        resultView.font = UIFont.systemFont(ofSize: 14.0, weight: .medium)
        resultView.textColor = UIColor.white
        resultView.textAlignment = .center
        return resultView
    }()

    private func setupUI() {
        self.view.addSubview(resultView)
    }
}
```

### Configure viewWillAppear, viewDidLoad

Time to configure the core functions that connect everything together. All of the configuration code, such as configureCVR and configureDCE, goes in viewDidLoad, while viewWillAppear contains the method that opens the camera and starts scanning.
**Objective-C:**

```objc
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    weakSelfs(self)
    // Start capturing when the view appears.
    [self.cvr startCapturing:DSPresetTemplateRecognizeTextLines completionHandler:^(BOOL isSuccess, NSError * _Nullable error) {
        if (error != nil) {
            [weakSelf displayError:error.localizedDescription completion:nil];
        }
    }];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor whiteColor];
    // Initialize CaptureVisionRouter, CameraEnhancer and the text view you created.
    [self configureCVR];
    [self configureDCE];
    [self setupUI];
}

- (void)displayError:(NSString *)msg completion:(nullable ConfirmCompletion)completion {
    dispatch_async(dispatch_get_main_queue(), ^{
        UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"" message:msg preferredStyle:UIAlertControllerStyleAlert];
        UIAlertAction *action = [UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
            if (completion) completion();
        }];
        [alert addAction:action];
        [self presentViewController:alert animated:YES completion:nil];
    });
}
```

**Swift:**

```swift
class ViewController: UIViewController, CapturedResultReceiver {
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Start capturing when the view appears.
        cvr.startCapturing(PresetTemplate.recognizeTextLines.rawValue) { [unowned self] isSuccess, error in
            if let error = error {
                self.displayError(msg: error.localizedDescription)
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = .white
        // Initialize CaptureVisionRouter, CameraEnhancer and the text view you created.
        configureCVR()
        configureDCE()
        setupUI()
    }

    private func displayError(_ title: String = "", msg: String, _ acTitle: String = "OK", completion: ConfirmCompletion? = nil) {
        DispatchQueue.main.async {
            let alert = UIAlertController(title: title, message: msg, preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: acTitle, style: .default, handler: { _ in completion?() }))
            self.present(alert, animated: true, completion: nil)
        }
    }
}
```

### Build and Run the Project

- Before deploying the project, select the device that you want to run your app on.
- Run the project, then your app will be installed on your device.
Note:

- You can get the source code of the HelloWorld app from the following links:
  - View the entire Objective-C source code from the ReadTextLinesWithCameraEnhancerObjc sample.
  - View the entire Swift source code from the ReadTextLinesWithCameraEnhancer sample.