SwiftyTesseract

A Swift wrapper around Tesseract for use in iOS applications



Using SwiftyTesseract in Your Project

Import the module

import SwiftyTesseract

There are two ways to quickly instantiate SwiftyTesseract without altering the default values. With one language:

let swiftyTesseract = SwiftyTesseract(language: .english)

Or with multiple languages:

let swiftyTesseract = SwiftyTesseract(languages: [.english, .french, .italian])

To perform OCR, simply pass a UIImage to the performOCR(on:completionHandler:) method and handle the recognized string in the completion handler:

guard let image = UIImage(named: "someImageWithText.jpg") else { return }
swiftyTesseract.performOCR(on: image) { recognizedString in
  guard let recognizedString = recognizedString else { return }
  print("Recognized text: \(recognizedString)")
}

A Note on Initializer Defaults

The full signature of the primary SwiftyTesseract initializer is

public init(languages: [RecognitionLanguage],
            bundle: Bundle = .main,
            engineMode: EngineMode = .lstmOnly)

The bundle parameter is required to locate the tessdata folder; it only needs to be changed if SwiftyTesseract is not being used from your primary bundle. The engine mode dictates the type of .traineddata files to put into your tessdata folder. .lstmOnly was chosen as the default because of the higher speed and reliability observed during testing, but the best choice can vary depending on the language being recognized as well as the image itself. See Which Language Training Data Should You Use? for more information on the different types of .traineddata files that can be used with SwiftyTesseract.
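If the defaults don't fit your project, both parameters can be supplied explicitly. The sketch below is only illustrative: "com.example.MyOCRFramework" is a hypothetical bundle identifier standing in for wherever your tessdata folder actually lives, not something SwiftyTesseract defines.

// Hypothetical framework bundle that contains the tessdata folder reference;
// fall back to .main if the bundle cannot be found.
let ocrBundle = Bundle(identifier: "com.example.MyOCRFramework") ?? .main
let swiftyTesseract = SwiftyTesseract(
  languages: [.english, .french],
  bundle: ocrBundle,
  engineMode: .tesseractLstmCombined
)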



CocoaPods Installation

Tested with pod --version: 1.3.1

# Podfile

target 'YOUR_TARGET_NAME' do
    pod 'SwiftyTesseract', '~> 2.0'
end

Replace YOUR_TARGET_NAME and then, in the Podfile directory, type:

$ pod install


Carthage Installation

Tested with carthage version: 0.29.0

Add this to your Cartfile:

github "SwiftyTesseract/SwiftyTesseract" ~> 2.0

Then, in the project directory, run:

$ carthage update

Additional Configuration

  1. Download the appropriate language training files from the tessdata, tessdata_best, or tessdata_fast repositories.
  2. Place your language training files into a folder on your computer named tessdata.
  3. Drag the folder into your project. You must ensure that "Create folder references" is selected or SwiftyTesseract will not be successfully instantiated (a quick sanity check is sketched below).
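The following is not part of SwiftyTesseract's API, just a standard Bundle lookup you can run (in the app or a unit test) to confirm that the folder reference actually made it into the app bundle:

// Folder references are copied into the app bundle as a directory named "tessdata".
if let tessdataURL = Bundle.main.url(forResource: "tessdata", withExtension: nil) {
  print("tessdata folder found at \(tessdataURL)")
} else {
  print("tessdata folder missing - check that it was added as a folder reference")
}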

Which Language Training Data Should You Use?

There are three different types of .traineddata files that can be used with SwiftyTesseract: tessdata, tessdata_best, and tessdata_fast, corresponding to the SwiftyTesseract EngineModes .tesseractOnly, .lstmOnly, and .tesseractLstmCombined.

.tesseractOnly uses the legacy Tesseract engine and can only use language training files from the tessdata repository. During testing of SwiftyTesseract, the .tesseractOnly engine mode was found to be the least reliable.

.lstmOnly uses a long short-term memory recurrent neural network to perform OCR and can use language training files from the tessdata, tessdata_best, or tessdata_fast repositories. During testing, tessdata_best provided the most reliable results at the cost of speed, while tessdata_fast provided results comparable to tessdata (when used with .lstmOnly) and was faster than both tessdata and tessdata_best.

.tesseractLstmCombined can only use language files from the tessdata repository, and its results and speed were on par with tessdata_best.

For most cases, .lstmOnly along with the tessdata_fast language training files will likely be the best option, but this could vary depending on the language and the application of SwiftyTesseract in your project.
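The sketch below simply restates those pairings in code; the matching .traineddata files still need to be present in your tessdata folder.

// .tesseractOnly         -> tessdata only (legacy engine)
// .lstmOnly              -> tessdata, tessdata_best, or tessdata_fast
// .tesseractLstmCombined -> tessdata only
let legacyEngine = SwiftyTesseract(languages: [.english], engineMode: .tesseractOnly)
let lstmEngine = SwiftyTesseract(languages: [.english], engineMode: .lstmOnly)
let combinedEngine = SwiftyTesseract(languages: [.english], engineMode: .tesseractLstmCombined)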

Custom Trained Data

Using custom training data has been supported since SwiftyTesseract 1.1.0; version 2.0.0 changed how custom training data is supplied. The setup steps are the same as the instructions provided in Additional Configuration. To utilize custom .traineddata files, simply use the .custom(String) case of RecognitionLanguage:

let swiftyTesseract = SwiftyTesseract(language: .custom("custom-traineddata-file-prefix"))

For example, if you wanted to use the MRZ-optimized OCRB.traineddata file provided by Exteris/tesseract-mrz, the instance of SwiftyTesseract would be created like this:

let swiftyTesseract = SwiftyTesseract(language: .custom("OCRB"))

You may also include first-party Tesseract language training files alongside custom training files:

let swiftyTesseract = SwiftyTesseract(languages: [.custom("OCRB"), .english])

Recognition Results

When it comes to OCR, the adage "garbage in, garbage out" applies, and SwiftyTesseract is no different. The underlying Tesseract engine will process the image and return anything that it believes is text. For example, giving SwiftyTesseract a raw, unprocessed image yields the following:

a lot of gibberish...
‘o 1 $ : M |
© 1 3 1; ie oI
LW 2 = o .C P It R <0f
O — £988 . 18 |
SALE + . < m m & f f |
7 Abt | | . 3 I] R I|
3 BE? | is —bB (|
* , § Be x I 3 |
...a lot more gibberish

You can see that it picked SALE out of the picture, but it still attempted to read everything else surrounding it, regardless of orientation. It is up to the individual developer to determine the appropriate way to edit and transform the image so that SwiftyTesseract can produce predictable results. Originally, SwiftyTesseract was intended to be an out-of-the-box solution; however, the logic being added to the project made too many assumptions, and it did not seem right to force any particular implementation onto potential adopters. SwiftyTesseractRTE provides a ready-made solution that can be implemented in a project with a few lines of code, should suit most needs, and is a better place to start if your goal is to get OCR into an application with little effort.

Contributions Welcome

SwiftyTesseract does not currently implement the full Tesseract API, so if there is functionality that you would like implemented, create an issue and open a pull request! Please see Contributing to SwiftyTesseract for the full guidelines on creating issues and opening pull requests to the project.


Official documentation for SwiftyTesseract can be found here


SwiftyTesseract would not be possible without the work done by the Tesseract team. Special thanks also go out to Tesseract-OCR-iOS for the Makefiles that were tweaked to build Tesseract and its dependencies for use on iOS architectures.

SwiftyTesseract bundles Tesseract and its dependencies as binaries. The full list of dependencies is as follows:
