Spark NLP

Natural Language Understanding Library for Apache Spark.

John Snow Labs Spark-NLP is a natural language processing library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that scale easily in a distributed environment.

Project's website

Take a look at our official spark-nlp page for user documentation and examples.

Slack community channel

Questions? Feedback? Request access by sending us an email.

Apache Spark Support

Spark-NLP 2.0.1 has been built on top of Apache Spark 2.4.0

Note that Spark 2.4 is not backwards compatible with Spark 2.3.x, so models and environments might not work.

If you are still stuck on Spark 2.3.x, feel free to use this assembly JAR instead; support is limited. For the OCR module, a separate Spark 2.3.x assembly JAR is available.

Spark NLP   Spark 2.3.x   Spark 2.4
2.x.x       NO            YES
1.8.x       Partially     YES
1.7.3       YES           N/A
1.6.3       YES           N/A
1.5.0       YES           N/A

Find out more about Spark-NLP versions from our release notes.

Spark Packages

Command line (requires internet connection)

This library has been uploaded to the spark-packages repository.

The benefit of spark-packages is that it makes the library available for both Scala/Java and Python.

To use the most recent version, just add --packages JohnSnowLabs:spark-nlp:2.0.1 to your spark command:

spark-shell --packages JohnSnowLabs:spark-nlp:2.0.1
pyspark --packages JohnSnowLabs:spark-nlp:2.0.1
spark-submit --packages JohnSnowLabs:spark-nlp:2.0.1

The same coordinates can also be used to create a SparkSession manually via the spark.jars.packages option, in both Python and Scala (see the SparkSession example in the Python section below).

Compiled JARs

Build from source

Spark NLP

  • FAT-JAR for CPU
sbt assembly
  • FAT-JAR for GPU
sbt -Dis_gpu=true assembly
  • Packaging the project
sbt package


Spark NLP OCR

Requires native Tesseract 4.x+ for image-based OCR. It does not require Spark-NLP to work, but using the two together is highly recommended.

  • FAT-JAR for OCR
sbt ocr/assembly
  • Packaging the project
sbt ocr/package

Using the jar manually

If for some reason you need to use the JAR directly, you can either download the FAT JARs provided here or get them from Maven Central.

To add JARs to Spark programs, use the --jars option:

spark-shell --jars spark-nlp.jar

The preferred way to use the library when running spark programs is using the --packages option as specified in the spark-packages section.


Our package is deployed to Maven Central. In order to add this package as a dependency in your application:

Maven (the XML below mirrors the SBT coordinates; Spark NLP artifacts carry the Scala 2.11 suffix):

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.11</artifactId>
    <version>2.0.1</version>
</dependency>

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-ocr_2.11</artifactId>
    <version>2.0.1</version>
</dependency>

SBT:

libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.0.1"

SBT (OCR):

libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.0.1"


Python without explicit Pyspark installation


If you installed pyspark through pip, you can install spark-nlp through pip as well.

pip install spark-nlp==2.0.1

PyPI spark-nlp package


If you are using Anaconda/Conda for managing Python packages, you can install spark-nlp as follows:

conda install -c johnsnowlabs spark-nlp

Anaconda spark-nlp package

Then you'll have to create a SparkSession manually, for example:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config("spark.driver.maxResultSize", "2G") \
    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.0.1") \
    .config("spark.kryoserializer.buffer.max", "500m") \
    .getOrCreate()

If using local JARs, you can use spark.jars instead, with a comma-delimited list of JAR files. For cluster setups, you'll of course have to put the JARs in a location reachable by all driver and executor nodes.
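For instance, a minimal sketch of the spark.jars variant (the JAR path below is illustrative):

from pyspark.sql import SparkSession

# Point spark.jars at local files; separate multiple JARs with commas
spark = SparkSession.builder \
    .config("spark.jars", "/opt/jars/spark-nlp-assembly-2.0.1.jar") \
    .getOrCreate()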

Apache Zeppelin

Use either of the following options:

  • Add the Maven coordinates JohnSnowLabs:spark-nlp:2.0.1 to the interpreter's library list
  • Add the path to a pre-built JAR from here to the interpreter's library list, making sure the JAR is available on the driver path

Python in Zeppelin

Apart from the previous step, install the Python module through pip:

pip install spark-nlp==2.0.1

Or you can install spark-nlp from inside Zeppelin by using Conda:

%python.conda install -c johnsnowlabs spark-nlp

Configure Zeppelin properly and use cells with %spark.pyspark or whatever interpreter name you chose.
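As a quick sanity check, a paragraph like the following should run once the package is on the classpath (a sketch; sparknlp.version() is assumed to be available in the installed module):

%spark.pyspark
import sparknlp
print(sparknlp.version())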

Finally, in the Zeppelin interpreter settings, make sure you set zeppelin.python to the Python binary you want to use and that you installed the pip library with (e.g. python3).

An alternative option is to set SPARK_SUBMIT_OPTIONS (in conf/zeppelin-env.sh) and make sure --packages is there as shown earlier, since it includes both the Scala and Python sides of the installation.

Jupyter Notebook (Python)

The easiest way to get this done is to make Jupyter Notebook run using pyspark as follows:

export SPARK_HOME=/path/to/your/spark/folder
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS=notebook

pyspark --packages JohnSnowLabs:spark-nlp:2.0.1

Alternatively, you can mix in the --jars option for pyspark together with pip install spark-nlp.

If not using pyspark at all, you'll have to run the instructions pointed to here.

S3 Cluster

With no Hadoop configuration

If your distributed storage is S3 and you don't have a standard Hadoop configuration (i.e. fs.defaultFS), you need to specify where in the cluster's distributed storage you want to store Spark-NLP's tmp files. First, decide where you want to put your application.conf file:

import com.johnsnowlabs.util.ConfigLoader
// Point Spark NLP at the chosen location (the path is illustrative)
ConfigLoader.setConfigPath("/somewhere/to/put/application.conf")

And then put the following content in that application.conf:

sparknlp {
  settings {
    cluster_tmp_dir = "somewhere in s3n:// path to some folder"
  }
}

Models and Pipelines


Pipelines (English):

Pipeline               Name
Explain Document ML    explain_document_ml
Explain Document DL    explain_document_dl
Entity Recognizer DL   entity_recognizer_dl

Models (English):

LemmatizerModel (Lemmatizer)
PerceptronModel (POS)
ViveknSentimentModel (Sentiment)
NerCRFModel (NER)
NerDLModel (NER)
SymmetricDeleteModel (Spell Checker)
ContextSpellCheckerModel (Spell Checker)
NorvigSweetingModel (Spell Checker)

Models (Italian):

LemmatizerModel (Lemmatizer)
SentimentDetector (Sentiment)

Models (French):

PerceptronModel (POS UD-GSD)

How to use Models and Pipelines

To use Spark NLP online pretrained pipelines, you can call PretrainedPipeline with the pipeline's name and its language:

from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline('explain_document_dl', lang='en')
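Once loaded, a pipeline can annotate plain text directly. A minimal sketch (the keys of the returned dict depend on the pipeline's annotators):

# annotate() runs the whole pipeline on a single string and returns a dict of outputs
result = pipeline.annotate('Harry Potter is a great movie.')
print(result['ner'])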

To use Spark NLP online pretrained models:

ner = NerDLModel.pretrained()
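A pretrained model is a regular annotator that still needs its input and output columns configured before it goes into a pipeline. A minimal sketch, assuming the usual sentence/token/embeddings inputs for NerDLModel:

from sparknlp.annotator import NerDLModel

# The column names are assumptions; they must match the upstream annotators in your pipeline
ner = NerDLModel.pretrained() \
    .setInputCols(['sentence', 'token', 'embeddings']) \
    .setOutputCol('ner')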

If you have any trouble using online pipelines or models in your environment (maybe it's air-gapped), you can directly download them for offline use.

After downloading offline models/pipelines and extracting them, here is how you can use them inside your code (the path could be a shared storage like HDFS in a cluster):

  • Loading PerceptronModel annotator model inside Spark NLP Pipeline
val pos = PerceptronModel.load("/tmp/pos_ud-gsd_fr_2.0.0_2.4_1553029753307/")
      .setInputCols("document", "token")
      .setOutputCol("pos")
  • Loading Offline Pipeline
val advancedPipeline = PipelineModel.load("/tmp/explain_document_dl_en_2.0.0_2.4_1553227894237/")
// Use the loaded pipeline for prediction (predictionDF is assumed to be a DataFrame of raw text)
advancedPipeline.transform(predictionDF)
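The same works from Python, since a saved pipeline is a regular Spark ML PipelineModel. A minimal sketch, assuming an existing SparkSession named spark and a raw-text input column called text:

from pyspark.ml import PipelineModel

pipeline = PipelineModel.load("/tmp/explain_document_dl_en_2.0.0_2.4_1553227894237/")
df = spark.createDataFrame([("Harry Potter is a great movie.",)], ["text"])
pipeline.transform(df).show()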


Need more examples? Check out our dedicated spark-nlp-workshop repository, showcasing Spark NLP use cases!


Check our Articles and FAQ page here



  • Q: I am getting a Java Core Dump when running an OCR transformation

    • A: Add the LC_ALL=C environment variable (e.g. export LC_ALL=C before launching Spark)
  • Q: I am getting org.apache.pdfbox.filter.MissingImageReaderException: Cannot read JPEG2000 image: Java Advanced Imaging (JAI) Image I/O Tools are not installed when running an OCR transformation

    • A: Add --packages com.github.jai-imageio:jai-imageio-jpeg2000:1.3.0 to your Spark command. This library is non-free, so we can't include it as a Spark-NLP dependency by default


Special community acknowledgments

Thanks in general to the community, who have lately been reporting important issues and submitting pull requests with bugfixes. The community has been key in the latest releases, providing feedback in various Spark-based environments.

Here are a few specific mentions for recurring feedback and Slack participation:

  • @maziyarpanahi - For contributing with testing and valuable feedback
  • @easimadi - For contributing with documentation and valuable feedback


We appreciate any sort of contributions:

  • ideas
  • feedback
  • documentation
  • bug reports
  • nlp training and testing corpora
  • development and testing

Clone the repo and submit your pull requests! Or directly create issues in this repo.


John Snow Labs
