Stanford CoreNLP provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize and interpret dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases or word dependencies, and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, Hungarian, Italian, and Spanish. Stanford CoreNLP is an integrated framework, which makes it very easy to apply a bunch of language analysis tools to a piece of text. Starting from plain text, you can run all the tools with just two lines of code.

Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, industry, and government. The tools variously use rule-based, probabilistic machine learning, and deep learning components.
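As a quick illustration of the "two lines of code" claim, here is a minimal sketch of driving the pipeline from Java. The annotator list and the sample sentence are illustrative choices, and the default English models jar is assumed to be on the classpath:

```java
import java.util.Properties;

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class PipelineSketch {
  public static void main(String[] args) {
    // Choose which annotators to run; this list is one reasonable starting point.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");

    // Build the pipeline, then annotate a document.
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    CoreDocument doc = new CoreDocument("Stanford University is located in California.");
    pipeline.annotate(doc);

    // Inspect a few of the resulting annotations, sentence by sentence.
    for (CoreSentence sentence : doc.sentences()) {
      System.out.println("POS tags: " + sentence.posTags());
      System.out.println("Lemmas:   " + sentence.lemmas());
      System.out.println("NER tags: " + sentence.nerTags());
    }
  }
}
```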
The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v2 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute to others.
Several times a year we distribute a new version of the software, which corresponds to a stable commit.
During the time between releases, one can always use the latest, under-development version of our code.
Here are some helpful instructions to use the latest code:
Sometimes we will provide updated jars here which have the latest version of the code.
At present, the most recent released jar corresponds to the current released version of the code, though you can always build the very latest from GitHub HEAD yourself.
- Make sure you have Ant installed; details here: http://ant.apache.org/
- Compile the code with this command:
cd CoreNLP ; ant
- Then run this command to build a jar with the latest version of the code:
cd CoreNLP/classes ; jar -cf ../stanford-corenlp.jar edu
- This will create a new jar called stanford-corenlp.jar, containing the latest code, in the CoreNLP folder.
- The dependencies that work with the latest code are in CoreNLP/lib and CoreNLP/liblocal, so make sure to include those in your CLASSPATH.
- When using the latest version of the code, make sure to download the latest versions of the corenlp-models, english-extra-models, and english-kbp-models and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
- Make sure you have Maven installed; details here: https://maven.apache.org/
- If you run this command in the CoreNLP directory:
mvn package
it should run the tests and build this jar file: CoreNLP/target/stanford-corenlp-4.5.2.jar
- When using the latest version of the code, make sure to download the latest versions of the corenlp-models, english-extra-models, and english-kbp-models and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
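As one example of what the english-kbp-models jar provides resources for, here is a hedged sketch of running relation extraction with the kbp annotator. The annotator list follows the usual kbp setup, and the sentence is illustrative:

```java
import java.util.Properties;

import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class KbpSketch {
  public static void main(String[] args) {
    // The kbp annotator depends on several upstream annotators; this list is
    // the usual setup and assumes the English models and kbp jars are available.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,coref,kbp");

    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    CoreDocument doc = new CoreDocument("Joe Smith was born in Oregon.");
    pipeline.annotate(doc);

    // Print any (subject, relation, object) triples that were extracted.
    for (CoreSentence sentence : doc.sentences()) {
      for (RelationTriple triple : sentence.relations()) {
        System.out.println(triple.subjectGloss() + " | "
            + triple.relationGloss() + " | "
            + triple.objectGloss());
      }
    }
  }
}
```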
- If you want to use Stanford CoreNLP as part of a Maven project, you need to install the models jars into your Maven repository. Below is a sample command for installing the Spanish models jar. For other languages just change the language name in the command. To install stanford-corenlp-models-current.jar you will need to set -Dclassifier=models. Here is the sample command for Spanish:
mvn install:install-file -Dfile=/location/of/stanford-spanish-corenlp-models-current.jar -DgroupId=edu.stanford.nlp -DartifactId=stanford-corenlp -Dversion=4.5.2 -Dclassifier=models-spanish -Dpackaging=jar
The models jars that correspond to the latest code can be found in the table below.
Some of the larger (English) models -- like the shift-reduce parser and WikiDict -- are not distributed with our default models jar. These require downloading the English (extra) and English (kbp) jars. Resources for other languages require usage of the corresponding models jar.
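For instance, here is a hedged sketch of pointing the parse annotator at the shift-reduce model, assuming the English (extra) jar is on the classpath; the parse.model path below is the standard location under which that model is bundled:

```java
import java.util.Properties;

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class ShiftReduceSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    // The shift-reduce parser needs POS tags, so run the pos annotator first.
    props.setProperty("annotators", "tokenize,ssplit,pos,parse");
    // Point the parse annotator at the shift-reduce model from the English (extra) jar.
    props.setProperty("parse.model", "edu/stanford/nlp/models/srparser/englishSR.ser.gz");

    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    CoreDocument doc = new CoreDocument("The quick brown fox jumps over the lazy dog.");
    pipeline.annotate(doc);

    // Print the constituency parse of each sentence.
    for (CoreSentence sentence : doc.sentences()) {
      System.out.println(sentence.constituencyParse());
    }
  }
}
```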
The best way to get the models is to use git-lfs and clone them from Hugging Face Hub.
For instance, to get the French models, run the following commands:
# Make sure you have git-lfs installed
# (https://git-lfs.github.com/)
git lfs install
git clone https://huggingface.co/stanfordnlp/corenlp-french
The jars can also be downloaded directly from the links below or from the Hugging Face Hub pages.
Language | Model Jar | Version |
---|---|---|
Arabic | download (HF Hub) | 4.5.4 |
Chinese | download (HF Hub) | 4.5.4 |
English (extra) | download (HF Hub) | 4.5.4 |
English (KBP) | download (HF Hub) | 4.5.4 |
French | download (HF Hub) | 4.5.4 |
German | download (HF Hub) | 4.5.4 |
Hungarian | download (HF Hub) | 4.5.4 |
Italian | download (HF Hub) | 4.5.4 |
Spanish | download (HF Hub) | 4.5.4 |
Thank you to Hugging Face for helping with our hosting!
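Once a language-specific models jar is on the classpath, a common way to build a pipeline for that language is to load the properties file bundled in the jar (StanfordCoreNLP-french.properties for French; other languages follow the same naming pattern). Here is a hedged sketch for French:

```java
import java.util.Properties;

import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class FrenchPipelineSketch {
  public static void main(String[] args) throws Exception {
    // Load the default French settings bundled in the French models jar.
    Properties props = new Properties();
    props.load(IOUtils.readerFromString("StanfordCoreNLP-french.properties"));

    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    CoreDocument doc = new CoreDocument("Le chat dort sur le canapé.");
    pipeline.annotate(doc);

    // Print the part-of-speech tags chosen by the French models.
    for (CoreSentence sentence : doc.sentences()) {
      System.out.println(sentence.posTags());
    }
  }
}
```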
If you don't know Gradle, see the official site: https://gradle.org
Add the following to your build.gradle to get Stanford CoreNLP from Maven Central:
dependencies {
implementation 'edu.stanford.nlp:stanford-corenlp:4.5.2'
}
If you want to analyse English text, also add the following:
implementation "edu.stanford.nlp:stanford-corenlp:4.5.2:models"
implementation "edu.stanford.nlp:stanford-corenlp:4.5.2:models-english"
implementation "edu.stanford.nlp:stanford-corenlp:4.5.2:models-english-kbp"
If you use another version, replace "4.5.2" with the version you use.
You can find releases of Stanford CoreNLP on Maven Central.
You can find more explanation and documentation on the Stanford CoreNLP homepage.
For information about making contributions to Stanford CoreNLP, see the file CONTRIBUTING.md.
Questions about CoreNLP can be posted either on Stack Overflow with the tag stanford-nlp or on the mailing lists.