The jiant toolkit for general-purpose text understanding models

jiant is a software toolkit for natural language processing research, designed to facilitate work on multitask learning and transfer learning for sentence understanding tasks.

A few things you might want to know about jiant:

Getting Started

For setup instructions and a simple demo experiment using data from GLUE, follow the getting started tutorial!

Official Documentation

Our official documentation is here:


To run an experiment, make a config file similar to config/demo.conf with your model configuration. In addition, you can use the --overrides flag to override specific variables. For example:

python main.py --config_file config/demo.conf \
    --overrides "exp_name = my_exp, run_name = foobar, d_hid = 256"

will run the demo config, but write output to $JIANT_PROJECT_PREFIX/my_exp/foobar. Before running the demo config, you will need to set the following environment variables:

  • $JIANT_PROJECT_PREFIX: the directory where experiment outputs will be saved.
  • $JIANT_DATA_DIR: the location of the saved data. In a simple default setup, this is usually the location of the GLUE data.
  • $WORD_EMBS_FILE: the location of any word embeddings you want to use (not necessary when using ELMo, GPT, or BERT). You can download GloVe (840B) here or fastText (2M) here. To have this run automatically, follow the instructions in scripts/
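The variables above can be exported in your shell before launching an experiment. A minimal sketch, where every path is a placeholder you should replace with your own directories:

```shell
# Placeholder paths -- substitute your own locations.
export JIANT_PROJECT_PREFIX=/tmp/jiant_experiments   # experiment outputs land here
export JIANT_DATA_DIR=/tmp/jiant_data                # e.g. your GLUE data directory
export WORD_EMBS_FILE=/tmp/embs/glove.840B.300d.txt  # omit when using ELMo, GPT, or BERT
```

With these set, the demo command above would write its results under /tmp/jiant_experiments/my_exp/foobar.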

Suggested Citation

If you use jiant in academic work, please cite it directly:

@misc{wang2019jiant,
    author = {Alex Wang and Ian F. Tenney and Yada Pruksachatkun and Katherin Yu and Jan Hula and Patrick Xia and Raghu Pappagari and Shuning Jin and R. Thomas McCoy and Roma Patel and Yinghui Huang and Jason Phang and Edouard Grave and Najoung Kim and Phu Mon Htut and Thibault F\'{e}vry and Berlin Chen and Nikita Nangia and Haokun Liu and Anhad Mohananey and Shikha Bordia and Nicolas Patry and Ellie Pavlick and Samuel R. Bowman},
    title = {{jiant} 1.1: A software toolkit for research on general-purpose text understanding models},
    howpublished = {\url{}},
    year = {2019}
}


jiant has been used in these four papers so far:

To exactly reproduce experiments from the ELMo's Friends paper, use the jsalt-experiments branch. That branch contains a snapshot of the code as of early August, potentially with updated documentation.

For the edge probing paper and the BERT layer paper, see the probing/ directory.

For the function word probing paper, use this branch and refer to the instructions in the scripts/fwords/ directory.

Getting Help

Post an issue here on GitHub if you have any problems, and create a pull request if you make any improvements (substantial or cosmetic) to the code that you're willing to share.


We use the black coding style with a line limit of 100. After installing the requirements, simply running pre-commit install should ensure that you comply with this in all your future commits. If you're adding features or fixing a bug, please also add tests.

For any PR, make sure to update any existing conf files, tutorials, and scripts to match your changes. If your PR adds or changes functionality that can be directly tested, add or update a test.

For PRs that typical users will need to be aware of, make a matching PR to the documentation. We will merge that documentation PR once the original PR is merged in and pushed out in a release. (Proposals for better ways to do this are welcome.)


This package is released under the MIT License. The material in the allennlp_mods directory is based on AllenNLP, which was originally released under the Apache 2.0 license.


  • Part of the development of jiant took place at the 2018 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, and was supported by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, Microsoft and Mitsubishi Electric Research Laboratories.
  • This work was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program.
  • We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU in this work.
  • Developer Alex Wang is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
  • Developer Yada Pruksachatkun is supported by the Moore-Sloan Data Science Environment as part of the NYU Data Science Services initiative.