🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry.

FARM LOGO

(Framework for Adapting Representation Models)

Build Release License Last Commit

What is it?

FARM makes cutting-edge transfer learning for NLP simple. Building upon transformers, FARM is a home for all species of pretrained language models (e.g. BERT) that can be adapted to different domain languages or downstream tasks. With FARM you can easily create SOTA NLP models for tasks like document classification, NER or question answering. The standardized interfaces for language models and prediction heads allow flexible extension by researchers and easy application for practitioners. Experiment tracking and visualizations additionally support you in adapting a SOTA model to your own NLP problem and building a fast proof of concept.

Core features

  • Easy adaptation of language models (e.g. BERT) to your own use case
  • Fast integration of custom datasets via Processor class
  • Modular design of language model and prediction heads
  • Switch between heads or just combine them for multitask learning
  • Smooth upgrading to new language models
  • Powerful experiment tracking & execution
  • Simple deployment and visualization to showcase your model
  • Tasks: Question Answering, LM Domain Adaptation, NER, (Multilabel) Doc Classification

Installation

Recommended (because of active development):

git clone https://github.com/deepset-ai/FARM.git
cd FARM
pip install -r requirements.txt
pip install --editable .

If problems occur, please do a git pull first. Thanks to the --editable flag, changes in the cloned code take effect immediately without reinstalling.

From PyPI:

pip install farm

Basic Usage

1. Train a downstream model

FARM offers two modes for model training:

Option 1: Run experiment(s) from config

https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/code_snippet_experiment.png

Use cases: Training your first model, hyperparameter optimization, evaluating a language model on multiple downstream tasks.
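
For instance, a minimal sketch of this mode, modeled on the repository's run_all_experiments.py (the config path below is a placeholder; pick any JSON file shipped under experiments/):

from farm.experiment import load_experiments, run_experiment

# One config file can define one or more experiments (placeholder path)
experiments = load_experiments("experiments/text_classification/my_config.json")
for experiment in experiments:
    run_experiment(experiment)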

Option 2: Stick together your own building blocks

https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/code_snippet_building_blocks.png

Use cases: Custom datasets, language models, prediction heads ...
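
A condensed sketch of this style for document classification. Class and argument names follow the examples/ scripts of this release and may differ slightly in other versions; data_dir and the label list are placeholders:

from farm.data_handler.data_silo import DataSilo
from farm.data_handler.processor import TextClassificationProcessor
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.optimization import initialize_optimizer
from farm.modeling.prediction_head import TextClassificationHead
from farm.modeling.tokenization import Tokenizer
from farm.train import Trainer
from farm.utils import initialize_device_settings

device, n_gpu = initialize_device_settings(use_cuda=True)

# 1. Tokenizer + Processor convert raw files into PyTorch Datasets
tokenizer = Tokenizer.load("bert-base-cased")
processor = TextClassificationProcessor(tokenizer=tokenizer,
                                        max_seq_len=128,
                                        data_dir="my_data_dir",            # placeholder
                                        label_list=["OTHER", "OFFENSE"])   # placeholder labels

# 2. DataSilo wraps the train/dev/test splits and their DataLoaders
data_silo = DataSilo(processor=processor, batch_size=32)

# 3. Language model + prediction head(s) = AdaptiveModel
language_model = LanguageModel.load("bert-base-cased")
prediction_head = TextClassificationHead(layer_dims=[768, 2])
model = AdaptiveModel(language_model=language_model,
                      prediction_heads=[prediction_head],
                      embeds_dropout_prob=0.1,
                      lm_output_types=["per_sequence"],
                      device=device)

# 4. Optimizer with warmup, then train
optimizer, warmup_linear = initialize_optimizer(model=model,
                                                learning_rate=2e-5,
                                                warmup_proportion=0.1,
                                                n_batches=len(data_silo.loaders["train"]),
                                                n_epochs=1)
trainer = Trainer(optimizer=optimizer, data_silo=data_silo, epochs=1,
                  n_gpu=n_gpu, warmup_linear=warmup_linear, device=device)
model = trainer.train(model)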

Metrics and parameters of your model training are automatically logged via MLflow. We provide a public MLflow server for testing and learning purposes. Check it out to see your own experiment results! Just be aware: we will delete all experiments on a regular schedule to ensure decent server performance for everybody!

2. Run Inference (API + UI)

FARM Inference UI

One docker container exposes a REST API (localhost:5000) and another one runs a simple demo UI (localhost:3000). You can use both of them individually and mount your own models. Check out the docs for details.
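
To try it locally, a minimal start via the repository's docker-compose.yml:

docker-compose up

# REST API:  http://localhost:5000
# Demo UI:   http://localhost:3000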

Core concepts

Model

AdaptiveModel = Language Model + Prediction Head(s). With this modular approach you can easily add prediction heads (multitask learning) and re-use them for different types of language models. (Learn more)

https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/adaptive_model_no_bg_small.jpg
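
As a sketch of that modularity: attaching a second head to the same language model turns the setup into multitask learning (language_model and device as in the training sketch above; head dimensions are placeholders):

from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.prediction_head import TextClassificationHead, TokenClassificationHead

doc_head = TextClassificationHead(layer_dims=[768, 2])     # per-sequence task
ner_head = TokenClassificationHead(layer_dims=[768, 13])   # per-token task

model = AdaptiveModel(language_model=language_model,
                      prediction_heads=[doc_head, ner_head],
                      embeds_dropout_prob=0.1,
                      lm_output_types=["per_sequence", "per_token"],
                      device=device)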

Data Processing

Custom datasets can be loaded by customizing the Processor, which converts raw data into PyTorch Datasets. Much of the heavy lifting is then handled behind the scenes, keeping it fast and simple to debug. (Learn more)

https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/data_silo_no_bg_small.jpg
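
The usual pattern for a custom dataset is to subclass Processor. A hedged skeleton follows; the three method names reflect the Processor interface as described in the docs of this release, so double-check them against your installed version:

from farm.data_handler.processor import Processor

class MyCustomProcessor(Processor):
    def file_to_dicts(self, file):
        # read the raw file and return a list of dicts,
        # e.g. [{"text": "...", "label": "..."}]
        ...

    def _dict_to_samples(self, dict, **kwargs):
        # tokenize one dict and wrap it into one or more Sample objects
        ...

    def _sample_to_features(self, sample):
        # turn a Sample into the padded feature tensors of the PyTorch Dataset
        ...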

Upcoming features

  • More pretrained models (RoBERTa, XLNet ...)
  • Improved functionality for the Question Answering task
  • Additional visualizations and statistics to explore and debug your model
  • SOTA adaptation strategies (Adapter Modules, Discriminative Fine-tuning ...)
  • Enabling large scale deployment for production

Acknowledgements

  • FARM is built upon parts of the great transformers repository from Huggingface. It utilizes their implementations of the BERT model and Tokenizer.
  • The original BERT model and paper were published by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.

Citation

As of now there is no published paper on FARM. If you want to use or cite our framework, please include a link to this repository. If you are working with the German BERT model, you can link to our blog post describing its training details and performance.
