
KerasNLP


KerasNLP is a simple and powerful API for building Natural Language Processing (NLP) models within the Keras ecosystem.

KerasNLP provides modular building blocks following standard Keras interfaces (layers, metrics) that allow you to quickly and flexibly iterate on your task. Engineers working in applied NLP can leverage the library to assemble training and inference pipelines that are both state-of-the-art and production-grade.

KerasNLP can be understood as a horizontal extension of the Keras API — components are first-party Keras objects that are too specialized to be added to core Keras, but that receive the same level of polish as the rest of the Keras API.
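For example, KerasNLP metrics plug into the usual keras.metrics.Metric workflow. A minimal sketch, assuming keras_nlp.metrics.Perplexity is available (as in recent releases); the shapes and values below are illustrative:

import keras_nlp
import tensorflow as tf

# Perplexity follows the standard Keras Metric interface:
# update_state() / result() / reset_state().
perplexity = keras_nlp.metrics.Perplexity(from_logits=True)
y_true = tf.random.uniform((2, 5), maxval=10, dtype=tf.int32)  # token ids
y_pred = tf.random.normal((2, 5, 10))  # logits over a 10-token vocabulary
perplexity.update_state(y_true, y_pred)
print(perplexity.result())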

We are a new and growing project, and welcome contributions.

Quick Links

For everyone

For contributors

Installation

To install the latest official release:

pip install keras-nlp --upgrade

To install the latest unreleased changes to the library, we recommend using pip to install directly from the master branch on GitHub:

pip install git+https://github.com/keras-team/keras-nlp.git --upgrade
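To check which version is installed, you can print the package version (assuming the package exposes __version__, as recent releases do):

python -c "import keras_nlp; print(keras_nlp.__version__)"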

Quickstart

Tokenize text, build a tiny transformer, and train a single batch:

import keras_nlp
import tensorflow as tf
from tensorflow import keras

# Tokenize some inputs with a binary label.
vocab = ["[UNK]", "the", "qu", "##ick", "br", "##own", "fox", "."]
sentences = ["The quick brown fox jumped.", "The fox slept."]
tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
    vocabulary=vocab,
    sequence_length=10,
)
x, y = tokenizer(sentences), tf.constant([1, 0])

# Create a tiny transformer.
inputs = keras.Input(shape=(None,), dtype="int32")
outputs = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=len(vocab),
    sequence_length=10,
    embedding_dim=16,
)(inputs)
outputs = keras_nlp.layers.TransformerEncoder(
    num_heads=4,
    intermediate_dim=32,
)(outputs)
outputs = keras.layers.GlobalAveragePooling1D()(outputs)
outputs = keras.layers.Dense(1, activation="sigmoid")(outputs)
model = keras.Model(inputs, outputs)

# Run a single batch of gradient descent.
model.compile(optimizer="adam", loss="binary_crossentropy", jit_compile=True)
model.train_on_batch(x, y)
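Once trained, the same tokenizer and model serve inference end to end. A minimal sketch continuing the example above (the input sentence is illustrative):

# Tokenize new text and score it with the trained model.
test_x = tokenizer(["The quick fox."])
preds = model.predict(test_x)  # sigmoid probabilities, shape (1, 1)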

For more in-depth guides and examples, visit https://keras.io/keras_nlp/.

Compatibility

We follow Semantic Versioning and plan to provide backwards compatibility guarantees for both code and saved models built with our components. While we continue with pre-release 0.y.z development, we may break compatibility at any time, and APIs should not be considered stable.

Disclaimer

KerasNLP provides access to pre-trained models via the keras_nlp.models API. These pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind. The following underlying models are provided by third parties, and subject to separate licenses: DistilBERT, RoBERTa, XLM-RoBERTa, GPT-2.
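As a sketch of that API, assuming a release that includes task models with a from_preset constructor (the model class and preset name here are illustrative):

import keras_nlp

# Load a pre-trained classifier from a named preset; preprocessing
# (tokenization, packing) is attached, so raw strings work as input.
classifier = keras_nlp.models.BertClassifier.from_preset("bert_tiny_en_uncased_sst2")
preds = classifier.predict(["What an amazing movie!"])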

Citing KerasNLP

If KerasNLP helps your research, we would appreciate a citation. Here is the BibTeX entry:

@misc{kerasnlp2022,
  title={KerasNLP},
  author={Watson, Matthew and Qian, Chen and Zhu, Scott and Chollet, Fran\c{c}ois and others},
  year={2022},
  howpublished={\url{https://github.com/keras-team/keras-nlp}},
}

Acknowledgements

Thank you to all of our wonderful contributors!
