Collection of deep learning modules and functionality for the TensorFlow-Keras framework
Latest commit c2db3a3 Feb 1, 2020


tavolo aims to package together valuable modules and functionality written for TensorFlow's high-level Keras API, for ease of use.
The deep learning world moves fast, and new ideas keep coming.
tavolo gathers implementations of these useful ideas from the community (by contribution, from Kaggle, etc.) and makes them accessible in a single PyPI-hosted package that complements the tf.keras module.



tavolo's API is straightforward, and adopting its modules is as easy as it gets.
In tavolo, you'll find implementations ranging from basic layers like PositionalEncoding to complex modules like the Transformer's MultiHeadedAttention. You'll also find non-layer implementations that can ease development, like the LearningRateFinder.
For example, if we wanted to add a Yang-style attention mechanism to our model and look for the optimal learning rate, it would look something like:
import tensorflow as tf
import tavolo as tvl

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len),
    tvl.seq2vec.YangAttention(n_units=64),  # <--- Add Yang style attention
    tf.keras.layers.Dense(n_hidden_units, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')])

model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.BinaryCrossentropy())

# Run learning rate range test
lr_finder = tvl.learning.LearningRateFinder(model=model)

learning_rates, losses = lr_finder.scan(train_data, train_labels, min_lr=0.0001, max_lr=1.0, batch_size=128)

# Plot the results to choose your learning rate
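Once the scan completes, the loss-versus-learning-rate curve is typically inspected on a log-scaled axis, and a learning rate is picked at (or slightly before) the region of steepest loss descent. The pick can also be made programmatically with plain NumPy on the two arrays `scan` returns. A minimal sketch, using synthetic values in place of a real `scan` output (the heuristic shown is a common rule of thumb, not part of tavolo's API):

```python
import numpy as np

# Synthetic stand-ins for the (learning_rates, losses) pair returned by
# lr_finder.scan() -- a toy U-shaped loss curve with its minimum near lr=0.01
learning_rates = np.logspace(-4, 0, num=50)
losses = (np.log10(learning_rates) + 2) ** 2 + 0.1

# Simplest heuristic: take the learning rate at the minimum observed loss,
# then back off by an order of magnitude for stable training
best_lr = float(learning_rates[int(np.argmin(losses))])
suggested_lr = best_lr / 10
```

With real `scan` output, smoothing the loss curve first (e.g. a moving average) usually gives a more robust pick, since raw per-batch losses are noisy.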


Want to contribute? Please read our Contributing guide.