tavolo aims to package together valuable modules and functionality written for TensorFlow's high-level Keras API, for ease of use.
You see, the deep learning world is moving fast, and new ideas keep on coming.
tavolo's API is straightforward and adopting its modules is as easy as it gets.
In tavolo, you'll find implementations of basic layers like PositionalEncoding, as well as non-layer utilities that
can ease development, like the LearningRateFinder.
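A layer like PositionalEncoding drops straight into a Keras model like any other layer. The sketch below is only an illustration: the module path (tvl.embeddings) and the constructor arguments shown are assumptions, so check the tavolo API reference for the exact signature.

import tensorflow as tf
import tavolo as tvl

# Sketch only: module path and constructor arguments of PositionalEncoding
# are assumptions here - consult the tavolo API reference for the real signature.
max_len, vocab_size, embedding_size = 100, 10000, 64  # placeholder values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len),
    tvl.embeddings.PositionalEncoding(max_sequence_length=max_len, embedding_dim=embedding_size),  # <--- Add positional information
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid')])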
For a fuller example, if we wanted to add a Yang-style attention head to our model and then look for the optimal
learning rate, it would look something like this:
import tensorflow as tf
import tavolo as tvl

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len),
    tvl.seq2vec.YangAttention(n_units=64),  # <--- Add Yang style attention
    tf.keras.layers.Dense(n_hidden_units, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')])

model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.BinaryCrossentropy())

# Run learning rate range test
lr_finder = tvl.learning.LearningRateFinder(model=model)
learning_rates, losses = lr_finder.scan(train_data, train_labels, min_lr=0.0001, max_lr=1.0, batch_size=128)

# Plot the results to choose your learning rate
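The scan returns the learning rates it tried together with the corresponding losses, so picking a learning rate usually comes down to plotting one against the other. A minimal sketch using matplotlib (not part of tavolo) could look like this:

import matplotlib.pyplot as plt

# Loss vs. learning rate on a log scale; a common heuristic is to pick a value
# slightly below the point where the loss is decreasing fastest.
plt.plot(learning_rates, losses)
plt.xscale('log')
plt.xlabel('Learning rate')
plt.ylabel('Loss')
plt.show()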
Want to contribute? Please read our Contributing guide.