Tefla is a deep learning mini-framework that sits on top of TensorFlow. Its primary goal is to enable simple, stable, end-to-end deep learning. To that end, Tefla supports:
- Data setup.
- Batch preprocessing and data layout.
- A model definition DSL.
- A training config DSL.
- Data loading with data-augmentation and rebalancing.
- Training with support for visualization, logging, custom metrics, and, most importantly, resumption of training from an earlier epoch with a new learning rate.
- Pluggable learning rate decay policies.
- Stability and solidity, which translate to days and weeks of training without memory blowups or epoch-time degradation.
- Tensorboard visualization of epoch metrics, augmented images, model graphs, and layer activations, weights and gradients.
- Prediction (with ensembling via mean score or voting).
- Metrics on prediction outputs.
- First-class support for transfer learning and fine-tuning based on VGG16, ResNet50, ResNet101, and ResNet152.
- Serving of models via a REST API (coming soon).
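The mean-score and voting ensembling strategies listed above can be sketched generically. This is an illustration of the two schemes, not Tefla's actual API; the function names and array layout (models x samples x classes) are assumptions for the example.

```python
import numpy as np

def ensemble_mean(prob_sets):
    """Mean-score ensembling: average class probabilities across
    models, then pick the argmax per sample.

    prob_sets has shape (n_models, n_samples, n_classes).
    (Illustrative only -- not Tefla's interface.)"""
    return np.mean(prob_sets, axis=0).argmax(axis=1)

def ensemble_vote(prob_sets):
    """Voting ensembling: each model casts a vote for its argmax
    class; the majority class wins per sample."""
    votes = np.argmax(prob_sets, axis=2)          # (n_models, n_samples)
    n_classes = prob_sets.shape[2]
    # Count votes per class for every sample column.
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)                  # (n_samples,)

# The two strategies can disagree: two models mildly prefer class 0,
# one model strongly prefers class 1.
probs = np.array([[[0.55, 0.45]],
                  [[0.55, 0.45]],
                  [[0.01, 0.99]]])
print(ensemble_mean(probs))  # averaged scores favor class 1
print(ensemble_vote(probs))  # majority of votes favors class 0
```

Mean-score ensembling weights confident models more heavily, while voting treats every model equally; which behaves better depends on how well-calibrated the individual models' probabilities are.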
Tefla ships with command-line scripts for batch preprocessing, training, prediction, and metrics, supporting a simple yet powerful deep learning workflow.
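A pluggable learning-rate decay policy, as mentioned in the feature list, can be sketched as a small object that maps an epoch number to a learning rate. The class name, constructor, and `lr_for_epoch` method below are hypothetical, chosen for this illustration; Tefla's actual policy interface may differ.

```python
class StepDecayPolicy:
    """Hypothetical step-decay policy: multiply the base learning rate
    by a factor once training passes each scheduled epoch.

    Illustration only -- not Tefla's actual interface."""

    def __init__(self, schedule, base_lr=0.01):
        # schedule maps epoch -> multiplier applied from that epoch on,
        # e.g. {30: 0.1, 60: 0.01} cuts the rate 10x at epoch 30
        # and 100x at epoch 60.
        self.schedule = schedule
        self.base_lr = base_lr

    def lr_for_epoch(self, epoch):
        lr = self.base_lr
        for e in sorted(self.schedule):
            if epoch >= e:
                lr = self.base_lr * self.schedule[e]
        return lr

policy = StepDecayPolicy({30: 0.1, 60: 0.01}, base_lr=0.01)
print(policy.lr_for_epoch(0))   # base rate
print(policy.lr_for_epoch(45))  # after the first cut
```

Keeping the schedule in a standalone policy object is what makes resuming from an earlier epoch with a different learning rate straightforward: the trainer only needs to ask the policy for the rate at the resume epoch.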
Documentation is coming soon. For now, the MNIST examples can help you get started.
Tefla is very much a work in progress. Contributions are welcome!
An interesting fork of Tefla is available at www.github.com/n3011/tefla. Both projects are evolving independently, with cross-pollination of ideas.