Don't train deep learning models blindfolded! Be impatient and look at each epoch of your training!
A live training loss plot in Jupyter Notebook for Keras, PyTorch and other frameworks. An open source Python package by Piotr Migdał et al. Open for collaboration! (Some tasks are as simple as writing code docstrings, so - no excuses! :))
from livelossplot.keras import PlotLossesCallback

model.fit(X_train, Y_train,
          epochs=10,
          validation_data=(X_test, Y_test),
          callbacks=[PlotLossesCallback()],
          verbose=0)  # verbose=0 turns off Keras's own text logging, so the live plot is the only output
So remember, log your loss!
- (The most FA)Q: Why not TensorBoard?
- A: Jupyter Notebook compatibility (for exploration and teaching). Simplicity of use.
To install this version from PyPI, type:
pip install livelossplot
To get the newest version from this repo (note that we are in the alpha stage, so there may be frequent updates), type:
pip install git+git://github.com/stared/livelossplot.git
Look at notebook files with full working examples:
- keras.ipynb - a Keras callback
- minimal.ipynb - a bare API, to use anywhere (see the sketch after this list)
- pytorch.ipynb - a bare API, as applied to PyTorch
- pytoune.ipynb - a PyToune callback (PyToune is a Keras-like framework for PyTorch)
- torchbearer.ipynb - an example using the built-in functionality from torchbearer (torchbearer is a model fitting library for PyTorch)
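For frameworks without a ready-made callback (PyTorch included), the bare API comes down to creating a plot object and feeding it a dictionary of metrics once per epoch. Below is a minimal sketch of that pattern; the train_one_epoch and evaluate helpers are placeholders for your own training and validation steps, and method names may differ between versions, so treat minimal.ipynb and pytorch.ipynb as the reference:

from livelossplot import PlotLosses

liveloss = PlotLosses()

for epoch in range(10):
    train_loss = train_one_epoch(model, train_loader)  # placeholder: one pass over the training set
    val_loss = evaluate(model, val_loader)              # placeholder: one pass over the validation set

    # metrics named with a 'val_' prefix are meant to appear alongside their training counterparts
    liveloss.update({
        'log loss': train_loss,
        'val_log loss': val_loss,
    })
    liveloss.draw()  # redraw the figure in the notebook cell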
Text logs are easy to produce, but with them it's easy to miss the most crucial information: is the model learning, doing nothing, or overfitting?
Visual feedback lets us keep track of the training process as it happens. Now there is such a tool for Jupyter Notebook.
If you want to get serious, use TensorBoard or, even better, Neptune - Machine Learning Lab (as it allows you to compare models, Kaggle-leaderboard style).
But what if you just want to train a small model in Jupyter Notebook? Here is a way to do so, using livelossplot
as a plug&play component.
It started as this gist. Since it became popular, I decided to rewrite it as a package.

Things to do:
- Add docstrings
- Add a Bokeh backend
- Add history saving
- Add connectors to TensorBoard and Neptune
If you want more functionality - open an Issue or, even better, prepare a Pull Request.