train-viz

Visualizing the training process of feedforward neural networks via class probability maps.
(current version: 1.0)

Author: Siddharth Yadav (syntax-surgeon)

Files

trainviz.py

  • Contains version 1.0 of the train-viz program, which provides functions to generate and visualize class probability maps of a model during training

  • The visualization relies on generating class probability maps for the model at different epochs during training

  • The code follows a functional approach in which the generation and the visualization of the probability maps are handled by separate functions

  • Currently supports only network models built with the PyTorch library (https://github.com/pytorch/pytorch)

  • Please check the provided example.ipynb for a demonstration of the intended usage; a rough sketch of the overall flow is also given below
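
A minimal sketch of the intended flow, assuming make_map is called once per epoch inside a PyTorch training loop and plot_maps animates the collected maps. The call shapes below are assumptions reconstructed from the parameter and function names in this README; example.ipynb remains the authoritative reference:

```python
# Sketch only: the make_map/plot_maps call shapes are assumptions
# based on the names mentioned in this README, not the actual API.
import torch
import torch.nn as nn
from trainviz import make_map, plot_maps

# Toy 2-D, seven-class data standing in for the demo dataset
X = torch.randn(700, 2)
y = torch.randint(0, 7, (700,))

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 7))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

maps = []
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    # Assumed: make_map evaluates the model over a grid spanning the
    # input space and returns a class probability map for this epoch
    maps.append(make_map(model, X))

plot_maps(maps)  # assumed: animates the collected maps
```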

example.ipynb

  • Contains a demo example showing the intended use of the train-viz program

  • The example utilizes a randomly generated seven-class dataset (Figure 1) for the visualization process

  • The dataset and the general network description are inspired by Mike X Cohen's course "Deep Understanding of Deep Learning" on Udemy

Figure 1: The seven-class dataset used in the demo example


Features

  • Multiple visualization modes
    The training process can be visualized in two different modes (ref. map_type parameter of the make_map function; see the snippet after Figure 3):

    • Boundary mode - Emphasizes the boundary between the classes (Figure 2)

    • Region mode - Shows the probability regions of the classes themselves (Figure 3)

    Figure 2: Boundary mode visualization for the seven-class dataset


    Figure 3: Region mode visualization for the seven-class dataset

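
    As a rough illustration of selecting a mode (the "boundary" and "region" strings below are assumed values for map_type, not confirmed ones):

```python
# Assumed values for map_type; check make_map's docstring for the
# actual accepted strings.
boundary_map = make_map(model, X, map_type="boundary")  # emphasize class boundaries
region_map = make_map(model, X, map_type="region")      # show class probability regions
```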

  • Control over domain construction
    The construction of the domain (input space) can be controlled dynamically or scaled (ref. axial_gradation and square_axis_points parameters of the make_map function); a hedged example follows:
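
```python
# Semantics inferred from the parameter names only: axial_gradation is
# assumed to set the number of grid points along each axis, and
# square_axis_points to force an equal-extent (square) domain.
dense_map = make_map(model, X, axial_gradation=200, square_axis_points=True)
```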

  • Map interpolation
    Interpolation between maps can be used to increase the total number of maps. The rationale is as follows: to reduce memory utilization, class probability maps can be skipped for several epochs, but this can give the animation a "jagged" or "skipping" appearance. Interpolation can be used to smooth out these artifacts. (ref. interpolation_factor and interpolation_type parameters of the plot_maps function; see the snippet below)

  • Simplified epoch skipping
    Performing the aforementioned epoch skipping would normally require additional logic in the training loop, which may be difficult in some situations. To circumvent this, the make_map function implements simple epoch skipping when the epoch number/index is provided. (ref. epoch_num and epoch_freq parameters of the make_map function; see the sketch below)

Planned changes for the next version

  • Additional customization of the animation (e.g., title, margin adjustments, etc.)

  • Add support for other deep learning libraries (e.g., TensorFlow/Keras)

  • Improve region mode with dedicated quantitative colormaps

  • Support for exporting static images

  • Migrating to an object-oriented design


For issues, comments and suggestions:
