
This Looks Like That There: Interpretable neural networks for image tasks when location matters


An interpretable neural network (ProtoLNet) based on trained prototypes extends the ProtoPNet developed by Chen et al. (2019) to consider absolute location. The network's decision-making process is interpreted by examining which patches of the input image look like specific prototypes (i.e., this looks like that there). The interpretability of the prototype architecture is demonstrated through applications to two-dimensional geophysical fields.

This work has been accepted for publication in the Journal of Advances in Modeling Earth Systems (JAMES).

TensorFlow Code


This code was written in Python 3.9.4 with TensorFlow 2.5.0 and NumPy 1.20.1.

Within the ProtoLNet code:

  • experiment_settings.py specifies the experiment parameters used throughout the code (a hypothetical sketch of such settings follows this list)
  • _pretrain_CNN.ipynb pre-trains the base CNN if desired
  • _main_CNN.ipynb trains the ProtoLNet
  • _vizPrototypes.ipynb computes the prototypes and displays them
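
For orientation, here is a hypothetical sketch of the kind of parameter dictionary that experiment_settings.py might centralize. The names and values below are illustrative assumptions, not the repository's actual settings.

```python
# Hypothetical sketch of an experiment-settings dictionary; the actual
# parameter names and values are defined in experiment_settings.py.
EXPERIMENTS = {
    "quadrants": {
        "n_classes": 3,                # e.g., the idealized quadrants use case
        "n_prototypes_per_class": 2,
        "base_cnn_trainable": False,   # freeze or train the base CNN weights
        "learning_rate": 1e-3,
        "n_epochs": 50,
        "random_seed": 42,
    },
}

settings = EXPERIMENTS["quadrants"]    # experiment selected by the scripts
```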

The ProtoLNet follows the architecture of the ProtoPNet of Chen et al. (2019) except with the addition of a location scaling grid within the prototype layer (see architecture schematic below).


Figure 1: Schematic depicting the ProtoLNet architecture. Example and internally consistent dimensions of the tensors at each step are given in gray brackets, although the specific dimensions vary for each use case. Pink colors denote components of the network that are trained (learned), while gray and black colors denote components that are directly computed. The weights within the base CNN (blue shading) can either be trained or frozen.
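
To make the location scaling grid concrete, here is a minimal, hypothetical TensorFlow sketch of a prototype layer that scales ProtoPNet-style similarity maps by a learned per-prototype location grid before max pooling. The layer name, shapes, and similarity function below are illustrative assumptions, not the repository's implementation.

```python
import tensorflow as tf

class PrototypeLayer(tf.keras.layers.Layer):
    """Sketch: prototype similarities scaled by a learned location grid."""

    def __init__(self, n_prototypes, proto_depth, grid_shape, **kwargs):
        super().__init__(**kwargs)
        self.n_prototypes = n_prototypes
        # Prototype vectors compared against every 1x1 patch of the feature map.
        self.prototypes = self.add_weight(
            name="prototypes", shape=(n_prototypes, proto_depth),
            initializer="random_uniform", trainable=True)
        # One learned (H, W) scaling grid per prototype: the "there" in
        # "this looks like that there".
        self.location_grid = self.add_weight(
            name="location_grid", shape=(n_prototypes, *grid_shape),
            initializer="ones", trainable=True)

    def call(self, features):
        # features: (batch, H, W, D) feature map from the base CNN.
        f = tf.expand_dims(features, axis=3)                # (B, H, W, 1, D)
        p = tf.reshape(self.prototypes,
                       (1, 1, 1, self.n_prototypes, -1))    # (1, 1, 1, P, D)
        dist2 = tf.reduce_sum(tf.square(f - p), axis=-1)    # (B, H, W, P)
        # ProtoPNet-style similarity: large where a patch resembles a prototype.
        sim = tf.math.log((dist2 + 1.0) / (dist2 + 1e-4))
        # Scale each prototype's similarity map by its location grid.
        grid = tf.transpose(self.location_grid, (1, 2, 0))  # (H, W, P)
        sim = sim * grid[tf.newaxis, ...]
        # Keep the best-matching location for each prototype.
        return tf.reduce_max(sim, axis=(1, 2))              # (B, P)
```

The resulting (batch, n_prototypes) similarity scores would then feed a final dense layer that produces the class logits, as in the schematic above.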

Example use case


Figure 2: Class composites and example samples for the idealized quadrants use case. (a-c) Composites of all samples by class label, and (d-f) one example sample for each class.


Figure 3: Three example predictions by the network for the idealized quadrants use case, along with the two winning prototypes for each sample and the associated location scaling grid.

General Notes


Python Environment

The following Python environment was used to implement this code:

- conda create --name env-tf2.5-cartopy
- conda activate env-tf2.5-cartopy
- conda install anaconda
- pip install tensorflow==2.5 silence-tensorflow memory_profiler  
- conda install -c conda-forge cartopy
- pip uninstall shapely
- pip install --no-binary :all: shapely
- conda install -c conda-forge matplotlib cmocean xarray netCDF4 
- conda install -c conda-forge cmasher cmocean icecream palettable seaborn
- pip install keras-tuner --upgrade

Credits

This work is a collaborative effort between Dr. Elizabeth A. Barnes and Dr. Randal J. Barnes. In addition, Dr. Zane Martin and Jamin Rader contributed to the two use cases and the writing of the scientific article. The ProtoPNet of Chen et al. (2019) is the backbone of this work.

Funding sources

This work was funded, in part, by the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) under NSF grant ICER-2019758. ZKM recognizes support from the National Science Foundation under Award No. 2020305. JKR recognizes support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award No. DE-SC0020347.

Fundamental references for this work

  • Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., & Su, J. K. (2019). This Looks Like That: Deep Learning for Interpretable Image Recognition. Advances in Neural Information Processing Systems 32 (NeurIPS 2019).

License

This project is licensed under the MIT License.

MIT © Elizabeth A. Barnes
