Formal Conceptual Views in Neural Networks

This is the repository for the paper Formal Conceptual Views in Neural Networks.

With the present work, we introduce two notions of conceptual views of a neural network, specifically a many-valued and a symbolic view. Both provide novel analysis methods that enable a human AI analyst to gain deeper insights into the knowledge captured by the neurons of a network.

pics/pipeline.png

We test the expressivity of our novel views through different experiments on the ImageNet and Fruit-360 data sets. Furthermore, we show to what extent the views allow us to quantify the conceptual similarity of different learning architectures. Finally, we demonstrate how conceptual views can be applied for abductive learning of human-comprehensible rules from neurons. In summary, our work contributes to the highly relevant task of globally explaining neural network models.

Table of Contents

• Requirements
• Data
• Setup
• Evaluate Many-Valued Conceptual View
• Symbolic Conceptual View

Requirements

This project uses Python for the machine learning parts and the conexp-clj framework for the methods from formal concept analysis.

Python

The required packages and their versions can be found in requirements.txt. We used Python version 3.7.3.

pip install -r requirements.txt

Clojure

We used Clojure version 1.10.1 and conexp-clj version 2.3.0. There are two options for running the Clojure code. The first is to build the most recent version from the repository using Leiningen.

git clone https://github.com/tomhanika/conexp-clj
cd conexp-clj
lein uberjar

A standalone jar can then be found at /builds/uberjar/conexp-clj-VERSION-SNAPSHOT-standalone.jar; executing it starts a REPL in which the Clojure code can be run.

java -jar /builds/uberjar/conexp-clj-VERSION-SNAPSHOT-standalone.jar

Alternatively, a recent executable can be downloaded from the Maven repository.

Data

There are three data sources that we use. The first is the Fruit-360 data set, which can be downloaded using the init-data.sh script. The data set is then extracted into the image-data/fruit360 directory.

The second data set is the ImageNet data set from the visual recognition challenge. We only use its test set, which should be extracted to image-data/imagenet/test.

Setup

First, we need to compute the many-valued and symbolic conceptual views for the ImageNet data set. The code can be found in imagenet_conceptual_views.org. This results in a representation of the objects and classes in a single pseudo-metric space.
python src/tangled/imagenet_conceptual_views.py
Next, we need to compute the many-valued and symbolic conceptual views for the Fruit-360 data set. The model files were split using the Linux command split. The code can be found in fruit_conceptual_views.org.
python src/tangled/fruit_conceptual_views.py
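The tangled scripts above perform the actual computation; the following is only a minimal sketch of the idea behind a many-valued view, assuming a trained Keras model whose last hidden layer holds the neurons of interest. The model path and layer index are illustrative assumptions, not the repository's exact code.

import numpy as np
import tensorflow as tf

# Illustrative assumption: a trained Keras model stored at this path.
model = tf.keras.models.load_model("models/fruit_model.h5")
# Map inputs to the activations of the last hidden layer (the analyzed neurons).
view_model = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)

def many_valued_view(images, labels, n_classes):
    # One row per class: the mean activation vector of its objects.
    activations = view_model.predict(images)              # (n_samples, n_neurons)
    view = np.zeros((n_classes, activations.shape[1]))
    for c in range(n_classes):
        view[c] = activations[labels == c].mean(axis=0)
    return view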

The code which we used to train these models is located in train_fruits.org.

Evaluate Many-Valued Conceptual View

Statistics on the ImageNet models can be computed using statistics.org.
python src/tangled/statistics.py

pics/statistics/statistics.png

We evaluated the quality of the many-valued conceptual views using the fidelity between a simple one-nearest-neighbor classifier in the pseudo-metric space and the original model.
python src/tangled/fidelity.py
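Conceptually, the fidelity score can be sketched as follows. This is a simplified sketch, assuming the per-object vectors, the per-class vectors of the view, and the original model's predictions are already available; the repository's script may differ in details.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fidelity(object_vectors, model_predictions, class_vectors):
    # Fidelity = fraction of objects on which the 1-NN classifier in the
    # pseudo-metric space agrees with the original model's prediction.
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(class_vectors, np.arange(len(class_vectors)))   # one prototype per class
    nn_predictions = knn.predict(object_vectors)
    return float(np.mean(nn_predictions == model_predictions))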

Fidelity scores for the ImageNet models:

pics/fidelity/imagenet_fidelity.png

Fidelity scores for the Fruit-360 models:

pics/fidelity/fruit_fidelity.png

The pseudo-metric space allows for comparing models using the Gromov-Wasserstein distance. We compare the resulting similarities with the pairwise fidelity of the original models.
python src/tangled/imagenet_similarity.py

pics/similarity/imagenet_similarity.png
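A minimal sketch of this comparison, assuming the POT (Python Optimal Transport) package and two class-representation matrices as input; the repository's script may compute the distance differently.

import ot
from scipy.spatial.distance import cdist

def gw_distance(view_a, view_b):
    # view_a, view_b: class representations of two models (n_classes x n_neurons).
    C1 = cdist(view_a, view_a)              # intra-model pairwise distances
    C2 = cdist(view_b, view_b)
    p = ot.unif(C1.shape[0])                # uniform weights over classes
    q = ot.unif(C2.shape[0])
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, 'square_loss')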

Symbolic Conceptual View

We conducted an ablation study on the influence of the activation function and the number of neurons. For each parameter setting, we did ten training runs of the same architecture.
python src/tangled/ablation.py
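The parameter grid can be sketched roughly as follows; the architectures, grid values, and training code in ablation.py are the authoritative version, and the values below are assumptions for illustration.

import tensorflow as tf

N_CLASSES, INPUT_SHAPE = 131, (100, 100, 3)        # assumed Fruit-360 setting

def build_model(n_neurons, activation):
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=INPUT_SHAPE),
        tf.keras.layers.Dense(n_neurons, activation=activation),   # analyzed layer
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

for activation in ["tanh", "relu", "sigmoid"]:      # assumed grid values
    for n_neurons in [16, 32, 64, 128]:
        for run in range(10):                       # ten runs per setting
            model = build_model(n_neurons, activation)
            model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
            # model.fit(train_images, train_labels, ...)  # data loading omitted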

We evaluated their results using the fidelity of the views.

pics/ablation/ablation_fidelity.png

We also evaluated the shape of the views, where we identified the tanh activation function as producing the clearest visible separation between negative and positive values and the highest fidelity scores.

pics/ablation/ablation_views.png
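This separation suggests a straightforward scaling from the many-valued to the symbolic view: threshold each (tanh) activation at zero. A minimal sketch of the idea; the repository's actual scaling may differ.

import numpy as np

def symbolic_view(many_valued_view):
    # Binary incidence: an object has the attribute "neuron n positive"
    # iff its activation on neuron n is greater than zero.
    return (many_valued_view > 0).astype(int)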

We evaluated the quality of the symbolic conceptual views using the fidelity between a simple one-nearest-neighbor classifier on the symbolic views and the original model.
python src/tangled/fidelity.py

Fidelity for the symbolic conceptual views on the ImageNet models:

pics/fidelity/imagenet_fidelity_symbolic.png

Fidelity for the symbolic conceptual views on the Fruit-360 models:

pics/fidelity/fruit_fidelity.png

The code can be found in formal_conceptual_views.org and should be executed in order in a Clojure REPL.

The first result is the number of formal concepts.

pics/fca/concept_sizes.png
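The concepts themselves are computed with conexp-clj. Purely for illustration, the number of formal concepts of a small binary context can be counted in Python by closing the object intents under intersection; this is a naive sketch, not the repository's code.

import numpy as np

def count_formal_concepts(context):
    # context: binary matrix, rows = objects, columns = attributes.
    n_attributes = context.shape[1]
    object_intents = [frozenset(np.flatnonzero(row)) for row in context]
    intents = {frozenset(range(n_attributes))}        # top intent (all attributes)
    changed = True
    while changed:                                    # close under intersection
        changed = False
        for candidate in [a & b for a in intents for b in object_intents]:
            if candidate not in intents:
                intents.add(candidate)
                changed = True
    return len(intents)                               # one concept per intent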

Second, we can compute a similarity based on formal concepts. This similarity is based on the concepts in which two fruits co-occur, for example the fruits Plum, Cherry, Apple Pink Lady, and Apple Red in the VGG16 transfer-learned model.

pics/fca/formal_concept_sim.png
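One straightforward way to realize this co-occurrence idea, given the list of concept extents, is sketched below; the conexp-clj code may normalize differently.

def concept_similarity(extents, fruit_a, fruit_b):
    # extents: list of sets of object (fruit) names, one per formal concept.
    both = sum(1 for extent in extents if fruit_a in extent and fruit_b in extent)
    either = sum(1 for extent in extents if fruit_a in extent or fruit_b in extent)
    return both / either if either else 0.0

# e.g. concept_similarity(extents, "Plum", "Cherry")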

Using formal concept analysis, we can zoom in on individual fruits in the model and examine how they relate to other fruits in a hierarchical manner. For this, we employ the concept lattice.

pics/fca/concept_lattice_plum_vgg.png

To derive explanations for the information captured by the neurons, we employ subgroup detection for visual and botanical taxon features. The code can be found in abductive_explanation.org.
python src/tangled/abductive_explanation.py
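As a rough illustration of the idea (not the repository's implementation), conjunctions of visual and botanical taxon features can be scored by weighted relative accuracy with respect to a neuron being active. The data frame and column names below are assumptions.

from itertools import combinations

def wracc(cover, target):
    # Weighted relative accuracy of a subgroup, given boolean masks.
    return (cover.sum() / len(target)) * (target[cover].mean() - target.mean())

def best_subgroups(data, target_col, max_depth=2, top_k=5):
    # data: pandas DataFrame of binary feature columns plus a target column,
    # e.g. "neuron_13_active"; returns the top_k feature conjunctions by WRAcc.
    features = [c for c in data.columns if c != target_col]
    target = data[target_col].astype(bool)
    results = []
    for depth in range(1, max_depth + 1):
        for combo in combinations(features, depth):
            cover = data[list(combo)].all(axis=1)     # conjunction of features
            if cover.any():
                results.append((wracc(cover, target), combo))
    return sorted(results, reverse=True)[:top_k]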

We provide explanations for neuron 13, as well as representations for the apple taxon and the color orange.

pics/subgroup/subgroup.png
