Cat Classifier

This is a tiny experiment to visualize the weights of an image classifier as graphical plots.

The image classifier in this experiment is a single neuron (yes, a single neuron) that uses the sigmoid function as its activation function.
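For concreteness, here is a minimal sketch of what such a single-neuron classifier computes. The function and variable names below are illustrative, not the actual names used in model.py:

    import numpy as np

    def sigmoid(z):
        # Logistic activation: maps any real number into (0, 1).
        return 1 / (1 + np.exp(-z))

    def predict(w, b, x):
        # x is a flattened 64x64x3 image vector; w is the weight vector
        # and b is the bias of the single neuron. The image is classified
        # as a cat if the activation is at least 0.5.
        return sigmoid(np.dot(w, x) + b) >= 0.5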

Although the model trained here reaches about 70% to 80% accuracy, accuracy is not the primary concern of this experiment. The primary concern is to visualize the weights of a trained model and see whether they offer any insight into how the model makes decisions.

Development Setup

The development steps here are written for a Linux or Mac system. All steps mentioned below assume that Python 3 is installed and you are at the top-level directory of this project.

  1. Enter the following command to create a Python 3 virtual environment with numpy, matplotlib and h5py.

    make venv
    
  2. Enter the following command to activate the virtual environment.

    . venv
    
  3. Enter the following command to train a model, test it and write the model to a file named model.json.

    ./model.py
    

    To alter the learning parameters, look for the train() function in this file, edit the values of the count and alpha variables and run the script again. (A rough sketch of such a training loop appears after this list.)

  4. Enter the following command to classify arbitrary 64x64 PNG images in the extra-set directory. You can copy any image into this directory, as long as it is a 64x64 PNG, and it will be classified.

    ./classify.py
    
  5. To generate graphical plots of the learned model, enter the following command.

    ./plotmodel.py
    

    This generates four weight plots: wr.png, wg.png, wb.png and w.png. These are plots of the weights for the R, G and B channels, respectively, and an overall plot of all weights.

    It also generates four activation plots: ar.png, ag.png, ab.png and a.png. These are plots of the activations for the R, G and B channels, respectively, and an overall plot of all activations.

    These plots are explained in a little more detail in the next few sections.
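Step 3 above trains the model using the count and alpha parameters. Here is a rough sketch of such a gradient descent training loop, assuming a cross-entropy loss; the actual train() function in model.py may differ in its details and default values:

    import numpy as np

    def train(x, y, count=2000, alpha=0.005):
        # x: (features, samples) matrix of flattened images.
        # y: (1, samples) row of 0/1 labels.
        # count: number of iterations; alpha: learning rate.
        n, m = x.shape
        w = np.zeros((n, 1))
        b = 0.0
        for _ in range(count):
            a = 1 / (1 + np.exp(-(np.dot(w.T, x) + b)))  # sigmoid activations
            dw = np.dot(x, (a - y).T) / m  # gradient of the loss w.r.t. w
            db = np.sum(a - y) / m         # gradient of the loss w.r.t. b
            w -= alpha * dw
            b -= alpha * db
        return w, b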

Weight Plots

Here are the graphical plots of the red, green and blue channel weights. The fourth image is a plot of the weights of all the channels.

[Plots: R channel weights, G channel weights, B channel weights, all channel weights]

Activation Plots

Similarly, here are the activation plots for the red, green and blue channels, and for all channels combined.

[Plots: R channel activations, G channel activations, B channel activations, all channel activations]

Note: These plots look tiny because they are 64x64 images. To zoom in, open an image in a new tab and zoom there.

Description of Plots

Here is a brief description of each weight plot.

  1. The first plot shows the weights for the red channel. The red area is where the neuron assigns positive weights to the red component of the pixels. The cyan area is where the neuron assigns negative weights to the red component of the pixels. Note that red, orange, yellow, purple, gray and white are examples of colors with a positive red component, so these weights affect the red component of such colors.

    All positive weights are normalized with respect to the maximum positive weight. All negative weights are normalized with respect to the minimum negative weight. (A sketch of this normalization appears after this list.)

  2. The second plot shows the weights for the green channel. The green area represents positive weights and the magenta area represents negative weights. The weights are normalized as explained in the first point. It is easy to see that the model has associated strongly negative weights with the presence of a green component around the edges of the image, perhaps because such images are typical of landscapes.

  3. The third plot shows the weights for the blue channel. The blue area represents positive weights and the yellow area represents negative weights. The weights are normalized as explained in the first point.

  4. The fourth plot shows the weights for all the channels. All weights are normalized to positive real numbers between 0 and 1, so the plot is more colorful but the negative weights are not easy to visualize.

    I could not come up with a clever way to color-code both the positive and negative weights of each channel in a single combined plot. That is why I simply plot all (r, g, b) weights normalized to positive real numbers.
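The per-channel normalization described in the first point can be sketched as follows. This illustrates the scheme, not the exact code in plotmodel.py:

    import numpy as np

    def normalize_channel(w):
        # Scale positive weights by the maximum positive weight, into (0, 1],
        # and negative weights by the magnitude of the minimum (most
        # negative) weight, into [-1, 0).
        out = np.zeros_like(w, dtype=float)
        pos, neg = w > 0, w < 0
        if pos.any():
            out[pos] = w[pos] / w[pos].max()
        if neg.any():
            out[neg] = w[neg] / -w[neg].min()
        return out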

Here is a brief description of each activation plot.

  1. The first plot shows the activations of the red component of each pixel of the input image. The activation of each pixel's red component is computed separately and displayed as a graphical plot. Notice the similarity between the bright red areas in the weight plot and the bright red areas in the activation plot. Similarly, notice the similarity between the cyan areas in the weight plot and the dark areas in the activation plot. (A sketch of this computation appears after this list.)

  2. The second plot shows the activations of the green component of each pixel of the input image.

  3. The third plot shows the activations of the blue component of each pixel of the input image.

  4. The fourth plot combines the red, green and blue components of the first three activation plots.
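As a rough sketch, a per-channel activation plot can be computed by applying each pixel's weight to the corresponding pixel component of an input image and passing the result through the sigmoid. plotmodel.py may compute this differently:

    import numpy as np

    def channel_activations(w_channel, x_channel):
        # w_channel and x_channel are 64x64 arrays holding the weights and
        # pixel values of one channel. Returns a 64x64 map of per-pixel
        # activations in (0, 1).
        return 1 / (1 + np.exp(-(w_channel * x_channel)))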

Training Images

[Training Image 0 through Training Image 208 displayed here.]

Test Results

[Test images 0 to 49 are displayed here, each captioned with its predicted label and whether the prediction was correct:]

     0: cat (pass)     1: cat (pass)     2: cat (pass)     3: cat (pass)     4: cat (pass)
     5: cat (fail)     6: cat (pass)     7: cat (pass)     8: cat (pass)     9: cat (pass)
    10: cat (pass)    11: cat (pass)    12: cat (pass)    13: cat (fail)    14: not (pass)
    15: cat (pass)    16: not (pass)    17: cat (pass)    18: not (fail)    19: not (fail)
    20: cat (pass)    21: not (pass)    22: not (pass)    23: cat (pass)    24: cat (pass)
    25: cat (pass)    26: cat (pass)    27: not (pass)    28: not (fail)    29: cat (fail)
    30: cat (pass)    31: cat (pass)    32: cat (pass)    33: cat (pass)    34: cat (fail)
    35: not (pass)    36: not (pass)    37: cat (pass)    38: cat (fail)    39: not (pass)
    40: cat (pass)    41: cat (pass)    42: cat (pass)    43: not (pass)    44: cat (fail)
    45: not (pass)    46: cat (pass)    47: cat (pass)    48: cat (pass)    49: not (pass)

Test Accuracy

Out of 50 test samples, 41 were correctly classified.

The test accuracy is 82.00%.

Alter the count and alpha variables in the train() function of model.py to change the test accuracy.

Training and Test Sets

The training images and test images are in the train-set and test-set directories.

The training and test data were obtained from a few HDF5 files shared by Andrew Ng. The original HDF5 files are in the h5data directory.

The script h5toimg.py converts this data into separate PNG image files and writes them to the train-set and test-set directories.
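The conversion can be sketched roughly as follows. The HDF5 file name and dataset key below are assumptions for illustration, not confirmed contents of the h5data directory:

    import h5py
    import matplotlib.pyplot as plt

    # Read the training images from an HDF5 file and write each one out
    # as a separate 64x64 PNG file (file and dataset names hypothetical).
    with h5py.File('h5data/train.h5', 'r') as f:
        images = f['train_set_x'][:]  # (m, 64, 64, 3) array of uint8 pixels
    for i, img in enumerate(images):
        plt.imsave('train-set/%d.png' % i, img)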

What More?

By tuning the learning parameters (number of iterations and learning rate), it is possible to alter the accuracy. The accuracy was observed to be between 70% and 80% for most tests.

An accuracy of 80% is not great: one out of every five predictions is wrong. But it is not bad either, considering that this model uses just a single neuron.

Here is a similar experiment that shows the activation plots of a multi-layer neural network: mycask/cat-classifier-dnn.
