This project is a subproject of a larger, older project called CAI, and is a sister project to the Pascal-based CAI NEURAL API.
You'll need Python and pip. Installing via shell is simple:

```shell
git clone https://github.com/joaopauloschuler/k-neural-api.git k
cd k && pip install .
```
Place this at the top of your Google Colab / Jupyter Notebook:

```python
import os
if not os.path.isdir('k'):
  !git clone https://github.com/joaopauloschuler/k-neural-api.git k
else:
  !cd k && git pull
!cd k && pip install .
```
- A number of new layer types.
- `cai.util.create_image_generator`: this wrapper has extremely well tested default parameters for image classification data augmentation. Getting better image classification accuracy might be just a matter of replacing your current data augmentation generator with this one. Give it a go!
- `cai.util.create_image_generator_no_augmentation`: image generator for test datasets.
- `cai.densenet.simple_densenet`: simple way to create DenseNet models. See example.
- `cai.datasets.load_hyperspectral_matlab_image`: downloads (if required) and loads a hyperspectral image from a Matlab file. This function has been tested with AVIRIS and ROSIS sensor data stored as Matlab files.
- `cai.models.calculate_heat_map_from_dense_and_avgpool`: calculates a class activation map (CAM) inspired by the paper Learning Deep Features for Discriminative Localization (see example below).
- `cai.util.show_neuronal_patterns`: creates an array for visualizing first-layer neuronal filters/patterns (see example below).
- `cai.models.CreatePartialModel(pModel, pOutputLayerName, hasGlobalAvg=False)`: creates a partial model up to the layer named in `pOutputLayerName`.
- `cai.models.CreatePartialModelCopyingChannels(pModel, pOutputLayerName, pChannelStart, pChannelCount)`: creates a partial model up to the layer named in `pOutputLayerName` and then copies `pChannelCount` channels starting from `pChannelStart`.
- `cai.models.CreatePartialModelFromChannel(pModel, pOutputLayerName, pChannelIdx)`: creates a partial model up to the layer named in `pOutputLayerName` and then copies the channel at index `pChannelIdx`. Use it in combination with `cai.gradientascent.run_gradient_ascent_octaves` to run gradient ascent from a specific channel or neuron.
- `cai.gradientascent.run_gradient_ascent_octaves`: allows visualizing patterns recognized by inner neuronal layers. See example. Use it in combination with `cai.models.CreatePartialModel`, `cai.models.CreatePartialModelCopyingChannels` or `cai.models.CreatePartialModelFromChannel`.
- `cai.datasets.save_tfds_in_format`: saves a TensorFlow dataset as image files. Classes are folders. See example.
- `cai.datasets.load_images_from_folders`: practical way to load small datasets into memory. It supports smart resizing, LAB color encoding and bipolar inputs.
- `cai.layers.CopyChannels`: copies a subset of the input channels.
- `cai.layers.Negate`: negates (multiplies by -1) the input tensor.
- `cai.layers.ConcatNegation`: concatenates the input with its negation.
- `cai.layers.InterleaveChannels`: interleaves channels, stepping according to the number passed as a parameter.
- `cai.layers.SumIntoHalfChannels`: divides the channels into two halves and then sums both halves, producing an output with half of the input channels.
- `cai.layers.GlobalAverageMaxPooling2D`: adds both global average and max poolings. This layer is known to speed up training.
- `cai.layers.FitChannelCountTo`: forces the number of channels to fit a specific count. The new number of channels must be bigger than the number of input channels. The channel count is fitted by concatenating copies of existing channels.
- `cai.layers.EnforceEvenChannelCount`: enforces that the number of channels is even (divisible by 2).
- `cai.layers.kPointwiseConv2D`: parameter-efficient pointwise convolution as shown in the paper Grouped Pointwise Convolutions Significantly Reduces Parameters in EfficientNet.
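As a rough illustration of what some of these channel-manipulation layers compute, here is a NumPy sketch of the semantics described above (illustrative only, not the actual Keras layer implementations):

```python
import numpy as np

# Toy activation tensor: height=2, width=2, channels=4.
x = np.arange(16, dtype=np.float32).reshape(2, 2, 4)

# Negate: multiply the input tensor by -1.
negated = -x

# ConcatNegation: concatenate the input with its negation along the channel axis.
concat_neg = np.concatenate([x, negated], axis=-1)  # 8 channels

# SumIntoHalfChannels: split the channels into two halves and sum them,
# producing an output with half of the input channels.
half = x.shape[-1] // 2
summed = x[..., :half] + x[..., half:]              # 2 channels
```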
The documentation is composed of examples and PyDoc.
Some recommended introductory source code examples are:
- Simple Image Classification with any Dataset: this example shows how to create a model and train it with a dataset passed as a parameter.
- DenseNet BC L40 with CIFAR-10: this example shows how to create a DenseNet model with `cai.densenet.simple_densenet` and easily train it with `cai.datasets.train_model_on_cifar10`.
- DenseNet BC L40 with CIFAR-100: this example shows how to create a DenseNet model with `cai.densenet.simple_densenet` and easily train it with `cai.datasets.train_model_on_dataset`.
- Experiment with your own DenseNet Architecture: this example allows you to experiment with your own DenseNet settings.
- Gradient Ascent / Deep Dream Example: this example shows how you can run gradient ascent to generate Deep Dream like images.
- Heatmap and Activation Map Examples with CIFAR-10: this example shows how you can quickly display heatmaps (CAM), activation maps and first-layer filters/patterns.
- kEffNet: shows how to create and run kEffNet as described in the paper Grouped Pointwise Convolutions Significantly Reduces Parameters in EfficientNet.
- Saving a TensorFlow dataset into png files so you can use the dataset with a Keras image generator.
The following image shows a car (input sample), its heatmap and both added together.
Heatmaps can be produced following this example:
```python
heat_map = cai.models.calculate_heat_map_from_dense_and_avgpool(InputImage, image_class, model, pOutputLayerName='last_conv_layer', pDenseLayerName='dense')
```
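Conceptually, a CAM in the style of Learning Deep Features for Discriminative Localization is a weighted sum of the last convolutional feature maps, with the weights taken from the dense-layer column for the target class. A minimal NumPy sketch of that idea (shapes and names here are illustrative assumptions, not the actual K-CAI implementation):

```python
import numpy as np

def simple_cam(conv_maps, dense_weights, class_idx):
    """Class activation map as a weighted sum of conv feature maps.

    conv_maps: (H, W, C) activations of the last conv layer.
    dense_weights: (C, num_classes) weights of the dense layer
    that follows global average pooling.
    """
    w = dense_weights[:, class_idx]                    # (C,)
    cam = np.tensordot(conv_maps, w, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0)                           # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
conv_maps = rng.random((8, 8, 16)).astype(np.float32)
dense_w = rng.standard_normal((16, 10)).astype(np.float32)
cam = simple_cam(conv_maps, dense_w, class_idx=3)
```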
These are activation map examples:
The activation maps above were created with code similar to this:

```python
conv_output = cai.models.PartialModelPredict(InputImage, model, 'layer_name', False)
...
activation_maps = cai.util.slice_3d_into_2d(aImage=conv_output[0], NumRows=8, NumCols=8, ForceCellMax=True)
...
plt.imshow(activation_maps, interpolation='nearest', aspect='equal')
```
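Slicing a 3D activation tensor into a 2D image amounts to tiling each channel as one cell of a NumRows x NumCols mosaic, optionally scaling each cell by its own maximum. A hedged NumPy sketch of that tiling (illustrative only, not the actual `cai.util.slice_3d_into_2d` code):

```python
import numpy as np

def tile_channels(activations, num_rows, num_cols):
    """Tile an (H, W, C) tensor into a (num_rows*H, num_cols*W) mosaic,
    one channel per cell, each cell scaled by its own peak value."""
    h, w, c = activations.shape
    mosaic = np.zeros((num_rows * h, num_cols * w), dtype=np.float32)
    for idx in range(min(c, num_rows * num_cols)):
        row, col = divmod(idx, num_cols)
        cell = activations[:, :, idx]
        peak = np.abs(cell).max()
        if peak > 0:
            cell = cell / peak  # per-cell normalization (ForceCellMax-like)
        mosaic[row*h:(row+1)*h, col*w:(col+1)*w] = cell
    return mosaic

acts = np.random.default_rng(1).random((4, 4, 64)).astype(np.float32)
grid = tile_channels(acts, num_rows=8, num_cols=8)  # 32 x 32 mosaic
```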
These are filter examples:
The image above was created with code similar to this:

```python
weights = model.get_layer('layer_name').get_weights()[0]
neuron_patterns = cai.util.show_neuronal_patterns(weights, NumRows=8, NumCols=8, ForceCellMax=True)
...
plt.imshow(neuron_patterns, interpolation='nearest', aspect='equal')
```
With `cai.gradientascent.run_gradient_ascent_octaves`, you can easily run gradient ascent to create Deep Dream like images:

```python
base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
pmodel = cai.models.CreatePartialModel(base_model, 'mixed3')
new_img = cai.gradientascent.run_gradient_ascent_octaves(img=original_img, partial_model=pmodel, low_range=-4, high_range=1)
plt.figure(figsize=(16, 16))
plt.imshow(new_img, interpolation='nearest', aspect='equal')
plt.show()
```
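For intuition: gradient ascent for feature visualization repeatedly nudges the input in the direction that increases the activation of the chosen layer, and octaves repeat this at several image scales. A toy NumPy sketch of the inner ascent loop on a simple analytic objective (illustrative only; the real `run_gradient_ascent_octaves` works on a Keras partial model with TensorFlow gradients):

```python
import numpy as np

def gradient_ascent(x, grad_fn, steps, step_size):
    """Repeatedly move x along the gradient to maximize an objective,
    the core idea behind Deep Dream style visualization."""
    for _ in range(steps):
        g = grad_fn(x)
        norm = np.sqrt((g ** 2).mean()) + 1e-8
        x = x + step_size * g / norm  # normalized ascent step
    return x

# Toy objective: f(x) = -sum((x - target)^2), maximized at x == target.
target = np.full((8, 8), 0.5, dtype=np.float32)
grad_fn = lambda x: -2.0 * (x - target)  # analytic gradient of f
img = np.zeros((8, 8), dtype=np.float32)
result = gradient_ascent(img, grad_fn, steps=200, step_size=0.05)
# result climbs from 0 toward the maximizer at 0.5
```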
The image above was generated from:
Experiments for the following papers were done with the K-CAI API:
- Grouped Pointwise Convolutions Significantly Reduces Parameters in EfficientNet.
- Reliable Deep Learning Plant Leaf Disease Classification Based on Light-Chroma Separated Branches.
After installing K-CAI, you can find documentation with:
```shell
python -m pydoc cai.datasets
python -m pydoc cai.densenet
python -m pydoc cai.layers
python -m pydoc cai.models
python -m pydoc cai.util
```
You can cite this API in BibTeX format with:
```bibtex
@software{k_cai_neural_api_2021_5810092,
  author    = {Joao Paulo Schwarz Schuler},
  title     = {K-CAI NEURAL API v0.1.6},
  month     = dec,
  year      = 2021,
  publisher = {Zenodo},
  version   = {v0.1.6},
  doi       = {10.5281/zenodo.5810092},
  url       = {https://doi.org/10.5281/zenodo.5810092}
}
```