
Machine Vision

This library implements several anomaly detection methods using TensorFlow 2.x. It can also be used for other purposes, such as GANs, classifiers, or object detectors.

Installation

This repo is based on creating Docker images. Steps for installation:

  1. Install Docker.
  2. (Recommended) Install the NVIDIA Container Toolkit for GPU capabilities in Docker.
    • If it is not installed, you can run the code on CPU resources by adding -p cpu to the execution command.
  3. Build the Docker image with make image.
  4. Run the code with an interactive command line or as a bash script.
    • Command line - Use make image to launch an interactive Docker container. Alternatively, use make headless to launch without display capabilities (GUI-based tools such as plt.show() cannot be used).
    • Bash script - Run bash scripts/alias to register a Docker alias for running bash scripts. You can then run bash scripts with vision bash experiment_bash_script.bash.

Implementations and Results

The Wiki Homepage summarizes major results and method implementations.

Examples of running all the code are provided in the Running the Code wiki page.

Example Results

Example generation of digits:
[Figure] Generator Results - (Left: GAN, Right: WGAN)

Example Anomaly detection results:
[Figure] Autoencoder Anomaly Detection Results - Queried images (left) are passed through the autoencoder and regenerated (center). If an image is not within the original training dataset, the autoencoder struggles to recreate it. In this experiment the autoencoder was trained without data from the four class of MNIST digits. This results in poor reconstructions of fours, often as nines, and a high mean-squared error between the original and generated images (right), which can be used as a predictor for anomaly detection.
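To make the scoring idea concrete, here is a minimal sketch (not this repo's code): a small dense autoencoder is trained on MNIST with the four class held out, and test images are flagged by their mean-squared reconstruction error. The architecture and the 95th-percentile threshold are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Tiny dense autoencoder on MNIST; the repo's models are convolutional and
# configured via JSON, so this stands in only to show the scoring idea.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Hold out the "4" class during training, mimicking the experiment above.
x_train_no4 = x_train[y_train != 4]

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train_no4, x_train_no4, epochs=5, batch_size=128, verbose=0)

# Per-image mean-squared reconstruction error as the anomaly score.
recon = autoencoder.predict(x_test, verbose=0)
scores = np.mean(np.square(x_test - recon), axis=1)

# Flag images whose error exceeds a high percentile of the in-distribution
# scores (an arbitrary but common starting point for the threshold).
train_recon = autoencoder.predict(x_train_no4, verbose=0)
train_scores = np.mean(np.square(x_train_no4 - train_recon), axis=1)
is_anomaly = scores > np.percentile(train_scores, 95)
print("Mean score, 4s vs. others:", scores[y_test == 4].mean(), scores[y_test != 4].mean())
```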
[Figure] Autoencoder Latent Representation - (Left) Latent representation when training data from all digits is used; the autoencoder separates the digits into distinct clusters. (Right) Latent representation when the autoencoder is trained without data from the four class; fours become clustered within the nine cluster due to their similar appearance, which is why fours are reconstructed as nines in the previous figure.
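A 2-D bottleneck makes this kind of latent plot easy to reproduce. The sketch below is a hypothetical stand-in (this repo's models and latent sizes may differ):

```python
import matplotlib.pyplot as plt
import tensorflow as tf

# Load MNIST and hold out the "4" class, as in the experiment above.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
x_train_no4 = x_train[y_train != 4]

# A 2-D latent code can be scattered directly, with no dimensionality reduction.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(2),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train_no4, x_train_no4, epochs=5, batch_size=128, verbose=0)

# Scatter the test-set latent codes colored by digit; the held-out fours
# should fall inside or near the nines' cluster.
z = encoder.predict(x_test, verbose=0)
plt.scatter(z[:, 0], z[:, 1], c=y_test, cmap="tab10", s=2)
plt.colorbar(label="digit")
plt.show()
```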

Performance Notes:

The TensorFlow implementation (average 21.2 s/epoch) of an autoencoder is roughly 10% faster than the PyTorch implementation (average 23.4 s/epoch). The experiment used the same batch size, number of epochs, optimizer, and learning rate on the MNIST dataset. Convolutional networks were used in both cases, but they differ slightly because padding is handled differently in Conv2DTranspose (similar number of variables). Testing was performed on a GTX 1050 (results may vary by GPU).
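The Conv2DTranspose padding difference can be seen directly in the two frameworks' layer APIs. The sketch below only exercises those layers; it is not the benchmark code:

```python
import numpy as np
import tensorflow as tf
import torch

# With stride 2, TensorFlow's padding="same" doubles the spatial size
# exactly, while PyTorch needs an explicit output_padding to reach the
# same shape - hence the slight architectural mismatch noted above.
x = np.random.rand(1, 14, 14, 1).astype("float32")
tf_layer = tf.keras.layers.Conv2DTranspose(1, kernel_size=3, strides=2, padding="same")
print(tf_layer(x).shape)  # (1, 28, 28, 1)

torch_layer = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2,
                                       padding=1, output_padding=1)
print(torch_layer(torch.rand(1, 1, 14, 14)).shape)  # (1, 1, 28, 28)
```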

The PyTorch script is torch-test/AutoencoderTorch.py; a Docker image for PyTorch is included in the docker/ folder. The TensorFlow experiment is run with python Training.py -f AE_PytorchComp.json.

Code Notes

Network Generation

This code base uses modularly defined JSON files to define different network architectures.
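As a hypothetical illustration only (the actual schema of this repo's JSON files, e.g. AE_PytorchComp.json, is not shown here), such a builder can map each layer spec onto tf.keras.layers by name:

```python
import json
import tensorflow as tf

# Hypothetical config: a list of layer specs, each naming a Keras layer
# class plus its keyword arguments. The repo's real schema may differ.
config_text = """
{"layers": [
    {"type": "Conv2D", "filters": 32, "kernel_size": 3, "strides": 2, "activation": "relu"},
    {"type": "Flatten"},
    {"type": "Dense", "units": 64, "activation": "relu"}
]}
"""

def build_network(config: dict) -> tf.keras.Sequential:
    """Instantiate Keras layers by name from a list of layer specs."""
    model = tf.keras.Sequential()
    for spec in config["layers"]:
        layer_cls = getattr(tf.keras.layers, spec.pop("type"))
        model.add(layer_cls(**spec))
    return model

model = build_network(json.loads(config_text))
model.build(input_shape=(None, 28, 28, 1))
model.summary()
```

Keeping architectures in data rather than code means new experiments only need a new JSON file, not a code change.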
