# Synthetic Unity of Neural Net Apperception
## Inspired by Inceptionism
Researchers at Google discovered that neural nets trained for image recognition, when run in reverse, can synthesize their own versions of the very images they were trained to detect.
So for neural nets and humans alike, the old adage holds: we see things not as they are, but as we are.
If only Immanuel Kant were here to see this: a critique of pure A.I. reason and the synthetic unity of neural net apperception.
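The trick, roughly, is gradient ascent on the input pixels rather than on the network's weights: pick a layer, then nudge the image so that layer's activations grow. Below is a minimal sketch of that idea in PyTorch; the model (torchvision's GoogLeNet, the "Inception" architecture), the chosen layer, the step size, the iteration count, and the `input.jpg` filename are illustrative assumptions, not Google's exact recipe.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained GoogLeNet ("Inception") as the recognizer we run "in reverse".
model = models.googlenet(weights="DEFAULT").eval()

# Capture activations of an intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)

preprocess = T.Compose([T.Resize(224), T.ToTensor()])

def dream_step(img, lr=0.05):
    """One gradient-ascent step on the pixels, boosting the hooked layer."""
    img = img.clone().detach().requires_grad_(True)
    model(img.unsqueeze(0))                  # fills activations["feat"]
    activations["feat"].norm().backward()    # "see more" of whatever fires here
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
    return img.detach()

img = preprocess(Image.open("input.jpg").convert("RGB"))
for _ in range(20):                          # repeat to exaggerate the patterns
    img = dream_step(img)
```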
## Neural Net Feedback Loops!
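The dream-like imagery comes from closing the loop: the synthesized output is fed straight back in as the next input, often with a slight zoom, so the patterns the net "sees" keep compounding. Here is a small sketch of that loop, reusing the hypothetical `dream_step` from the block above; the pass counts and zoom factor are again just illustrative.

```python
import torchvision.transforms.functional as TF

def feedback_loop(img, passes=10, steps_per_pass=10, zoom=1.02):
    """Dream, zoom in slightly, and feed the result back in as the new input."""
    frames = []
    for _ in range(passes):
        for _ in range(steps_per_pass):
            img = dream_step(img)            # amplify what the net "sees"
        h, w = img.shape[1:]                 # crop the centre, scale back up
        img = TF.resize(
            TF.center_crop(img, [int(h / zoom), int(w / zoom)]),
            [h, w],
            antialias=True,
        )
        frames.append(img)
    return frames
```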
## Resources
- Inceptionism photo gallery
- Inceptionism - Google Research Blog - Going Deeper into Neural Networks
- Intro to Kant's Critique of Pure Reason - synthetic unity of apperception
- 2015 Conference on Computer Vision and Pattern Recognition - (CVPR 2015)
- Google Computer Vision Research at CVPR 2015
## Articles
- Yes, androids do dream of electric sheep - Inceptionism article in The Guardian
## Papers
- Going Deeper with Convolutions - proposes a deep convolutional neural network architecture, codenamed Inception, which set a new state of the art for classification and detection
- Deep Inside Convolutional Networks: Visualizing Image Classification Models and Saliency Maps - the visualisation of image classification models
- Inverting Convolutional Networks with Convolutional Networks - a new approach to study deep image representations by inverting them with an up-convolutional neural network.
- Understanding Deep Image Representations by Inverting Them - given an encoding of an image, to which extent is it possible to reconstruct the image itself?
- Deep Neural Networks are Easily Fooled - High confidence predictions for unrecognizable images