Source: Yann LeCun - Convolutional Network Demo from 1989
Dataset: Kaggle
Complete Jupyter notebook: Link
Metrics:
| Algorithm | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|
| Random Forest (SnapML) | 97.06% | 97.06% | 97.06% | 97.06% |
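The table's metrics can be reproduced from a model's predictions. Below is a minimal pure-Python sketch of how precision, recall, F1-score, and accuracy are computed for a binary task; the labels are toy data, not the notebook's actual predictions:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute precision, recall, F1 and accuracy for a binary task.

    Illustrative helper, not taken from the notebook; in practice one
    would typically use sklearn.metrics on the model's test predictions.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = correct / len(y_true)
    return precision, recall, f1, accuracy


# Toy example: 6 samples, 4 classified correctly
p, r, f1, acc = classification_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

That all four numbers in the table coincide (97.06%) is plausible here because precision, recall, and F1 are averaged over classes on a fairly balanced test set.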
Introduced by LeCun et al. in "Gradient-based learning applied to document recognition", the MNIST database (Modified National Institute of Standards and Technology database) is a large collection of handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. It is drawn from the larger NIST Special Database 3 (digits written by employees of the United States Census Bureau) and Special Database 1 (digits written by high school students), which contain monochrome images of handwritten digits.

The digits have been size-normalized and centered in a fixed-size image. The original black-and-white (bilevel) images from NIST were size-normalized to fit in a 20x20 pixel box while preserving their aspect ratio; the resulting images contain grey levels because of the anti-aliasing used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field.
Source: http://yann.lecun.com/exdb/mnist/
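The center-of-mass centering step described above can be sketched in pure Python. This is an illustrative reimplementation, not the original NIST/MNIST code; the 20x20 anti-aliased size normalization is assumed to have happened already, and the shift is rounded to whole pixels for simplicity:

```python
def center_of_mass(img):
    """Pixel-intensity center of mass (row, col) of a 2D grayscale image."""
    total = sum(v for row in img for v in row)
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    return cy, cx


def center_in_field(digit, out_size=28):
    """Paste a size-normalized digit (e.g. 20x20) into an out_size x out_size
    field, translated so its center of mass lands near the field's center.

    Sketch of the MNIST centering step; offsets are clamped so the digit
    always fits inside the field.
    """
    h, w = len(digit), len(digit[0])
    cy, cx = center_of_mass(digit)
    # Target the geometric center of the field, index (out_size - 1) / 2
    top = round((out_size - 1) / 2 - cy)
    left = round((out_size - 1) / 2 - cx)
    top = max(0, min(out_size - h, top))
    left = max(0, min(out_size - w, left))

    out = [[0.0] * out_size for _ in range(out_size)]
    for y in range(h):
        for x in range(w):
            out[top + y][left + x] = digit[y][x]
    return out
```

For example, a 20x20 image whose only ink is at (5, 5) ends up with that pixel shifted toward (13, 13), near the center of the 28x28 field.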