From b9cb6dad1eb436c4265d8c5c1087dc2f6a482b48 Mon Sep 17 00:00:00 2001
From: Andre Bessi
Date: Tue, 29 Mar 2016 01:23:47 +0200
Subject: [PATCH] Fixed little typos in convolutional-networks.md

Fixed little typos in convolutional-networks.md:
- They are made up of => they are made up of
- The whole network still express => The whole network still expresses
- From the raw image pixels => from the raw image pixels
- vastly reduces the amount of parameters => vastly reduce the amount of parameters
---
 convolutional-networks.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/convolutional-networks.md b/convolutional-networks.md
index c55ea1f0..c0238c37 100644
--- a/convolutional-networks.md
+++ b/convolutional-networks.md
@@ -21,9 +21,9 @@ Table of Contents:
 
 ## Convolutional Neural Networks (CNNs / ConvNets)
 
-Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: They are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still express a single differentiable score function: From the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.
+Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.
 
-So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduces the amount of parameters in the network.
+So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the amount of parameters in the network.
 
 ### Architecture Overview
 