Update ANN_intro_and_ Entropy.rst
Sahar Niknam committed Oct 6, 2018
1 parent b1fb7a9 commit 99a80b0
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions docs/ANN_intro_and_ Entropy.rst
@@ -2,7 +2,7 @@
- What is entropy?
- How is entropy useful for understanding artificial neural networks?


What are artificial neural networks?
====================================
Creating some sort of artificial life, capable of acting in a humanly rational way, has been a long-lasting dream of mankind. We started with mechanical bodies, working solely on the laws of physics, which were mostly fun creatures rather than intelligent ones. The big leap took place as we stepped into the era of programmable computers, when the focus shifted to those features of human skill that are a bit more brainy, and the results became more serious and successful. Programs started beating us in the aspects of intelligence that involve memory and speed, especially when they were tested on well and formally structured problems. But their Achilles' heel was the tasks that need a bit of intuition and our banal common sense. So, while programs were instructed to outperform us at solving elegant logical problems at which our brains are miserably weak, they failed to carry out some simple, trivial tasks that we are able to do without even consciously thinking about them. It was as if we had made an intangible creature that is actually intelligent, but in a direction perpendicular to the direction of our own intelligence. Thus, we thought that if we really want something that acts similar to us, we need to structure it just like ourselves. And that was the very reason for all the efforts that finally led to the realization of artificial neural networks (ANNs).
@@ -15,14 +15,14 @@ Perceptron
Let’s start with the perceptron, which is a mathematical model of a single neuron and the plainest version of an artificial neural network: a network with one single-node layer. From a practical point of view, however, a perceptron is only a humble classifier that divides input data into two categories: the ones that cause our artificial neuron to fire, and the ones that do not. The procedure is as follows: the perceptron takes one or more real numbers as input, computes a weighted sum of them, adds a constant value, the bias, to the result, and then uses this sum as the net input to its activation function. That is the function that decides whether the perceptron is going to be activated by the given inputs or not. The perceptron uses the Heaviside step function as its activation function, so the output of this function is the perceptron's output.


.. image:: https://user-images.githubusercontent.com/27868570/46575181-adaca500-c9b0-11e8-8788-ce58fe1fb5bd.png
:alt: Perceptron


In the language of math, a perceptron is a simple equation:

.. image:: http://latex.codecogs.com/gif.latex?H%28%5Csum_%7Bi%7Dw_ix_i%20+%20b%29

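
As a minimal sketch, the procedure above can be written in a few lines of Python; the weight and bias values below are just an illustrative choice, picked so that the perceptron behaves like a logical AND gate:

.. code-block:: python

    import numpy as np

    def heaviside(z):
        """Heaviside step function: 1 if the net input is non-negative, 0 otherwise."""
        return 1 if z >= 0 else 0

    def perceptron(x, w, b):
        """One forward pass: weighted sum of the inputs, plus the bias, fed to the step function."""
        net_input = np.dot(w, x) + b
        return heaviside(net_input)

    # Illustrative weights and bias that make this perceptron act as an AND gate
    w = np.array([1.0, 1.0])
    b = -1.5

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, perceptron(np.array(x), w, b))  # fires (outputs 1) only for (1, 1)

Training a perceptron amounts to nothing more than adjusting these weights and the bias until the boundary between the two categories lands where we want it.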



