Update intro-2-ANN-Entropy.rst
Sahar Niknam committed Oct 21, 2018 (commit 1b8dca1)
What are artificial neural networks?
====================================
Making some sort of artificial life, capable of acting humanly rational, has been a long-lasting dream of mankind. We started with mechanical bodies, working solely on the laws of physics, which were mostly fun creatures rather than intelligent ones. The big leap took place as we stepped into the era of programmable computers, when the focus shifted to those human skills that are a bit more brainy, and the results became more serious and successful. Code started beating us in the aspects of intelligence that involve memory and speed, especially when tested on well and formally structured problems. But its Achilles' heel was tasks that need a bit of intuition and our banal common sense. So, while code was instructed to outperform us at solving elegant logical problems, at which our brains are miserably weak, it failed to carry out simple, trivial tasks that we can do without even consciously thinking about them. It was as if we had made an intangible creature that is actually intelligent, but in a direction perpendicular to the direction of our intelligence. Thus, we thought that if we really want something that acts similar to us, we need to structure it just like ourselves. And that was the very reason for all the efforts that finally led to the realization of artificial neural networks (ANNs).

Unlike conventional code, which is instructed what to do step by step by a human supervisor, neural networks learn by observing data. It is much the same as the way our brain learns intuitive tasks, for which we have no clear idea of exactly how or when we learned them; for example, a trivial task like recognizing a fire hydrant in a picture of a random street. And that is the way we chose to tackle the common-sense problem of AI. So, what are these neural networks?

Hard tangent function...
.. image:: http://www.animatedimages.org/data/media/695/animated-under-construction-image-0035.gif
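While this part of the text is still under construction, the hard tangent (hard tanh) function itself has a well-known closed form: it is the identity on the interval [-1, 1] and saturates at -1 and +1 outside it. A minimal NumPy sketch:

```python
import numpy as np

def hard_tanh(x):
    """Hard tangent activation: identity on [-1, 1], saturated outside."""
    return np.clip(x, -1.0, 1.0)

# Inside the linear region the function passes its input through unchanged;
# outside it, the output is pinned to -1 or +1.
print(hard_tanh(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```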


**Problem (3)**

::

   Think of a new activation function with some advantages over the popular ones. Run an experiment to compare its performance with theirs. If it outperforms the popular ones, publish a paper on it.
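As a starting point for this exercise, here is a small sketch of such an experiment. The candidate activation (a softsign-style squashing function), the toy curve-fitting task, and all hyperparameters are purely illustrative choices, not part of the original text: a one-hidden-layer network is trained with plain full-batch gradient descent, once with tanh and once with the candidate, and the final mean squared errors are compared.

```python
import numpy as np

# A hypothetical candidate activation: a softsign-style squashing function.
def candidate(x):
    return x / (1.0 + np.abs(x))

def candidate_grad(x):
    return 1.0 / (1.0 + np.abs(x)) ** 2

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2

def train_mse(act, act_grad, steps=2000, lr=0.1, hidden=16):
    """Fit y = sin(x) on [-2, 2] with a one-hidden-layer net; return final MSE."""
    x = np.linspace(-2, 2, 64).reshape(-1, 1)
    y = np.sin(x)
    rng = np.random.default_rng(0)           # same seed for a fair comparison
    W1 = rng.normal(0.0, 0.5, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    n = len(x)
    for _ in range(steps):
        z = x @ W1 + b1                      # hidden pre-activation
        h = act(z)                           # hidden activation
        pred = h @ W2 + b2                   # linear output layer
        err = pred - y
        # Backpropagation of the mean-squared-error loss.
        dW2 = h.T @ err / n
        db2 = err.mean(0)
        dz = (err @ W2.T) * act_grad(z)
        dW1 = x.T @ dz / n
        db1 = dz.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return float((err ** 2).mean())

mse_tanh = train_mse(np.tanh, tanh_grad)
mse_cand = train_mse(candidate, candidate_grad)
print(f"tanh MSE: {mse_tanh:.4f}  candidate MSE: {mse_cand:.4f}")
```

A real version of this experiment would of course need several tasks, multiple random seeds, and tuned learning rates per activation before any claim about one activation "outperforming" another could be taken seriously.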



Training
--------
But...
How is entropy useful for understanding artificial neural networks?
====================================================================
.. [#] And provided that the nodes’ activation functions are nonlinear.
.. [#] Both in an abstract and also a physical sense.
.. [#] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., ... & Bengio, Y. (2015, June). Show, attend and tell: Neural image caption generation with visual attention. In *International conference on machine learning* (pp. 2048-2057).
.. [#] Compare with the fact that you can use, say, a sigmoid neuron almost anywhere in a network, without being sure of what you are doing!
