# frogeye.js

I discovered the seminal 1959 paper "What a Frog's Eye Tells a Frog's Brain" in an article on Walter Pitts.

Mind == Blown.

Not only was Walter a fascinating person, but the results of the frog eye study jibed with my own work on AI. The powerful simplicity of image processing in a frog's retina inspired me to build a similar type of image processor.

## What Does a Frog's Eye Tell a Frog's Brain?

Here's the gist of the paper. Most people probably think that the vertebrate eye works like a camera:

*(figure: camera eye)*

Light-sensitive cells send a "picture" back to the brain to be analyzed. But when anatomists first dissected vertebrate eyeballs, the structure of the retina was more like this:

*(figure: intermediate processing)*

There was an intermediate layer of cells between the light-detecting ones and the optic nerves. What were these doing? Jerome Lettvin's experiment was designed to find out. He recorded the signals from electrodes in the nerves of the different intermediate cells while he showed pictures to the frogs.

What he discovered was that this middle layer of cells was doing significant visual processing. Among the types of visual analysis found were:

- On/off - These cells signal if something in an area changes from light to dark or dark to light (implying movement).
- Dimming - These cells fire when a large portion of the visible area suddenly gets darker.
- Contrast - These cells recognize edges.
- "Bug" - These cells send a signal when a small, round, moving object enters the field of view.

The frog's brain didn't need to make sense of the world at all. The intermediate retinal cells handled visual processing in a very hard-wired way.

This looks nothing like a neural net. Co-author Walter Pitts was arguably the father of neural networks. After the frog paper was published, he burned all of his work and drank himself to death.

## Running This Code on Your Own Pi

First, set up your Raspberry Pi camera. The Node.js scripts will be using shell commands very similar to this:

```sh
raspiyuv -w 64 -h 48 -bm -tl 0 -o -
```

Those options request raw 64x48 YUV frames, captured continuously in burst mode (`-bm`) as a zero-delay timelapse (`-tl 0`) and written to stdout (`-o -`). Follow the official instructions to install and configure your camera module. If you can run that command on your Pi and see a bunch of data being spewed into the terminal, then your camera is ready for the Node script.

You will also need to install Node.js. The only way I have found to reliably install Node.js on a Raspberry Pi is with the n Node version manager.
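A typical bootstrap looks something like the following (this assumes the third-party n-install convenience script, which also pulls in Node.js itself; review it before piping it to your shell):

```sh
# Bootstrap the n version manager plus a current Node.js;
# n-install is a third-party convenience script, so review it first.
curl -L https://bit.ly/n-install | bash

# Afterwards, switch versions as needed, e.g. to the latest LTS release:
n lts
```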

The easiest way to install the scripts is to install Git, then clone this repository. To install the npm dependencies, run npm install in the repo's root directory. Then you can run the project with npm start. The whole sequence is sketched below.
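Assuming Git and Node.js are already in place, it looks roughly like this (the clone URL is a placeholder; substitute this repository's actual address):

```sh
# Clone the repository (placeholder URL)
git clone https://github.com/<user>/frogeye.git
cd frogeye

# Install the npm dependencies, then start the view server
npm install
npm start
```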

Once the view server is running, you can view it from any computer connected to your local network at the Raspberry Pi's IP address over port 3789. Your router should be able to tell you the Pi's address on your network, or run ifconfig on the Pi and look for the inet addr. It is often something like 192.168.0.27. Then just navigate to that address in your browser to see the results.

http://192.168.0.27:3789/

## 4 Detector Types

### Operation 1: Sustained Contrast Detectors

Each detector in a frog's retina has a different resolution depending on its type. The size of its receptive field is described in approximate degrees of the field of view.

| Detector | Receptive field |
| --- | --- |
| Edge | 2° |
| Bug | 7° |
| Movement | 12° |
| Looming | 15° |

Edge detection is the most fine-grained of these receptors. For simplicity I am grabbing raw image data at 64x48 pixels, so we'll make an edge's receptive field equal to one pixel. The first version is a deliberate simplification that gives good results: in the frogeye.isEdge method I check every pixel to see whether any one of its 4 adjacent pixels is significantly brighter. I am using a flat difference of 50 luma (out of possible values 0-255) as the indicator of a significant brightness difference. This is a simple way to detect contrast, but it has some issues.
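In outline, the check looks something like this (a minimal sketch of the approach, not the exact frogeye.js code; it assumes each frame arrives as a flat array of luma bytes):

```js
const WIDTH = 64;
const HEIGHT = 48;
const THRESHOLD = 50; // luma difference treated as "significant"

function isEdge(luma, x, y) {
    const here = luma[y * WIDTH + x];
    // The 4 adjacent pixels: left, right, above, below
    const neighbors = [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]];
    return neighbors.some(([nx, ny]) => {
        // A neighbor outside the frame can't make this pixel an edge
        if (nx < 0 || nx >= WIDTH || ny < 0 || ny >= HEIGHT) {
            return false;
        }
        return luma[ny * WIDTH + nx] - here > THRESHOLD;
    });
}
```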

One bug is that edges whose contrast sits very close to the 50-luma threshold will flicker due to normal light-intensity fluctuations in the camera.

The biggest difference between the behavior of this algorithm and the retinal cells it means to emulate is that those cells keep firing after the light is removed. I hope to change the logic to persist the edge image, which may also fix the flickering issue; one possible shape for that change is sketched below.
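Purely as a sketch of the idea (none of this is committed code), persistence could mean blending each new edge map with the previous one, so edges fade out over several frames instead of vanishing at once:

```js
// Sketch: keep a decaying activation per pixel instead of a boolean
const DECAY = 0.9; // fraction of the old activation kept each frame (assumed)

function persistEdges(previousActivations, currentEdges) {
    return currentEdges.map((edgeDetected, i) =>
        // A freshly detected edge fully activates the cell;
        // otherwise the old activation decays toward zero.
        edgeDetected ? 1 : previousActivations[i] * DECAY
    );
}
```

An edge hovering near the 50-luma threshold would then stay partially active between frames, which should also damp the flicker.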

### Operation 2: Net Convexity Detectors

Still working on this one. It will probably take the output of both the edge and movement detectors, plus an alpha-shape check.

### Operation 3: Moving-Edge Detectors

I am going to write this to accept the output of multiple edge detectors; a speculative sketch follows.
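Speculatively (this detector doesn't exist yet, and the receptive-field grouping here is my assumption), each movement cell might watch a block of edge-detector outputs and fire when they change between frames:

```js
// Speculative sketch: a moving-edge cell covering several pixels fires
// when the edge pattern inside its receptive field changes between frames.
function isMovingEdge(previousEdges, currentEdges, fieldIndices) {
    // fieldIndices: pixel offsets belonging to this cell's receptive field
    return fieldIndices.some((i) => previousEdges[i] !== currentEdges[i]);
}
```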

### Operation 4: Net Dimming Detectors
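This one isn't written yet either. Going by the description of dimming cells above, it will probably amount to comparing overall brightness between frames; something like this sketch (the threshold value is an assumption):

```js
// Sketch: a net dimming cell fires when mean brightness over its
// receptive field drops sharply from one frame to the next.
const DIMMING_THRESHOLD = 20; // luma drop treated as "sudden" (assumed)

function isDimming(previousLuma, currentLuma, fieldIndices) {
    const mean = (luma) =>
        fieldIndices.reduce((sum, i) => sum + luma[i], 0) / fieldIndices.length;
    return mean(previousLuma) - mean(currentLuma) > DIMMING_THRESHOLD;
}
```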
