Mark Janssen edited this page Apr 7, 2019 · 2 revisions

Flow networks are graphs through which energy can be transmitted. In this project, graph edges have weights that give each edge a capacity (or resistance) value for transferring energy from the source neuron to the sink neuron.
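The idea above can be sketched as a weighted digraph where each edge weight acts as a capacity on the energy passed along it each step. This is a minimal illustration, not the project's actual implementation; all names (`step`, `graph`, `energy`) are hypothetical.

```python
def step(graph, energy):
    """Propagate energy one step: each neuron sends energy along its
    outgoing edges, scaled by the edge's capacity weight."""
    new_energy = {node: 0.0 for node in energy}
    for node, out_edges in graph.items():
        for target, capacity in out_edges.items():
            new_energy[target] += energy[node] * capacity
    return new_energy

# Three-neuron example: source -> hidden -> sink
graph = {"source": {"hidden": 0.5}, "hidden": {"sink": 0.8}, "sink": {}}
energy = {"source": 1.0, "hidden": 0.0, "sink": 0.0}
energy = step(graph, energy)  # hidden now holds 0.5
energy = step(graph, energy)  # sink now holds 0.5 * 0.8 = 0.4
```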

Flow networks are interesting because even a small network (n = 3, in a DCG) can exhibit chaotic, unpredictable behavior.

Flow networks are the key to this project's AI. Even a DCG with only 10 neurons has quadrillions of different (unique, non-isomorphic) configurations, not counting the current energy value at each neuron. A graph essentially encodes a concept (an endeme, from WikiWikiWeb?), and it doesn't take much to encode many such concepts.
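A rough back-of-the-envelope count supports the "quadrillions" claim. Counting only edge presence/absence (before weights or energy values are considered), 10 neurons give 10 × 9 = 90 possible directed edges, hence 2^90 labeled digraphs; dividing by 10! to crudely discount isomorphic relabelings still leaves on the order of 10^20 configurations, far above a quadrillion (10^15).

```python
from math import factorial

n = 10
ordered_pairs = n * (n - 1)           # 90 possible directed edges (no self-loops)
labeled = 2 ** ordered_pairs          # each edge is either present or absent
nonisomorphic_lower_bound = labeled // factorial(n)  # crude lower bound

print(labeled)                    # 2**90, roughly 1.2e27
print(nonisomorphic_lower_bound)  # roughly 3.4e20, still >> 1e15
```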

Once a graph configuration becomes stable (in this project, that means it repeats 2 times), a new "meta" neuron is created, with a name supplied by the teacher should they get wise to what the graph represents.
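The stability test above (a configuration counts as stable once it has been seen twice) could be sketched like this; the function name, the state representation, and the counting scheme are all hypothetical.

```python
def is_stable(state, seen_counts, repeats=2):
    """Return True once the same configuration has occurred `repeats` times.
    `state` is a dict of neuron -> energy; `seen_counts` tracks occurrences."""
    key = tuple(sorted(state.items()))  # hashable snapshot of the configuration
    seen_counts[key] = seen_counts.get(key, 0) + 1
    return seen_counts[key] >= repeats

seen = {}
is_stable({"a": 0.5, "b": 0.0}, seen)  # False: first occurrence
is_stable({"a": 0.5, "b": 0.0}, seen)  # True: repeated -> create a meta neuron
```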

The number 2 is used because the universe hardly ever repeats itself; when it does, it is called "information". If this starts consuming too much memory, the program can cull neurons that are old and have had no more than one activation.

The Neuron class should include:

  • date created,
  • date last activated,
  • number of activations
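A minimal sketch of a Neuron class carrying the three fields listed above, together with the culling rule described earlier (old and no more than one activation). The class shape, method names, and the 30-day age threshold are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Neuron:
    name: str
    created: datetime = field(default_factory=datetime.now)        # date created
    last_activated: datetime = field(default_factory=datetime.now) # date last activated
    activation_count: int = 0                                      # number of activations

    def activate(self):
        """Record an activation."""
        self.last_activated = datetime.now()
        self.activation_count += 1

    def cullable(self, max_age=timedelta(days=30)):
        """Old neurons with no more than one activation may be culled.
        The 30-day default is a hypothetical threshold."""
        return (self.activation_count <= 1
                and datetime.now() - self.created > max_age)
```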