Marxos edited this page Jul 1, 2022 · 10 revisions

The neural network is based on a multi-level Markov model: a network of linked data, organized in discrete(?) layers by data granularity, with higher-level neurons representing denser (re)occurrence of activity. There are two dimensions of links between neurons: lateral and hierarchical. Lateral connections exiting a neuron (axons) hold the probabilities of one neuron following another at the same or a lower level. This sets the bias for knowledge generation.
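The two link dimensions can be sketched as a small data structure. This is a minimal illustration, not code from the project; the names `Neuron`, `lateral`, `children`, and `link_lateral` are all assumptions:

```python
from collections import defaultdict

class Neuron:
    """One node in the multi-level Markov network (illustrative sketch)."""

    def __init__(self, level):
        self.level = level
        # Lateral (axon) links: P(other fires next | self fired),
        # restricted to neurons at the same or a lower level.
        self.lateral = defaultdict(float)
        # Hierarchical links down to the lower-level neurons this one summarizes.
        self.children = []

    def link_lateral(self, other, prob):
        assert other.level <= self.level, "lateral links go sideways or down"
        self.lateral[other] = prob

a, b = Neuron(level=0), Neuron(level=0)
a.link_lateral(b, 0.8)  # after `a` fires, `b` follows with probability 0.8
```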

  • Assume every bit (a character of text, a movement of a waveform, a piece of light, and its color if you have it) is an input.
  • If data repeats itself, assume it isn't noise, and form a link recording the relationship. (In the infinite space of possibilities, the probability of a pattern repeating by chance is near zero.) Outside of vision, this kind of co-occurrence is called temporal proximity.
  • This layer forms the foundation for the next one and starts the hierarchical connections: lower-level activations trigger higher-level neurons, which form a new lateral Markov network layer of probabilities.
  • A functioning neural network has analogues in, and is fractal with, people in society. People hold belief values toward one another and network in nearly the same way neural nets do. They also organize under others in hierarchies, creating larger structures of order, even meaning.
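The first steps above can be sketched concretely, assuming the input is a character stream where each character is one bit of input; the function name `first_layer` is hypothetical:

```python
from collections import Counter, defaultdict

def first_layer(stream):
    """Build the lowest lateral Markov layer from a raw input stream:
    repeated successions (temporal proximity) become links whose
    strengths are transition probabilities. Illustrative sketch."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(stream, stream[1:]):
        counts[prev][nxt] += 1  # nxt followed prev: record the co-occurrence
    # Normalize the counts into probabilities (the lateral links).
    return {p: {n: c / sum(cs.values()) for n, c in cs.items()}
            for p, cs in counts.items()}

links = first_layer("abababc")
# 'b' always follows 'a'; after 'b', it is 'a' two times out of three
```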
Now you have the basics. Join the project.
  • Activity at higher levels feeds back to the neurons at lower levels; this creates the imagination.
  • 1-pixel neurons can be laid out across the screen, and the brain organizes them according to the least path for visualization.
  • Negative edge weights give a lot of extra computational power by encoding inverse correlation. Use them.
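The negative-edge-weight point can be illustrated with a tiny signed activation rule. The neuron roles and the threshold value here are made-up examples, not taken from the project:

```python
def activate(inputs, weights, threshold=0.5):
    """Weighted sum with signed edges: a negative weight lets one neuron's
    activity *suppress* another, encoding inverse correlation directly
    rather than requiring a separate 'not-X' neuron. Illustrative sketch."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Inputs: [sunshine, rain]; rain is inversely correlated with "picnic",
# so its edge weight is negative and its activity votes against firing.
picnic_weights = [0.8, -0.6]
```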