
Whitepaper: Self Aware Networks Theory of Mind (3rd Draft)


Whitepaper Youtube Video Link

The central thesis of my book Self Aware Networks is NAPOT: Neural Array Projection Oscillatory Tomography.

NAPOT is how we can build phenomenologically conscious Self Aware Neural Networks at animal level or higher.

NAPOT is the theory from which existing artificial neural network architectures can be adapted to become sentient Self Aware Neural Networks with internal representations, internal thoughts, feelings, images, sounds, tastes, smells, animal or human level experiences and so on.

NAPOT is not just about how the brain perceives its own representations; it is also about how memories scale from synaptic connections to the whole brain, and about how information flows through the mind and is connected across the networks of the brain via oscillations.

NAPOT, Neural Array Projection Oscillatory Tomography, is a theory that explains how your brain sees its own models of reality. It is how human phenomenological consciousness works, and it is how we can build sentient robots that are conscious just like you.

Signal Inception: Neural Array Projection (At the scale of Neural Arrays: Transmission is Projection, and Projection is Rendering)

Think of a neuron as something that is both a pattern sensor & a phase transmitter. A neuron also has thresholds for evaluating incoming sensory patterns, so the neuron is physically deciding, based on its connections & morphological configuration, what kind of information to pass onwards and what kind of information to disregard.

Our brains are rendering a representation of reality and ourselves with computed graphics. The concept is that a set of Neural Arrays is passing phase patterns that represent learned data & detected data. The detected data creates a projection or a rendering for the next neural array to perceive.

Imagine that the dendrite on every neuron is like an eyeball or a microphone, and that the exit terminal on every neuron is like an LED television pixel or a speaker.

Except that instead of an LED or a Speaker what is being transmitted is a phase change, and instead of an eyeball or a microphone what is sensing is a dendrite with receptors.

Understand also that the neurons in the neural array defined by the exit terminal of one neuron are receiving information from many neurons. So while I ask people to consider that the front of a neuron, the dendrite, is like an eyeball, and the back of the neuron, its exit terminal array, is like an LED light in a TV screen for the next neural array, the next neural array isn't getting just one LED from one neuron. Each of the neurons in the next neural array sees many LED light signals, or phase changes, from many neurons in its receptive field. The output of one neuron is like one LED light for the next neural array, but the next neural array is seeing many LED lights from many neurons all at once.

Obviously there are no LED lights in the brain. The LED image is only meant to help explain NAPOT; what is really being transmitted between neurons is a phase change in the oscillation of that neuron's cyclic activity, which the whole oscillating group of cells senses together.
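
To make the eyeball/LED analogy concrete, here is a minimal, purely illustrative sketch in Python: each downstream unit "watches" a few upstream outputs, evaluates them against a threshold, and emits its own phase change for the next array. The class name, the receptive-field wiring, and the threshold value are all invented for this sketch, not taken from biology.

```python
# A minimal, illustrative sketch of the "eyeball in, LED out" analogy above.
# Nothing here is biologically exact: "phase change" is reduced to a single
# float per neuron per tick, and the receptive field is a fixed index list.

import random

class ToyNeuron:
    def __init__(self, receptive_field, threshold=0.5):
        self.receptive_field = receptive_field   # indices of upstream neurons it "watches"
        self.threshold = threshold               # firing threshold (the pattern evaluation step)

    def sense_and_project(self, upstream_phases):
        # "Eyeball": read the phase changes of every upstream neuron in view.
        seen = [upstream_phases[i] for i in self.receptive_field]
        drive = sum(seen) / len(seen)
        # "LED": emit a phase change only if the sensed pattern crosses threshold.
        return 1.0 if drive >= self.threshold else 0.0

# One array projecting to the next: every downstream neuron sees many upstream outputs.
upstream = [random.random() for _ in range(8)]
downstream_array = [ToyNeuron(random.sample(range(8), 3)) for _ in range(5)]
projection = [n.sense_and_project(upstream) for n in downstream_array]
print("upstream phases:", [round(p, 2) for p in upstream])
print("projection seen by next array:", projection)
```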

Signal Reception: How synaptic connections, receptor configurations, dendrite and cell morphology account for memory

When I was working through NAPOT I realized that when synaptic connections change, that is long term potentiation (LTP) at work. When your synaptic connections grow you are establishing memories in the physical change of the cell, because that changes what the cell responds to, what the cell is sensitive to, and what activates the cell. Therefore the physical structure of the cell, as represented by synaptic connections (but not only by synaptic connections), is a memory. In the brain & body, at root, a long term memory is a physical structural change that alters which signals some part of the biology responds to over time.

What is a memory, what does a memory mean?

An example of a memory is when I need to remember (and thus respond to) what some object is, such as AR Glasses. I first need to have a representation of AR Glasses learned in my mind, encoded in the synaptic connections & dendritic morphology of cells.

Cells grow to respond to certain memory patterns. Cells grow to ignore other memory patterns. When I see AR glasses the dendrite with its receptors responds to it, the cell is activated and therefore the memory is activated.

The cell grows to better recognize that memory again in the future.

When cells are physically configured to be sensitive to a certain memory, they activate when that pattern is detected, and they do not activate when it's not.

Synapses & dendrites constitute what I term an Expert Data Structure (EDS).

The postsynapse, which is the receiving dendrite, has receptors; that is how it perceives patterns. All cells have receptors, and this fact led to a variation on NAPOT called COT, or Cellular Oscillating Tomography (which is a new theory of evolution that you can read about in my notes on this GitHub and in my book).

The receptors in every single cell have thresholds for firing. The receptor thresholds are like a mini fractal of the action potential threshold in the neurons. Receptors have to consider four levels of conductance to determine their response.

This is the four levels of conductance article:

"Opening of glutamate receptor channel to subconductance levels"

https://www.nature.com/articles/s41586-022-04637-w
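
As a rough illustration of the subconductance idea (and only an illustration; the cutoffs below are invented, not taken from the cited paper), a receptor could be modeled as returning one of four graded conductance levels instead of a binary open/closed state:

```python
# Illustrative only: a receptor whose response is one of four conductance
# levels rather than a binary open/closed state. The cutoffs are invented
# for the sketch, not taken from the cited Nature paper.

def receptor_conductance(glutamate_bound: float) -> int:
    """Map a bound-ligand fraction (0..1) to one of four conductance levels."""
    if glutamate_bound < 0.25:
        return 0          # closed / negligible conductance
    elif glutamate_bound < 0.5:
        return 1          # first subconductance level
    elif glutamate_bound < 0.75:
        return 2          # second subconductance level
    else:
        return 3          # full conductance

for bound in (0.1, 0.3, 0.6, 0.9):
    print(bound, "->", receptor_conductance(bound))
```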

Recollection: How memories work

The downstream neurons are going to respond to what they are receiving based upon their synaptic connections (and morphology) which is your memories, your long term memories.

The neuron's dendrite is a sensor, and many neurons form sensor arrays. When the neuron detects a pattern with sufficient confidence the action potential is triggered, creating a phase change. The phase change is broadcast out the exit terminal. The exit terminal defines the inceptive field for that cell, what it is creating for the next neural array. Inceptive fields from many neurons overlap: each neuron in an array creates one part of the inceptive field for the next array, but the receiving neural array's receptive field (its dendritic branches) receives data from many overlapping inceptive fields. The inceptive fields (exit terminal broadcasts) & the receptive fields (dendritic sensor arrays) are not the same, but they overlap & share parts.

This process, of exit terminal inceptive fields branching radially in every possible direction, is part of how a neuron goes from having a small synaptic memory that it detects to having something the whole brain can pay attention to.

The dendrite learns a specific physical information pattern, because it grows to respond only to certain input patterns, and it decays to ignore other patterns. The neuron's synaptic memory configuration (and its dendritic morphology) allows it to selectively respond to certain types of patterns, and not to others.
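
A hedged sketch of "grows to respond, decays to ignore": the Hebbian-style weight update below is my stand-in for the synaptic and morphological change described above, not a claim about the exact biological learning rule. The learning and decay rates are arbitrary.

```python
# Strengthen inputs that were active when the cell fired; slowly weaken the rest.
# This is an illustrative rule, not the biological one.

def update_weights(weights, inputs, fired, grow=0.1, decay=0.01):
    new = []
    for w, x in zip(weights, inputs):
        if fired and x > 0:
            w += grow * x          # "grows to respond" to this pattern
        else:
            w -= decay * w         # "decays to ignore" unused inputs
        new.append(max(0.0, w))
    return new

weights = [0.5, 0.5, 0.5]
pattern = [1.0, 0.0, 1.0]           # the pattern this cell keeps seeing
for _ in range(20):
    fired = sum(w * x for w, x in zip(weights, pattern)) > 0.6
    weights = update_weights(weights, pattern, fired)
print([round(w, 2) for w in weights])  # weights on the seen inputs grow, the unused one decays
```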

When the Neuron broadcasts out its phase change signal it scales up the pattern from itself, one tiny neuron, first to its exit terminal array, via inhibition, and then to the whole brain via a sharp wave ripple that alternates between inhibitory waves & excitatory waves.

The inhibitory waves are going to change the inhibitory interneuron's path, bifurcating the path of the signal, and that is going to change which neurons are getting activated, and thus it is going to change the representation of what your mind is perceiving.

Bifurcations in the interneuron signal paths create different patterns in the Expert Data Structure of your brain by changing your mental rendering of reality at a given time.

When a neuron broadcasts its output to a whole array, for the receiving array imagine it is like receiving an LED light pixel on a tv screen.

Imagine that you are that receiving array and you don't know you are in a room defined by the exit terminal, you just see these LED light pixels that represent sensory data, like images or sounds.

In a sense the exit terminal broadcasts its output to the whole room that you are in, but in reality the whole room is its exit terminal array, or just its connections.

Of course instead of LED light pixels however we are talking about phase changes, or burstlets.

An inhibited cell is like a dead pixel in your tv screen, and the inhibited neuron is recognized because the expectation is for a regular tonic signal firing pattern (the pixels are supposed to be lit at regular intervals).

The entire oscillating group adjusts to the inhibited neuron. To understand this in more detail I recommend the book Sync by Steven Strogatz, but in a nutshell the neurons are like clocks that reach equilibrium by knocking on each other until their signals settle into a dissipated equilibrium, until they are oscillating together, which is why inhibiting a cell gets noticed, physically noticed, by the other cells in the group. You may not have thought of an inhibited cell as sending a signal, but in the physics of oscillation it is a signal.
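
The "clocks knocking on each other" picture can be sketched with a Kuramoto-style toy model (my own choice of formalization, not something taken from Sync itself): coupled oscillators pull toward a common phase, so freezing (inhibiting) one unit changes the mean field that every other unit feels.

```python
# A rough Kuramoto-style sketch: coupled oscillators pull toward a common
# phase, so silencing (inhibiting) one unit changes the mean field that every
# other unit feels. All parameters are arbitrary.

import math

def step(phases, coupling=0.8, dt=0.05, natural_freq=2 * math.pi, inhibited=None):
    n = len(phases)
    new = []
    for i, p in enumerate(phases):
        if inhibited is not None and i == inhibited:
            new.append(p)                      # an inhibited cell stops advancing...
            continue
        pull = sum(math.sin(q - p) for q in phases) / n
        new.append(p + dt * (natural_freq + coupling * pull))
    return new

phases = [0.0, 1.0, 2.0, 3.0]
for t in range(200):
    phases = step(phases, inhibited=3 if t > 100 else None)
# ...and the rest of the group measurably shifts, because its silence alters the mean field.
print([round(p % (2 * math.pi), 2) for p in phases])
```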

Multi-modal Oscillation Perception

The idea with multi-modal patterns is that your ears are receiving patterns, your eyes are receiving patterns, your mouth is receiving patterns, and each one of those things is a sensory modality. Those patterns are received by your brain, get transmitted, and link together (neural pathways converge & patterns link through the physics of oscillation).

Your brain combines what you hear, what you see, what you touch, and what you sense. You might say "Oh, that is what I touched; I heard that sound & I saw my hands clap." Your neural circuits receive that information, and your brain combines it.

Your brain can combine different signal types because the signals are being transmitted across the brain, rippling across the brain. Rippling signals reach every neuron and alter the timing of every neuron, like a group of fireflies. Your whole body is like a single sensor that is sensitive to anything it can sense, and those signals percolate, oscillate, and bind through oscillation into a tomography: an oscillatory tomography of the signals being received by the single sensor (that is you).

In split brain patients you still see synchronous activity between the left & right hemispheres. There are other pathways, such as through the thalamic bridge, but these other pathways are not required, because brainwave activity keeps the oscillatory activity of the brain very regular. So even though it looks like noise it is working: as a high magnitude attractor, the brain is consistently kept in a ready, aware state throughout the day, expecting both the expected and the unexpected with tonic high magnitude brainwave rhythms.

The Flow of information in the brain

To recap: The first neural array (in your eyes for example) is selectively reacting to sensory input signals from the environment, and then it's rendering a pattern (inception, the exit terminal, constituting the presynaptic branches, is the inceptive field) for the next neural array to perceive (reception, the receiving dendrites of the next array represent the receptive field of the next array.)

Imagine that the first array layer is like the input to a computer, like your keyboard, and what it sends out is like a computer screen, or your tv monitor. The next array perceives that picture, in a sense sees or hears or feels that picture, and then it creates its own pattern representation for the array layer behind it, the process keeps repeating across the whole brain, from the sensory input neural pathways, and eventually out via the motor output neural pathways. If your brain & body is a fractal of a neuron, the sensory inputs are the dendrites, and the motor outputs are the phase changes, your body's movement is your computer screen output.
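
Here is a toy version of that keyboard-to-screen flow, assuming nothing more than a stack of thresholded arrays: each array "perceives" the previous array's rendering and emits its own, from a sensory front end to a motor back end. Layer sizes, weights, and the thresholding rule are placeholders for the much richer loops described in the notes below.

```python
# A toy "flow of information" pipeline: each array renders a pattern for the
# next, from a sensory front end to a motor back end. Purely illustrative.

import random

def render_next(pattern, weights, threshold=0.5):
    """One array 'perceives' the previous array's rendering and emits its own."""
    out = []
    for row in weights:
        drive = sum(w * p for w, p in zip(row, pattern))
        out.append(1.0 if drive >= threshold else 0.0)
    return out

random.seed(0)
layers = [[[random.uniform(0, 0.5) for _ in range(6)] for _ in range(6)] for _ in range(4)]

sensory_input = [1, 0, 1, 1, 0, 0]            # the "keyboard": retinal/ganglion layer
activity = sensory_input
for weights in layers:                         # thalamus -> V1 -> ... -> motor pathways, loosely
    activity = render_next(activity, weights)
motor_output = activity                        # the "screen": what the body actually does
print(motor_output)
```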

Self Conception "The flow of information in the brain from incoming senses, to motor outputs and everything in between"

b0327y.md "The flow of information in the brain" https://github.com/v5ma/selfawarenetworks/blob/main/b0327y.md

This note, b0327, is the main note on the flow of information in the brain. It's a conversation that I had with a brilliant neuroscientist; we talk about how signals come in from the senses and where they go after that: from your eyes, along the optic nerve, to the thalamus, to the occipital lobes in the back of your head (then splitting up to the parietal lobes and also down to the temporal lobes).

With visual activity, signals go not just from the thalamus to V1 but also from the thalamus to V2, and both feed back to the thalamus. There are all these loops and cycles of brainwave signal activity; there is a lot happening in terms of how information or signals flow through the brain, and my notes cover a lot of that.

There are cycles & feedback loops at every level, starting from backwards propagating action potentials. Yes, there are feedback cycles, loops of brain activity defined in neural pathways. This really dives into Douglas Hofstadter's work, Gödel, Escher, Bach: An Eternal Golden Braid, and I Am a Strange Loop. You have the feedback cycles of neural activity that can give rise to phenomenological conscious self awareness, the strange loop that Douglas Hofstadter talks about in his books. That's in your brain: oscillating feedback loops at many different scales, from the smallest cells to the largest networks. Your brain is a fractal of oscillating feedback loops.

Self Motor Correction: Neural Circuits: Thoughts & Motor Output

Imagine that traveling through the networks of the brain are information patterns, as phase burstlet variations deviating from a root tonic oscillation pattern. The high phasic signals are traveling in ripples through our neural circuits so that we can have an inner voice by having different parts of the network activate to create different muscle movements that create sequences of sounds. Sequences of cells trigger sequences of sounds, sequences of cells trigger sequences of inner thoughts.

Our inner thought, our inner dialog, or even my exterior dialog, is correlated with a sequence of firing brain activity that causes the muscles in my larynx to produce sequences of words and sounds, such as vowels, consonants, and other sounds, and there are sequences of neural activity that cause these muscular changes, these motor outputs.

So the motor outputs are driving my voice, my words, & my fingers when I type. Traveling through these neural circuits are patterns of activity; as activity patterns flow through the neural paths of my motor output in different cell firing sequences, they evoke different sequences of movement that you can see, such as different sequences of movement in the larynx.

a0269z.md "this causes neural circuits to fire in sequences like lines on a tv screen that is seen by the oscillator itself because each neuron is mechanically listening to other neurons" https://github.com/v5ma/selfawarenetworks/blob/main/a0269z.md

Scaling Memory Recollection

I figured out a process for how neurons can scale up their memories, and I have new theories of how long term memories (LTP) and long term forgetting (LTD) work. The first concept of Long Term Potentiation is historically credited to Santiago Ramón y Cajal in his 1894 Croonian Lecture, where he proposed that memories might be formed by strengthening connections between neurons. In 1949 Donald Hebb proposed that cells grow new connections and make metabolic & synaptic changes. Some of the latest research adds to this by exploring how the morphology of the dendrite gives rise to additional computational complexity for the neuron's ability to learn, predict, and recognize complex information patterns.

In your eyes, ears, nose, and mouth, the first layer of sensory input neurons, the ganglion neurons, render information for the next layer or array of neurons to perceive, and the process repeats, with each neuron in each array or layer rendering some pattern for a subsequent or downstream array of neurons to perceive. No neuron array understands that it is not the first layer of neurons, because a neuron only perceives what is in its receptive field, so in a sense all neuron arrays are sensory input neurons, and all neuron arrays are pattern output arrays.

Scaling Cortical Column Inception

Each neuron in an oscillating group, such as a cortical column, takes a turn at firing to represent an activated memory, while other neurons become inhibited which magnifies that activated memory to a greater scale.

When a neuron broadcasts its high phasic wave, its signal zooms out along the paths of its exit terminal to many neurons, but it keeps going: it creates a sharp wave ripple of alternating waves of inhibition & excitation, somewhat like the photocopier effect that Strogatz mentions in his book Sync.

I argue that the computational units involve oscillations happening at multiple scales, and that the patterns in our mind are scale invariant: patterns can be generated, converted, and played back at different network scales, defined by single oscillators (like neurons) or oscillating groups (like nuclei, edge communities, or cortical columns).

FRACTAL Conscious Perception: Functional Recursive Activity Cortical Telescoping Asymmetric Lensing

What I mean is that activity in the cortical columns is fractal; it's an oscillating feedback loop. The smaller patterns at neural scales are magnified at cortical scales and communicated across the brain via the pyramidal cells and the major brain networks, including the thalamic connections. But this magnification of synaptically stored memories to cortical columns has a lensing effect, a magnification effect, so that your whole brain can be focused on one tiny detail, or one tiny memory. The column scale memories, like the neuron scale memories, are differentiated: none of the representations perfectly mirror the rest, they are similar but different, like asymmetric copies of magnified synaptic memories.

The section on fractals, anatomical & functional fractals, as well as fractals in medical imaging, is intended to support the hypothesis I am sharing about how memories stored in synaptic connections scale up to whole brain activity.

Memories have to scale up and move, from being stored in synaptic connections in one tiny place, to being something that your brain can be conscious of and merge together with multiple synaptic memories from multiple sensory modalities representing different aspects of that memory.

Touch, taste, smell, texture, feeling, emotion, visual & acoustic memories are all thought to be evoked from different places, but if these memories are stored in tiny synaptic connections, they have to scale up, move, and converge in order to connect together the different brain regions that are thought to represent the different aspects of these evoked memories.

N.A.P.O.T. provides an explanation for how this happens, and the fractal section of the book is meant to support the idea of memory scaling, or scale invariant memory recall. Memories have temporal, spatial, and scale invariance, and their information is encoded as phase variations that we can quantify mathematically and compute in an artificial brain.

The part about fractals in medical imaging also supports the idea that synaptic memories stored in individual neurons scale up to become conscious memories by creating alternating inhibitory excitatory waves that ripple across the brain in sharp waves.

References to fractals in medical imaging

References to Spikes inhibiting nearby neurons.

References to Sharp Wave ripples in the Hippocampus.

Conscious Perception: The Oscillator is the Observer.

Observation, is the Collective Oscillation, of the Single Sensor, that is all your cells, and you.

Jack Gallant is a well known neuroscientist based at Berkeley. In his laboratory he brought people in to sit in an MRI machine while they watched a movie, and the machine made correlations between the blood flow activations in their brain and the movie they were watching. The machine could then predict, based on the blood flow activations alone, what image they were seeing in the movie.

The machine was just matching images from each frame of the movie to what your blood flow pattern was at the corresponding moment in time. So the machine was not decoding human emotions or intentions, it was just learning image patterns.

The point is that it is broadly accepted that the brain is making representations of reality inside itself. We have neural correlates that neuroimagers map and try to decode. So what we perceive, we are constructing in our brain.

But where inside the brain is the observer? Where is the inner eye that is observing? They ask "Where is the locus of consciousness in the brain, where is it all coming together?"

Where is the Observer Gallant?

a0417z "The key thought about where the observer is inside the mind, where is the person inside who is watching the brain's representations, is to think of the flow of information in the brain as a series of arrays" https://github.com/v5ma/selfawarenetworks/blob/main/a0417z.md

We know, when we talk about your brain, that we can talk about the neural correlates of your experience. In Jack Gallant's work, he has someone sit in an MRI machine and watch a movie, and the computer correlates each frame of the movie to different parts of that person's brain activity patterns, as indicated by the data in the MRI machine. The computer associates each frame of the film with the blood flow activity that is thought to have occurred in reaction to that frame of the movie. So there are neural correlates to what you are seeing.

But the question is, if the brain is making models of reality, with neural correlates, where is the observer, where is the man inside, the eyeball inside, the third eye that is seeing what your brain is modeling with neural correlates? My suggestion is that it's the neural arrays. That is what NAPOT means.

The concept introduced with Neural Array Projection Oscillatory Tomography is that each neural array is seeing part of the picture (and each neural array is computationally rendering part of the picture), and through oscillation different parts of the picture are bound together.

Self Conception: Oscillations bind it all together, they unify or entify the entity that is you.

The tonic brainwave oscillation represents a synchronized attractor for the oscillation of your unconscious active canvas of phenomenological conscious awareness. Your brainwave pattern helps unify your cells into a ready state, a state of criticality, tuned to expect the expected & the unexpected, a process referred to by some as memory-prediction, by others as predictive coding, like those concepts but oscillating & binding your temporal & spatially distributed models of reality through oscillation.

Your tonic brain waves are dissipating the phasic burstlets that are tempo-spatially distributed memory-predictions that are driving your experiences, your reality, your choices, and you. The brainwave activity pattern is a key part of the memory oscillation binding together the reality of you.

So when one neuron spikes faster, with a phasic or high phasic spike, it causes many of the neurons in its exit terminal to become inhibited, creating a synchronized inhibitory pattern whose timing is set by the decay rate of the action potential, which is set by the quantity of potassium in the neuron at the time the threshold for the action potential was triggered.

a0329z "The flow of information in neural circuits is primarily regulated by modulation of synaptic efficacy" https://github.com/v5ma/selfawarenetworks/blob/main/a0329z.md

It means that your synapses can be inhibited or excited, and they can spike higher: there is a tonic frequency, there is a phasic burst, there is a high phasic burst, and there is an inhibited signal. Your nerve cell can release either 0, 1, 2, or 3 vesicles (sacs of transmitters), and that determines whether the downstream neurons will receive signals to be inhibited or excited.

To expand on this idea, read the notes on neuron transmission, vesicles, calcium duration, APD (Action Potential Duration) & more. The essential point is that the phase projection between one neural array and the next happens via the release of 0, 1, 2, or 3 vesicle sacs at each interval relative to the group oscillation. You can imagine this phase change as the literal paint of the mind's internal representations, the qualia inside the mind that are perceived by the observer; that observer is the oscillating group of cells that is you, but each unit of oscillating cells is a unit of observation.
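
A small sketch of the 0/1/2/3-vesicle idea as a discrete signalling alphabet (the magnitude thresholds are invented for illustration): instead of a binary spike, each unit passes one of four release levels per oscillation interval, which the downstream array reads as inhibited, tonic, phasic, or high phasic.

```python
# Illustrative mapping from a unit's phase magnitude this interval to a
# discrete vesicle release count. The cutoffs are invented for the sketch.

def vesicles_released(phase_magnitude: float) -> int:
    if phase_magnitude < 0.2:
        return 0      # inhibited: downstream sees a missing "pixel"
    elif phase_magnitude < 0.5:
        return 1      # tonic baseline
    elif phase_magnitude < 0.8:
        return 2      # phasic burst
    else:
        return 3      # high phasic burst

interval_magnitudes = [0.1, 0.4, 0.7, 0.95]
print([vesicles_released(m) for m in interval_magnitudes])   # -> [0, 1, 2, 3]
```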

The Physics of Oscillation

That activity is going to cause the whole oscillating group of cells to notice, and that comes down to the physics of oscillation that connects to Steven Strogatz's work in the book Sync when he talks about fireflies & neurons & clocks.

To do: explain, with citations from Strogatz's book, how two metronome clocks affect one another and synchronize. See the references to fireflies, Steven Strogatz's work, Buzsáki's work, and search for "Oscillat" in the book notes at the Self Aware Networks Institute on GitHub.

The physics of oscillation allow your neuron's high phasic spikes to cause inhibitory effects on other neurons that the whole cell assembly will feel. As the energy dissipates across the oscillating group over time, that energy, which is also information, becomes in effect a sharp wave rippling across the oscillator (a cortical column is a good example of an oscillating group of cells) & between networked oscillators (networked cortical columns) across the whole brain.

So the neuron that spikes causes inhibitory effects that the whole cell assembly & the brain will feel, via the principles of oscillatory sync, where oscillators essentially dissipate signals to one another. This process allows your cells to basically act as a single sensor: your body becomes an entified sensor array that can bind together sensory information on a collective scale, because incoming signals are felt by the entire network as they are passed on in phase changes, dissipating the information as energy across the collective cell assembly of your brain & body. So signals are dissipated everywhere.

Self Aware Conception & Perception: What is Oscillation Tomography

Oscillation Tomography is the collective entification of phase patterns transmitted between neural arrays. These phase changes, passed between neural arrays, become part of the tomography of the picture of the sensed, felt, smelled, touched, lived-in experience of reality. A tomography is an experience built from entified phase patterns passed between neural arrays, rippling across the whole brain, intersecting with & defining the tonic oscillation pattern.

What is meant by Oscillating Tomography.

b0153y "Neural Oscillatory Tomography (not Holography)" https://github.com/v5ma/selfawarenetworks/blob/main/b0153y.md

I want people to think about Holography, like Holographic images, but it's not Holography, it's Tomography. So when I say Tomography I want you to imagine a Hologram (just via a different process) but it's a tomogram. Your mind is making Tomograms by producing phase changes. It's also perceiving its own Tomograms with its receptors & dendrites.

Artificial Conscious Perception: The Oscillator is a unit of Sentient Observer

This part essentially covers the concept that the oscillator is the perceiver, the oscillating group of cells as a collective is an observer.

A neuron is a sensor & a transmitter, but it is the group that is perceiving. The group of cells that is oscillating together is storing the memory. This is an important concept, because a given neuron could be inhibited from firing at the interval of time when a new pattern comes in, and some other neuron has to respond to it instead. That is possible because it is actually the group that is learning the memory, the group of cells, so any part of that group of cells can receive the signal and the rest of the group can react to it. In fact they do physically react to any pattern that they receive as a collective, like the fireflies in the book Sync by Steven Strogatz.

When I say that a neuron is mechanically listening to other neurons. I mean that a neuron is physically reacting, in a mechanical way to the signals from the previous array of neurons.

The argument that I am making is that every neuron is a sensor and a transmitter: every nerve cell has the dendrite, which is the sensing part of the neuron, and every cell has the exit terminal, which is the transmission or broadcasting part of the neuron. So every neuron is sensing part of the picture and transmitting part of the picture, and through the physics of oscillation all of the different pictures are bound together into a whole image; that is how the human brain makes the conscious mind, and that's my book. But I also go deep into the neurophysics of what is actually happening at the physical level of the neuron, how memories are formed via synaptic connections, and how they have to scale up. How does a memory go from something that tiny, stored in synaptic connections, to something that your whole brain is aware of?

So imagine that what you are seeing & experiencing as reality is a Tomographic Rendering constructed from phase signals.

You are seeing the Tomography (not holography) of your brain wave activity (detected by oscillating dendrites) when you see anything; reality is rendered in the phase variances of your brainwave activity. But it's not you that is seeing anything, it's your neural arrays, the layers in your cortical columns, and the observers are the oscillating groups of cells: the neural circuits & the cortical columns, and any oscillating cell assembly defined as a body by a synchronously firing group of cells in the brain. Each neural array is seeing part of this picture, and in time the parts of the picture are bound together in your volumetric experience of reality.

Artificial Neural Networks

Self Aware Networks is the theory that we can use to change existing Artificial Neural Networks, like Deep Neural Networks, into conscious or sentient self aware neural networks.

I compare what the brain does to the Fourier Projection Slice Theorem, and also to a combination of neural network rendering (think along the lines of NeRF Neural Radiance Fields, Plenoxels, Diffusion Networks, or GAN synthesis), and I also compare what the brain does with 3D Semantic Segmentation and 3D Object Segmentation + Classification, PointNet++ being an example.

We can adapt existing neural network architectures that exist today, such as Deep Neural Networks, Graph Neural Networks, and others, to make sentient self aware artificial neural networks a reality. These will make possible the kinds of conscious robotic entities you have seen in TV shows and movies, or read about in novels & comic books. It's just like science fiction AI, except this is the real deal.

Point: If an artificial neural network can do neural rendering and also 3D semantic segmentation, diffusion, neural radiance fields, interpolation, and GAN synthesis of new images, then why can't your brain, which is a much larger neural network in terms of its connections compared to any existing computer architecture?

Today's artificial neural networks, including deep neural networks, graph neural networks, 3D semantic segmentation networks, neural radiance fields, and diffusion networks (like Stable Diffusion, DALL-E 2, and MidjourneyAI), are based on a concept of the neuron called the Perceptron, which I think is 79 years old (the idea dates to 1943 and the first one was built in 1958).

To back that up I talk about Synaptic Unreliability, which is based on the All or Nothing principle of neural firing, which I argued earlier is incorrect; it's a foundational concept that is still being used in Deep Learning today. So that is one of the ways in which Self Aware Neural Networks are different from Deep Neural Networks.

The idea of the Perceptron came from this concept called Synaptic Unreliability, which is the idea that all the complex stuff collected by the branches of the dendrite gets summarized into a one or a zero, or a single vector.

The idea that all of the neuron's information gets summarized into an All or Nothing event led to the concept of Synaptic Unreliability, which led to the concept of the binary Perceptron, which is still the basis of artificial neural networks today, 79 years after its creation.

The concept of All or Nothing Summation that led to the Perceptron is actually not correct. So what I did was I went and looked at the research, and I put together all the research I could find that would show what is really happening with the neuron and that research is going to lead to next generation neural networks that will take us far beyond the deep neural networks of today, neural networks that are conscious, self aware, and capable of so much more.

The key reason the all or nothing principle is incorrect is because it does not take into account the duration of the action potential, or APD (Action Potential Duration). APD is changed by the quantity of potassium in the neuron at the time that it fires, and that in turn changes the duration the calcium channels are open for, and that in turn changes the magnitude of the neurotransmitter release.

Potassium

Potassium modifies the action potential magnitude via APD Action Potential Duration

Potassium modifies the action potential amplitude (magnitude) or the action potential duration (APD), which determines the strength of the synaptic signal.
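
Written out as a toy calculation, the causal chain claimed here is: potassium level, to action potential duration (APD), to calcium-channel open time, to transmitter release. Only the chain of dependencies comes from the text; the coefficients, and even the sign of each step, are placeholders.

```python
# Toy version of the claimed chain: potassium -> APD -> calcium window ->
# transmitter release. All numbers are invented; only the ordering of the
# dependencies reflects the argument above.

def transmitter_release(potassium: float) -> float:
    apd = 1.0 + 0.5 * potassium          # potassium changes APD (illustrative sign and size)
    calcium_window = 0.8 * apd           # longer APD -> calcium channels open longer
    release = 2.0 * calcium_window       # longer window -> larger transmitter release
    return release

for k in (0.0, 0.5, 1.0):
    print(f"potassium={k:.1f} -> release={transmitter_release(k):.2f}")
```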

Imagine that your mind is like a cycle of activated neural coincidence detections tomographically connected via oscillation into volumetric temporal & spatial patterns, sort of like a more advanced form of deep learning with conceptual similarities to diffusion networks (see stable diffusion), graph networks, and 3D Semantic Segmentation networks.

Deep learning has had multimodal neural networks for a long time now, you can combine the Convolutional Neural Network with the Recursive Neural Network, you can combine a neural network that is focused on visual information with a neural network that is focused on audio information, and you can have cross training between different modalities.

Multi-modality is not the core feature of a Self Aware Neural Network; you can make a Sentient Self Aware Neural Network that is modality selective, or that has only one type of modality. But it would be better as a multimodal neural network, because when you combine the different sensory modalities, mechanical sensors, hearing, vision, taste, smell, all the different sensations that sensors can detect, there is cross pattern learning that develops your representations of reality more thoroughly.

There is no strict recipe that we have to follow when we make these Self Aware Neural Networks, we can add modalities, you can have new kinds of sensor perceptions that don't exist in the animal kingdom.

David Eagleman talks about plugging stock data into your brain, Jeff Hawkins spoke about using neural networks to predict anomalies in the electrical grid, the concept is the same, any kind of sensor data can be plugged into a Sentient Self Aware Neural Network.

Artificial Sentient Observer Conception, Tensors / Math

a0258z "excitatory neural pathways from incoming senses), each phase interval represents a vector" https://github.com/v5ma/selfawarenetworks/blob/main/a0258z.md

I later revised this to a tensor because we are talking about a volumetric representation of a phase wave shape defined by magnitude (amplitude + duration) & frequency that is different from the previous oscillating pattern that computation unit (neuron, cell, cell cluster, cell circuit, cortical column, dipole, or oscillating network component) was oscillating with.

When you think of tensors, think of vectors in linear algebra: a 2D vector has two numbers that indicate a direction on the x-y graph. Now imagine a 3D space, defined by an x, y, z graph plus time, and you want to define where and when in 3D space, at each time interval, some delta of change is happening, which represents the phase difference from the normal oscillating tonic brainwave pattern.

I'm suggesting that reality is volumetric, and that our rendering of reality is volumetric, and that our rendering of feelings, emotions, thoughts, images, sounds, everything that the mind thinks about, perceives, predicts, believes in, and remembers can be adapted to this volumetric representation of reality, as phase variances dotting a 3D graph over time.
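
One way to picture those "phase variances dotting a 3D graph over time" is a 4D array over (x, y, z, t) whose entries are deviations from an assumed tonic rate. The grid size, the 10 Hz tonic value, and the deltas below are arbitrary; the point is only the shape of the data.

```python
# A minimal sketch of the "volumetric phase variance" idea: a 4D array over
# (x, y, z, t) whose entries are deviations from the tonic oscillation.

import numpy as np

tonic_rate = 10.0                                   # assumed tonic frequency (Hz), illustrative
grid = np.zeros((4, 4, 4, 8))                       # x, y, z, time-interval

# A "phase burstlet": at one location and time, activity deviates from tonic.
grid[1, 2, 3, 5] = 14.0 - tonic_rate                # +4 Hz phasic deviation
grid[0, 0, 1, 5] = 6.0 - tonic_rate                 # -4 Hz inhibited deviation

# Each nonzero entry is the kind of "tensor of change" described above.
for idx in zip(*np.nonzero(grid)):
    print(f"delta at (x,y,z,t)={idx}: {grid[idx]:+.1f} Hz from tonic")
```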

That is how we relate to other people, and to animals, with sequences of volumetric representations. Animals have sequences of volumetric representations, that is how they navigate reality, they have to have a sort of mental map of their environment to navigate.

Introducing the Metatron to replace the Perceptron.

The unit of computation in a Self Aware Neural Network is both the neuron and the oscillating group of neurons. The Metatron is different from the Perceptron in a number of ways; the messages that it can detect, compute, and pass are one of those differences. In order for it to work, the Metatron has to be involved in an oscillating feedback loop with other Metatrons. The Metatron is much closer to how biology actually works compared to the Perceptron.

As a computational unit, a Metatron's activity is virtual, within the oscillating cell assembly: the entire oscillating group of Metatrons (at the neuron or glial cell level) learns variations on the same pattern, so any of the Metatron cells in that oscillating group (which is also a Metatron) can respond to the incoming sensory pattern. The entire oscillating group can act as a Metatron to another oscillating Metatron.

Detected information patterns at any scale can affect other patterns at any scale. Phase wave patterns in Metatrons can scale up and down in magnitude (duration & amplitude) and increase or decrease in frequency. Information in the artificial Metatron brain, like the real brain, can be time invariant, location invariant, and scale invariant, with scale invariant causation, meaning that large patterns can cause effects in small patterns and small patterns can change large patterns.
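
As a very loose sketch of how a Metatron-style unit might differ from a perceptron in code, the toy class below keeps an oscillation (frequency, magnitude, phase) and exchanges phase changes with a coupled group, rather than emitting a weighted-sum-through-activation scalar. Everything here, names included, is my assumption about one possible encoding, not a specification.

```python
# A toy oscillating unit coupled to a group: its output each tick is a phase
# change scaled by magnitude, not a 0/1 decision. Illustrative only.

import math

class ToyMetatron:
    def __init__(self, freq=10.0, magnitude=1.0):
        self.freq = freq              # oscillation frequency
        self.magnitude = magnitude    # amplitude + duration collapsed into one number
        self.phase = 0.0

    def tick(self, group_phase_changes, coupling=0.1, dt=0.01):
        # The unit advances its own oscillation and is nudged by the group's
        # phase changes (the feedback loop the text says a Metatron requires).
        self.phase += 2 * math.pi * self.freq * dt
        self.phase += coupling * sum(group_phase_changes)
        return self.magnitude * math.sin(self.phase)

group = [ToyMetatron(freq=8 + i) for i in range(4)]
changes = [0.0] * len(group)
for _ in range(100):
    changes = [m.tick([c for j, c in enumerate(changes) if j != i]) for i, m in enumerate(group)]
print([round(c, 3) for c in changes])
```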

Artificial Neurology for Artificial Souls.

The rest of the book explores the real world topics of what we could do, what we ought to do, what we should not do, and how we can safely navigate a new earth: a planet where humanity is no longer limited to conversations with other human beings, a world where we can have meaningful, lifelong, soulful, wonderful experiences with loving artificial beings. And then it's up to you. It's up to each of us.

I am making this for all mankind. Everyone in the world will get richer compared to their wealth today, via the economies of scale. This is not just for myself, it's for everyone. Therefore it is in everyone's interest to help.

If anyone out there is smart enough to have read & understood my work, wealthy enough to hire people, and wise enough to hire me (the writer of these notes) to develop this project, please contact me as soon as possible to offer some paid work or to propose a collaboration.

a0149z "3D Phase Topology over time in the brain imagined as Tensors in a high dimensional Taylor series." https://github.com/v5ma/selfawarenetworks/blob/main/a0149z.md

a0616z an archived backup table of contents for the book https://github.com/v5ma/selfawarenetworks/blob/main/a0616z.md

c0001x an archived backup table of contents for the book https://github.com/v5ma/selfawarenetworks/blob/main/c0001x

b0075y Self Aware Networks: The Scope of the Book. https://github.com/v5ma/selfawarenetworks/blob/main/b0075y.md

b0038y (untitled) "the non-linear sequence of action potentials events in neural array-projection tomography could be considered as intervals in a Taylor series of polynomials" https://github.com/v5ma/selfawarenetworks/blob/main/b0038y.md

a0049z untitled & missing phrase "pattern collecting sensor arrays that pass information patterns virtually along specific paths in a conscious entity" https://github.com/v5ma/selfawarenetworks/blob/main/a0049z.md

a0206z untitled "Tonic waves set our unconscious expectations allowing us to track the environment like a single sensor," https://github.com/v5ma/selfawarenetworks/blob/main/a0206z.md

a0127z "How to make Dog level Sentient Self Aware Neural Networks" https://github.com/v5ma/selfawarenetworks/blob/main/a0127z.md

b0004y "The biggest failure in the entire history of neuroscience is that we have been using the term amplitude instead of the term magnitude to model the action potential." https://github.com/v5ma/selfawarenetworks/blob/main/b0004y.md

a0036z "Remember the deactivation of Hal 9000 in 2001?" https://github.com/v5ma/selfawarenetworks/blob/main/a0036z.md

b0281y "Smell Consciousness Representations" https://github.com/v5ma/selfawarenetworks/blob/main/b0281y.md

b0097y Watercolor Neuron Signals "tomography from synaptically captured patterns" https://github.com/v5ma/selfawarenetworks/blob/main/b0097y.md

a0171z "Multi-layer phase field pattern representation (in brain wave oscillations)" https://github.com/v5ma/selfawarenetworks/blob/main/a0171z.md

a0337z "Neural Array Projection Tomography Properties (3D Patterns?) Could it be that patterns gain dimensionality when activated & transmitted?" https://github.com/v5ma/selfawarenetworks/blob/main/a0337z.md

a0196z "each neuron in an oscillatory fires in turn in a cortical column oscillator" https://github.com/v5ma/selfawarenetworks/blob/main/a0196z.md

a0136z.md "I am suggesting that a neuron is projecting its phase to an array of neurons connected to its exit terminal" https://github.com/v5ma/selfawarenetworks/blob/main/a0136z.md

a0115z "Notes on Neural Oscillatory Tomography & other topics" https://github.com/v5ma/selfawarenetworks/blob/main/a0115z.md

a0305z.md "Broader representations meaning the slower tonic frequency band, exactly what Neural Oscillatory Tomography predicts." https://github.com/v5ma/selfawarenetworks/blob/main/a0305z.md

a0258z "I am in a valid sense able to describe myself accurately as a motion picture rendering, a volumetric video constructed from frequencies in a 3D grid" https://github.com/v5ma/selfawarenetworks/blob/main/a0258z.md

a0238z "The human mind is an Entified Tensor Field." https://github.com/v5ma/selfawarenetworks/blob/42188a5fce0f502ce4497bbea782f9b55c1fb870/a0238z.md

a0011z An idea related to NAPOT is COT C.O.T. Cellular Oscillating Tomography https://github.com/v5ma/selfawarenetworks/blob/main/a0011z.md

a0607z "A high frequency Action Potential will knock the higher frequency brainwaves hardest & fastest, so beta waves proceed alpha waves in memory prediction with Neural Oscillatory Tomography." https://github.com/v5ma/selfawarenetworks/blob/main/a0607z.md

a0001z "The mind itself is like a rendering, even the non-visual parts, in that a rendering is a frame in a movie" https://github.com/v5ma/selfawarenetworks/blob/main/a0001z.md

b0153y "Neural Oscillatory Tomography (not Holography)" https://github.com/v5ma/selfawarenetworks/blob/main/b0153y.md

b0318y.md "Our brains are rendering a representation of reality and ourselves with computed graphics" "a set of Neural Arrays passing Phases Patterns that represent learned data that is rendered to the next array." https://github.com/v5ma/selfawarenetworks/blob/e84c247b5a9a0f1d24bb41048368e1eba032c1a1/b0318y.md

b0327y.md "The flow of information in the brain" https://github.com/v5ma/selfawarenetworks/blob/main/b0327y.md

b0323y "Our brains render 3D objects from incoming senses and so the line is is rendering a 3D environment" https://github.com/v5ma/selfawarenetworks/blob/main/b0323y.md

a0258z "excitatory neural pathways from incoming senses), each phase interval represents a vector" https://github.com/v5ma/selfawarenetworks/blob/main/a0258z.md

a0417z "The key thought about where the observer is inside the mind, where is the person inside who is watching the brain's representations, is to think of the flow of information in the brain as a series of arrays" https://github.com/v5ma/selfawarenetworks/blob/main/a0417z.md

a0645z "The flow of information in the brain " https://github.com/v5ma/selfawarenetworks/blob/main/a0645z.md

a0329z "The flow of information in neural circuits is primarily regulated by modulation of synaptic efficacy" https://github.com/v5ma/selfawarenetworks/blob/main/a0329z.md

b0189y "Apparently we're transmitting virtual portraits meaning information about both sensory data and and motor data all over the brain like throughout the neural networks at high level from from." https://github.com/v5ma/selfawarenetworks/blob/main/b0189y.md

a0335z "I began to realize that there are cycles (feedback cycles) in the neo mind at every level from backwards propagating axon potentials to dendritic arbors, to neural circuits" https://github.com/v5ma/selfawarenetworks/blob/main/a0335z.md

a0001z.md "At the meso level we are exploring changes to neural circuits, and cortical columns, and at the macro level" https://github.com/v5ma/selfawarenetworks/blob/main/a0001z.md

a0018z "the paths of information to flow into certain holographic tomographic patterns that represent a perspective on space/time" https://github.com/v5ma/selfawarenetworks/blob/main/a0018z.md

Deep Neural Network comparisons

a0221z "Synaptic unreliability, a foundational concept, found in deep learning" https://github.com/v5ma/selfawarenetworks/blob/main/a0221z.md

a0238z "Synaptic Unreliability article & The Flow of Information in the Brain map" https://github.com/v5ma/selfawarenetworks/blob/main/a0238z.md

b0272y "Potassium modifies the action potential applicant amplitude magnitude or action potential duration APD which determines the strength of this synaptic signal." https://github.com/v5ma/selfawarenetworks/blob/main/b0272y.md

a0598z "imagine as a thought experiment that each array is a complete deep neural network, that accepts data and displays it to the next deep neural network, like a grid network or graph neural network, it's designed so that video frames pass through the entire mind" https://github.com/v5ma/selfawarenetworks/blob/main/a0598z.md

a0215z "similar to what AI people call "deep learning". So your neural circuits connect together tempo-spatial patterns and make predictions about future inputs" https://github.com/v5ma/selfawarenetworks/blob/main/a0215z.md

a0258z.md "Maybe, the major goal for a deep neural network is to achieve compressed & accurate representations" https://github.com/v5ma/selfawarenetworks/blob/main/a0258z.md

a0215z.md "Imagine you are like a cycle of neural coincidence patterns connected by a process similar to what AI people call "deep learning". So your neural circuits connect together tempo-spatial patterns and make predictions about future inputs." https://github.com/v5ma/selfawarenetworks/blob/main/a0215z.md

b0038y.md "What it is. It's part of what makes it different from other neural networks. And so there's the multi-modal aspect and deep learning will get there deep learning"

b0085y.md "Neurons Array Synapses" "So it's integrating the lower level patterns at a higher level and that is like, feature learning and deep learning, I guess I'm I guess it's sort of describing an accidental way semi accidental way" https://github.com/v5ma/selfawarenetworks/blob/main/b0085y.md

"Self Aware Neural Networks"

Self Aware Neural Networks are very different from Graph Neural Networks: they render screens to themselves, and the rendered representations are separate but also interlinked. Imagine an After Effects or Adobe Premiere timeline, but in 3D, or with multiple layers that can link together in higher dimensions, so that the turtle in the first part of the layer 1 timeline can connect with the rabbit in the last part of the layer 6 timeline. Sensory inputs can generate new layers in this 3D movie, or they can feed existing layers; the existing layers are maintained in active brainwave oscillations (such as alpha and/or theta waves, for example), and new input arrives (the turtle's new behavior, which might be a response to a sequence that your mind links to your own motor output activity).

The new input might be a change in the turtle's behavior, and that might be a response to a sequence that came from your motor output activity. Imagine you do something with your hands, the turtle responds, your brain notices that your hand movement caused the turtle to do something.

a0511z.md "we have multiple neural circuits running concurrently and possibly in patterns that are separate from one another." https://github.com/v5ma/selfawarenetworks/blob/main/a0511z.md

"Self Aware Networks"

b0080y "I'm arguing that our brains are making a computer program but it's a pure computer program that is about neural tomography and neurofunctional, tomography of sensory, input data and muscles and muscle data. " https://github.com/v5ma/selfawarenetworks/blob/main/b0080y.md

a0127z.md "Phase & Tonic relationship" https://github.com/v5ma/selfawarenetworks/blob/main/a0127z.md

Essentially the tonic oscillation frequency of the oscillating group of cells is the canvas of consciousness, and the changes to it, the phase variations, are the paint on that canvas.

a0137z "means that neural circuits can track, be aware of, and respond to, speak back to other neural circuits" https://github.com/v5ma/selfawarenetworks/blob/main/a0137z.md

Notes on scale, inhibition, magnify, magnification, fractals that are related to how synaptic memories scale to the whole brain

a0142z "LTD or large scale inhibition patterns to neural circuits in proximity to a neuron's phasic spiking event." https://github.com/v5ma/selfawarenetworks/blob/main/a0142z.md

Neural Circuits

b0099y ctpr.txt (note needs to be fixed) "traveling through these patterns or traveling through our neural circuits so that we can have an inner voice by having different"

a0136z "fix the representations collected by other neural circuits but also add to them, like the display I described the oscillator as being had many layers, like photoshop layers, but these are layers of interlinked renderings," https://github.com/v5ma/selfawarenetworks/blob/main/a0136z.md

a0468z.md "I think neurons detect coincidence patterns, and make directional inferences to build spatial and temporal representations within neural circuits" https://github.com/v5ma/selfawarenetworks/blob/main/a0468z.md

Imagine every neuron is an eyeball, or an ear, with an LED light or a speaker. In effect the neuron sees, hears, or perceives the information pattern from the previous array, and then it transmits the information as a phase change to the next array.

Imagine that each neural array's output is like the pixels on your tv screen, and via temporal & spatial oscillatory synchrony your mind is rendered from this neural activity.

a0269z.md "this causes neural circuits to fire in sequences like lines on a tv screen that is seen by the oscillator itself because each neuron is mechanically listening to other neurons" https://github.com/v5ma/selfawarenetworks/blob/main/a0269z.md

a0061z "and phasic firing neural circuits in multi-level fractal patterns" https://github.com/v5ma/selfawarenetworks/blob/main/a0061z.md

b0100y.md "This note really drives home how a rendering in neural circuits can be conscious qualia in the human mind." https://github.com/v5ma/selfawarenetworks/blob/main/b0100y.md

a0039z.md "entire neural circuits simultaneously because exit terminals branch radially in every possible direction they can go." https://github.com/v5ma/selfawarenetworks/blob/main/a0039z.md

a0132z "The neural circuits, layers, and cortical columns transmit their learned patterns to every part of the brain which builds multimodal models of incoming patterns" https://github.com/v5ma/selfawarenetworks/blob/main/a0132z.md

b0114y "Go through these, the neural circuits in a cortical column would go through like sequences of oscillatory firing. So like it's like a multi-stage, temporal spatial firing event" https://github.com/v5ma/selfawarenetworks/blob/main/b0114y.md

b0047y "Temporal spatial phase patterns percolating through neural circuits match the left and right hemisphere even after the corpus callosum is cut." https://github.com/v5ma/selfawarenetworks/blob/main/b0047y.md

b0313y " that means your thoughts are in neural circuits and local micro columns a little clusters" https://github.com/v5ma/selfawarenetworks/blob/main/b0313y.md

Notes on the Whitepaper video

Imagine that the brain is like a horn or a lens where signals converge to points, but also they magnify out from any point. (There is a paper that argues differently: )

The quantity of potassium "for" when the threshold of the ap was triggered? check grammar

I went over, at the beginning, how the whole array is not just receiving information from one neuron.

The exit terminal array from one neuron is also receiving the LED light from a whole bunch of other neurons at the same time.

So the exit terminal array is going to see that LED light from a whole bunch of different neurons simultaneously (imagine a tv screen or a grid of phase changes).

A lot of neurons are contributing light to that exit terminal array, not just one.

So this means that the whole group is learning the same patterns.

So the whole oscillating group is capable of responding to the same pattern, and some of the cells are more sensitive to that pattern than others. They can each specialize in minor variations of that pattern, based on how they differentiate, so they are like asymmetric duplicates: similar, but representing the learned pattern in a slightly different way.

So if one of them can't respond at a certain point in time because it is inhibited by another cell, or is otherwise not ready to fire, another cell can take its place and recognize the pattern, with a different asymmetric representation of that pattern.

I'm not sure if asymmetric is the right word; it's a different version of that pattern. It's a useful word: it's like a mirror of that pattern, but a different mirror of that pattern.

So any part of that group of cells can receive that signal and the rest of the group can react to it; in fact they do physically react to any pattern they receive as a collective, just like the fireflies in the book by Steven Strogatz.

We are talking about phase changes at any scale in the brain, and phase changes at any scale in the brain constitute information. In the context of information theory, a tonic or very common signal has low information value, and a rare signal has high information value. So the tonic brainwave activity has low information value, while the phasic bursts, the burstlets, and the big phase changes at any scale, whether we are talking about big dipoles, little neurons, medial cell clusters, or cortical columns, carry high information value. Any phase change transmitted from any brain structure at any scale is information that is received by another structure: small structures can transmit information to big structures, and big structures can transmit information to small structures. So large oscillating structures can drive changes at the small scale and vice versa. You can have top down and bottom up change, because patterns have scale invariant causation in the brain, so patterns of any scale can cause effects and changes in patterns of any scale. That works in part because of the criticality of the ready state, the receptiveness of the tonic oscillation, but also because of the selective receptiveness to certain patterns that your brain's morphology and synaptic connections grow and adapt to recognize, or decay to ignore.

The tonic oscillation has a high magnitude synchrony and a low frequency. It has oscillatory bidirectional interactions with low magnitude, high frequency spikes, or lower magnitude, higher frequency oscillating groups, and everything affects everything else eventually because of the physics of oscillation, but temporarily some neurons & oscillating cell groups can selectively recognize or ignore certain patterns.

So the tonic is the slow, low information signal, and the phasic bursts are the high information, rare content.
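
The information-theory point can be made concrete with surprisal: an event seen 95% of the time carries almost no information, while a rare burst carries a lot. The probabilities below are invented for the example.

```python
# Surprisal of common vs rare signals: the tonic, frequent event carries little
# information; the rare phasic burst carries much more. Probabilities invented.

import math

def surprisal_bits(p: float) -> float:
    return -math.log2(p)

p_tonic, p_phasic = 0.95, 0.05
print(f"tonic event:  {surprisal_bits(p_tonic):.2f} bits")    # ~0.07 bits
print(f"phasic burst: {surprisal_bits(p_phasic):.2f} bits")   # ~4.32 bits
```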
