Home
Pi edited this page Jan 28, 2024 · 9 revisions
(copied over from the Discord)
I had a quick go at mapping out an overview of what's relevant/missing in the quest to unlock machine intelligence.
Basically I'm thinking: What properties do bio-brains have that AI is lacking?
Here's the second draft:
# Considerations
What qualities does a neural engine need to have?
🔸 Homogeneity
Neocortical tissue is homogeneous.
We want a basic computation mechanism that's abstracted away from input modality:
image / audio / text / sensory / etc. data goes through some initial modality-specific layers
INTO a generic neural engine
🔸 Local
(bio-networks act locally; backprop requires a global forward-then-backward pass)
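As one concrete illustration of a local rule (not necessarily the one we'd use), here's a minimal sketch of Oja's rule: every synapse updates from only its own pre-synaptic input and post-synaptic output, with no backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 inputs -> 4 output neurons
W = rng.normal(scale=0.1, size=(4, 8))

def local_step(W, x, lr=0.01):
    y = np.tanh(W @ x)
    # Hebbian term outer(y, x), minus a decay that keeps weights bounded.
    # Each entry of W only ever sees its own pre/post activity.
    W = W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W, y

for _ in range(200):
    W, _ = local_step(W, rng.normal(size=8))
```

Compare with backprop, where no weight can update until the full forward pass and a global backward pass have completed.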
🔸 Scale-Free
letter <-> word, word <-> phrase, phrase <-> sentence, etc.
Need a mechanism that handles arbitrary levels of abstraction
It's ugly to hard-code a fixed "N layers" hierarchy (won't scale)
🔸 Energy efficiency / sparsity
Transformers are huge and inefficient;
bio-brains run at ~20 watts, and only a small fraction of the brain's network fires for any given computation.
Evolutionary forces have pushed us towards efficient computation: fastest reaction wins. (Genetic algos + multimodal AI gym setups???)
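One simple way to mimic "only a small fraction of the network fires" is top-k activation sparsity — a sketch, not a claim about how the brain does it:

```python
import numpy as np

def topk_sparse(h, k):
    """Keep only the k largest-magnitude activations; zero the rest."""
    idx = np.argsort(np.abs(h))[-k:]   # indices of the k most active units
    out = np.zeros_like(h)
    out[idx] = h[idx]
    return out

h = np.array([0.1, -2.0, 0.3, 1.5, -0.2, 0.05])
s = topk_sparse(h, 2)  # only -2.0 and 1.5 survive; the rest are zeroed
```

Downstream layers then only need to touch the surviving units, which is where the compute/energy savings come from.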
🔸 Agency
At what level does agency sit? Does it just sit on top of the LLM, or can we see it as an emergent property of neuronal behaviour (competition/inhibition)?
- Recursion / looping
Our brains loop, and we learn to loop: e.g. Socratic dialogue, iterative problem-solving.
Dynamical systems / SSMs / Mamba / ...
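The common core of the SSM family can be sketched in a few lines — a linear state recurrence with an explicit loop over time (Mamba etc. make this fast and input-dependent, which this toy version does not attempt):

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """x_t = A x_{t-1} + B u_t ;  y_t = C x_t  (scalar input per step)."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:               # the recurrence IS the loop
        x = A @ x + B * u_t
        ys.append(float(C @ x))
    return np.array(ys)

A = np.array([[0.9, 0.0], [0.1, 0.8]])  # toy state transition
B = np.array([1.0, 0.0])
C = np.array([0.0, 1.0])
y = ssm_scan(A, B, C, [1.0, 0.0, 0.0])  # impulse response over 3 steps
```

The persistent state `x` carries information forward indefinitely, unlike a fixed-length attention window.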
🔸 Learning
🔹 One-shot learning
We don't need to get run over at 1000 red lights to learn to wait for the green before crossing the road
- Our bio-system releases chemicals (e.g. dopamine) to strongly reinforce certain patterns
🔹 Solidify early learning
Observe how our auditory cortex learns: low-level structure is baked in before puberty.
e.g. dynamically add neurons; a neuron's learning rate decreases over time.
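A hypothetical sketch of both ideas together — per-neuron learning rates that decay with age, plus dynamically added neurons that start fully plastic (`GrowingLayer`, `base`, and `decay` are made-up names/parameters):

```python
import numpy as np

class GrowingLayer:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.age = np.zeros(n_out)               # updates seen per neuron

    def lr(self, base=0.1, decay=0.05):
        # Older neuron -> smaller learning rate -> early learning solidifies.
        return base / (1.0 + decay * self.age)

    def grow(self, rng, n_new=1):
        # Add fresh, fully plastic neurons (age 0).
        n_in = self.W.shape[1]
        self.W = np.vstack([self.W, rng.normal(scale=0.1, size=(n_new, n_in))])
        self.age = np.concatenate([self.age, np.zeros(n_new)])

rng = np.random.default_rng(0)
layer = GrowingLayer(8, 4, rng)
layer.age += 100          # pretend the original neurons are old
layer.grow(rng)           # the new 5th neuron learns ~6x faster than the old ones
```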
🔹 Intelligent teacher-student training (curriculum learning)
It's ludicrous to train an LLM on a million kids' stories
Keep feeding the student challenges it can solve, say, 70% of the time
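The ~70% rule could be implemented as simply as this (a hypothetical sketch; `pick_difficulty` and the target value are assumptions, not an established recipe):

```python
def pick_difficulty(success_rates, target=0.7):
    """success_rates: dict of difficulty level -> recent success rate.
    Serve tasks from the level closest to the target success rate."""
    return min(success_rates, key=lambda d: abs(success_rates[d] - target))

rates = {1: 0.95, 2: 0.72, 3: 0.40}
level = pick_difficulty(rates)   # level 2: 72% is closest to the sweet spot
```

Too-easy levels (near 100%) teach nothing; too-hard ones (near 0%) give no learning signal, so the teacher keeps the student in between.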
🔹 Continual learning
What if an LLM could repeatedly contemplate, think/distill its thoughts, and fine-tune on these new conclusions?
What if the engine could extract targeted information from its environment (e.g. read a new arXiv paper / blogpost)?
🔹 [TODO] Wake/Sleep / Dreaming
🔸 [TODO] Maslow's hierarchy of drives
🔸 [TODO] Evolutionary learning (neuroevolution?)