The simplest explanation is exponentially more important.
Suppose I give you a pattern, and ask you to explain what is going on:

1, 2, 4, 8, 16
Several explanations might come to mind:
- "The powers of two,"
- "Moser's circle problem,"
- $\frac{x^4}{24}-\frac{x^3}{12}+\frac{11x^2}{24}+\frac{7x}{12}+1$,
- "The counting numbers in an alien script,"
- "Fine-structure constants."
Some of these explanations are better than others¹, but they could all be the "correct" one. Rather than picking one underlying truth, we should assign a weight to each explanation, with the ones more likely to produce the pattern we see getting a heavier weight.
Now, what exactly is meant by the word "explanation"? Between humans, our explanations are usually verbal signals or written symbols, with a brain to interpret the meaning. If we want more precision, we can program an explanation into a computer, e.g. `fn pattern(n) { 2^n }`. If we are training a neural network, an explanation describes what region of weight-space produces the pattern we're looking for, with a few error-correction bits since the neural network is imperfect. See, for example, the paper "ARC-AGI Without Pretraining" (Liao & Gu).
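For concreteness, here is a minimal, runnable Rust version of that one-liner; the name `pattern`, the signature, and the check in `main` are illustrative choices of mine rather than anything specified above.

```rust
// The "powers of two" explanation, written as a program an interpreter can run.
fn pattern(n: u32) -> u64 {
    2u64.pow(n) // 1, 2, 4, 8, 16, ...
}

fn main() {
    // The explanation "produces the pattern" if it reproduces every observed term.
    let observed = [1u64, 2, 4, 8, 16];
    assert!(observed
        .iter()
        .enumerate()
        .all(|(n, &term)| pattern(n as u32) == term));

    println!("{:?}", (0..6u32).map(pattern).collect::<Vec<u64>>()); // [1, 2, 4, 8, 16, 32]
}
```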
Let's take the view that explanations are simply strings of bits, and our interpreter does the rest of the work to turn them into words, programs, or neural networks. This means there are exactly $2^n$ explanations that are $n$ bits long.
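One natural way to attach the weights from before to these bit strings, an assumption of mine in the spirit of minimum description length rather than something stated above, is to give an explanation of length $\ell$ bits a weight proportional to $2^{-\ell}$:

$$
w(e) \;\propto\; 2^{-\ell(e)},
\qquad
\frac{w\big(\ell\text{-bit explanation}\big)}{w\big((\ell+k)\text{-bit explanation}\big)} \;=\; 2^{k}.
$$

Every extra bit halves the weight, which is the sense in which the simplest explanation is exponentially more important.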
What you can count, you can measure.
Suppose we are training a neural network, and we want to count how many explanations it has learned. Empirically, we know the loss comes from all the missing explanations.
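As a rough formalization of my own, under the $2^{-\ell}$ weighting above, this says the loss is approximately the total weight of the explanations the network has not yet picked up:

$$
\mathcal{L} \;\approx\; \sum_{e\ \text{not yet learned}} w(e).
$$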
However, wouldn't it be more useful to go the other way? To estimate the loss, at the beginning of a training run, by counting how many concepts we expect our neural network to learn? That is our goal for today.
If we assume our optimizers are perfect, we should be able to use every bit of training data, and the proportion of the time a neural net has learned any particular concept will be set by how many bits that concept takes to write down. We also attach a bookkeeping variable whose exponent keeps track of how many times the network has learned a concept; to track multiple concepts, we multiply their partition functions together.
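As a sketch of that bookkeeping, under my own conventions: say concept $c$ takes $k_c$ bits to write down, each time it is learned contributes a factor $2^{-k_c}$, and the exponent of a formal variable $x$ counts how many times it has been learned. Then

$$
Z_c(x) \;=\; \sum_{m\ \text{allowed}} \big(x\, 2^{-k_c}\big)^{m},
\qquad
Z(x) \;=\; \prod_{c} Z_c(x),
$$

where which multiplicities $m$ are allowed is exactly what the learning dynamics below will decide.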
It's actually more useful to look at the logarithm; that way, we can add density functions instead of multiplying partition functions².
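In the notation of the sketch above, that is simply

$$
\log Z(x) \;=\; \sum_{c} \log Z_c(x),
$$

so each concept contributes its own additive density term.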
Now, not every model learns by simply memorizing the training distribution. We'll look at three kinds of learning dynamics:
- $\mathbb{Z}_1$—The network memorizes a concept, and continues to overfit on that concept. This is your typical training run, such as with classifying MNIST digits.
- $\mathbb{Z}_2$—The network can only learn a concept once. Equivalently, we can pretend that the network alternates between learning and forgetting a concept. This is for extremely small models, or grokking in larger training runs (see the sketch just after this list).
- $\mathbb{Z}_6$—One network is trying to learn and imitate a concept, while another network is trying to discriminate what is real and what is an imitation. Any time you add adversarial loss—such as with GANs or the information bottleneck—you'll get this learning dynamic.
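Under the bookkeeping sketched earlier, the first two dynamics pin down the per-concept factor directly; these are standard generating-function choices on my part rather than quoted formulas, and the adversarial case is what the group projection below is for:

$$
\mathbb{Z}_1:\ \ Z_c(x)=\sum_{m\ge 0}\big(x\,2^{-k_c}\big)^{m}=\frac{1}{1-x\,2^{-k_c}},
\qquad\qquad
\mathbb{Z}_2:\ \ Z_c(x)=1+x\,2^{-k_c}.
$$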
In general, a learning dynamic can be described by some group, since the counts it allows can be picked out by averaging over that group³.
To capture the entire group of dynamics, we have to project onto the fundamental representation of our group. Finally, to get back the partition function, we exponentiate.
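To make the roots-of-unity idea from footnote 3 concrete: for any series $f(x)=\sum_{m\ge 0} a_m x^m$, averaging over the $n$-th roots of unity keeps exactly the exponents divisible by $n$,

$$
\frac{1}{n}\sum_{j=0}^{n-1} f\!\big(\omega^{j}x\big) \;=\; \sum_{n\,\mid\,m} a_m x^{m},
\qquad \omega = e^{2\pi i/n},
$$

which is the kind of projection that can impose a $\mathbb{Z}_n$-style constraint on the multiplicities before exponentiating back up to a partition function.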
For the three groups in question, the partition functions can be worked out in this way.
To recover the average number of times a concept has been learned, note that taking a derivative brings down the exponents keeping track of this, e.g. $x\,\frac{d}{dx}x^m = m\,x^m$, so the expected number of times a concept has been learned can be read off from the log-derivative of the partition function.
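In the notation of the sketch, this is the usual log-derivative trick:

$$
\langle m_c \rangle \;=\; \frac{x\,\partial_x Z_c(x)}{Z_c(x)} \;=\; x\,\frac{\partial}{\partial x}\log Z_c(x),
$$

which, for the $\mathbb{Z}_1$ and $\mathbb{Z}_2$ factors above, evaluates at $x=1$ to $\frac{2^{-k_c}}{1-2^{-k_c}}$ and $\frac{2^{-k_c}}{1+2^{-k_c}}$ respectively.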
Putting it all together, we get an estimate of the loss over the course of training for each of the three learning dynamics.
Footnotes
1. As in, need very few error-correcting bits after interpretation. The explanation "fine-structure constants" needs many error-correcting bits such as, "your brain spasmed and misinterpreted the text," while "Moser's circle problem" produces the pattern without any need for error correction. ↩
2. This is known as the plethystic logarithm. ↩
3. This is the same idea as roots of unity filters. ↩