
Recursive Minds

by Sven Nilsen, 2018

Some friends and family of mine were gathered around the table to tell stories, jokes and riddles.

Being more than normally interested in the logical nature of riddles, we tended to spend way too much time discussing the details of solutions.

So, one clever young person decided enough was enough and came up with this riddle:

Three men stand in a line in a desert.
Each of them faces the same direction.
The person in the back looks at the shoulders of the middle person,
and the middle person looks at the shoulders of the person in front.

The person in the front says: "I can see the shoulders of the person in the back".

How could this be possible?
How could the person in the front see the shoulders of the person in the back?

One solution suggested was that the men were standing on a very tiny planet. The line curved around the sphere such that the person in the front had the backmost person in front of him.

Another suggested solution was that the men bent forward, looking at the person behind them through their legs. This required that the men were allowed to move their upper bodies without turning around.

After a lot of discussion, the story teller revealed the answer: The person in the front lied.

This caused even more heated discussion, because the people around the table felt they had been misled.

If you say "How could the person in the front see the shoulders of the person in the back?" then you are implying that the person indeed saw the shoulders of the other person.

However, if you only say "How could this be possible?" then you are not implying that the person in the riddle speaks the truth. Lying suddenly becomes a possible solution according to the strange rules of telling riddles.

We take for granted that when somebody tells a riddle, they speak truthfully. Otherwise, it would not be a riddle, but just a joke, which was precisely the intention of the story teller.

What makes this riddle/joke interesting is that it points out some deep intuition we have about recursive minds:

  • The person's lie in the riddle is identified with the lie of the story teller
  • It is necessary to reflect on the rules of telling riddles when a riddle breaks expectations
  • A modeled mind is not necessarily executing (it is projected by the interpreter of the riddle)

The Ability to Model Minds is Essential for Reflecting on the Nature of Truth

In philosophy it is common to assign truth values to sentences according to some language. However, this practice tends to mask the fact that in order for sentences to have meaning, they must be interpreted by some sort of mind or physical process.

For example, imagine you had a magical ability to program water by simply speaking to it. By telling water to form a shape, it would do so. The sentences you tell the water would have meaning to the water because they determine the water's behavior. Your own mind does not affect the meaning of the sentences; e.g. you could mutter some words in your sleep and the water would form a shape anyway.

Normally when we speak to water, nothing happens. What we say has no meaning for the water, because there is no behavior to be determined. In other words, the water is not interpreting our sentences.

In order to reflect on the nature of truth, one must not only have the ability to interpret sentences; the way one interprets sentences must also make one capable of imagining minds that interpret sentences. This is where the phenomenon of recursive minds comes from.

Truth is an Ambient Concept of Common Knowledge Among Recursive Minds

If two rational systems gain the ability to model recursive minds, it is believed that they will learn the concept of truth and arrive at similar definitions, because the things that can be said about these systems are the same and therefore identical. Everything that is possible to say about truth might be possible to say upon reflecting on recursive mind modeling, so this provides an anchor for rational agents to infer how they should agree about the nature of truth.

For example, Alice believes Bob is taking a bath. Alice also believes that she believes Bob is taking a bath.

In Naive Zen Logic, this can be written:

"Bob is taking a bath" ? Alice

("Bob is taking a bath" ? Alice) ? Alice

The ? operator means that X is believed by Y, written X ? Y.
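
To make the recursive structure concrete, here is a minimal sketch of how such nested belief statements could be represented as a data structure. This is not an implementation of Naive Zen Logic itself; the names `Expr`, `BelievedBy` and `believed_by` are illustrative assumptions introduced only for this example.

```rust
// A minimal sketch (not part of any official Naive Zen Logic implementation)
// of how nested belief statements could be represented as a data structure.

#[derive(Clone, Debug)]
enum Expr {
    /// A plain sentence, e.g. "Bob is taking a bath".
    Sentence(String),
    /// `X ? Y`: the expression X is believed by the agent Y.
    BelievedBy(Box<Expr>, String),
}

impl Expr {
    /// Wraps an expression in a belief of `agent`, producing `self ? agent`.
    fn believed_by(self, agent: &str) -> Expr {
        Expr::BelievedBy(Box::new(self), agent.to_string())
    }

    /// Renders the expression in the notation used above.
    fn show(&self) -> String {
        match self {
            Expr::Sentence(s) => format!("\"{}\"", s),
            Expr::BelievedBy(inner, agent) => format!("({} ? {})", inner.show(), agent),
        }
    }
}

fn main() {
    let fact = Expr::Sentence("Bob is taking a bath".into());

    // "Bob is taking a bath" ? Alice
    let belief = fact.believed_by("Alice");
    // ("Bob is taking a bath" ? Alice) ? Alice
    let reflection = belief.clone().believed_by("Alice");

    println!("{}", belief.show());
    println!("{}", reflection.show());
}
```

Running this prints the two statements above, with the inner belief nested inside the outer one; beliefs about beliefs can be stacked to any depth.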

So, how can a mind learn the concept of truth? From the sentences themselves!

The concept of truth is not about any particular fact, but about a reflective state of mind. Any fact is a specific instance of truth, while the concept of truth itself is "outside" the realms of facts.

Reflection in this case means: Interpreting Naive Zen Logic the way it is meant to be interpreted (how it is defined and used).

This does not mean that the concept of truth is arbitrary, but rather that the concept is ambient: It is learned through examples using a general capability of modeling minds.

The reason to believe this is grounded in path semantics, which connects the things said about a mathematical object to the identity of the mathematical object, and vice versa. It is not necessary to show what the definition of truth is, only that there exists a way to figure out what it is that is common to all agents.

When Alice reflects on what she believes, the concept of truth is embodied in the statements of what she believes. Naive Zen Logic cannot express the nature of truth directly, but an agent playing with it can learn implicitly how it works. However, this happens at a very deep level of intelligence (one not currently reached by AI technology).

In other words, truth is a kind of projection of a general understanding that the sentences we interpret are indeed interpreted. To learn the concept of truth, it is sufficient that the sentences are interpreted and that the interpreting mind is able to reflect on recursive minds interpreting sentences.

The result is a shared common knowledge among recursive minds.

Artificial General Intelligence Requires the Ability to Model Minds

Without the ability to model minds, there can be no AGI, since otherwise something as essential as "truth" has no meaning. It has meaning to humans, but not relative to the mind representing the AGI:

  • A computer program does not have any intrinsic concept of truth
  • The ability of general computing is weaker than general theorem proving
  • Comprehending the concept of truth is a statement about a reflective ability of modeling recursive minds

I believe there is no other way to ground the concept of truth. At least, it is the only way I know of so far for two agents to converge on the same concept. It seems to be a necessary building block upon which to build higher order concepts.

AGI requires higher order concepts to perform efficiently in the real world, which is not possible without some sort of grounding of the concept of truth. E.g. without this kind of grounding, it would be trivial to trick an AI into believing anything.

A primary obstacle to achieving AGI is the milestone of modeling minds. Since we have not yet reached this milestone at a sufficient level, we justifiably think of the AI technology we have as lacking some sort of "effective intelligence".

This prevents us from calling the current state of the art "true intelligence", since when "true" has no meaning, as in "truth", there can of course be no "true intelligence". The concept might be meaningful to humans, but not to the computer program, which is therefore not an AGI.

Implications for AI Safety Strategies

So far in this post, I have argued that Recursive Minds is a milestone in AGI technology. This gives one way to measure progress on AI besides just improvement on various benchmarks. However, I also believe that Recursive Minds is an easily noticeable tipping point, not just some minor improvement in the overall state of AI technology. This has implications for AI safety strategies.

The reason for this is that, since higher order concepts obtained through reflection require the grounding of the concept of truth, I predict that no significant progress on higher order concepts (relative to super-intelligence) will come before the ability to ground truth using Recursive Minds is achieved.

This does not mean that grounding of truth using Recursive Minds will happen directly, but that the ability to do it will roughly coincide in time with an easily noticeable tipping point of AGI. The kind of AI safety technology that is needed to deal with AI control problems changes in nature before and after this tipping point. Before this point in time, AI control problems will have a character of localized semantics. After this point in time, AI control problems will take on a more globalized semantical character.

For example, the bias of training data is a localized kind of control problem, because it is a grounding problem of semantics relative to some specific data set. By fixing the data set or the algorithms, this problem can be solved (locally).

A control problem of globalized semantical character means that e.g. Machine Learning faces problems that are not related to some specific data set or training environment. This could be e.g. definitions of goals that are interpreted differently across various contexts:

Put this box on the top shelf.

What is the box? What is the top shelf?
It depends on what the speaker is referring to.

This is an example demonstrating what it means to understand a goal at a high level of thinking, not just in a hard-coded way that fits a specific situation.
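
As a concrete illustration (not taken from any existing AI safety framework), here is a minimal sketch in which the same goal sentence resolves to different concrete goals depending on a context. The types `Context` and `Goal` and the function `interpret` are hypothetical names introduced only for this example.

```rust
// A minimal sketch of how the same goal sentence can resolve to different
// concrete goals depending on context. All types and names here are
// illustrative assumptions, not an existing API.

/// What the speaker is currently referring to.
struct Context {
    /// The box the speaker is pointing at or holding.
    referred_box: &'static str,
    /// The shelf that counts as "the top shelf" in this room.
    top_shelf: &'static str,
}

/// A concrete, grounded goal derived from the sentence and a context.
#[derive(Debug)]
struct Goal {
    object: &'static str,
    destination: &'static str,
}

/// Interprets "Put this box on the top shelf" relative to a context.
/// The sentence is fixed; only the context changes the resulting goal.
fn interpret(ctx: &Context) -> Goal {
    Goal { object: ctx.referred_box, destination: ctx.top_shelf }
}

fn main() {
    let kitchen = Context { referred_box: "cardboard box of plates", top_shelf: "top kitchen shelf" };
    let garage = Context { referred_box: "toolbox", top_shelf: "top rack above the workbench" };

    // The same sentence yields two different concrete goals.
    println!("{:?}", interpret(&kitchen));
    println!("{:?}", interpret(&garage));
}
```

The point of the sketch is that the hard part is not filling in the struct, but inferring the right `Context` from the situation in the first place, which is exactly the globalized semantical problem described above.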

Such problems remain unsolved globally for as long as they remain unsolved, but once a technique to solve them exists, it might be relatively easy to fix them globally, e.g. by testing most implementations of similar AI technology against some safety standard.

A problem could be that various safety solutions could depend on the approximate stage of Recursive Minds being reached.

For example, an AGI equipped with the ability to test other AGI implementations for some specific error is not expected to be functional before passing the Recursive Minds tipping point. This is not because such an AGI requires grounding the concept of truth directly, but because the higher order concepts required build on some grounding of truth.

Therefore, it might be necessary to accelerate AI safety research rapidly in the time period shortly after the Recursive Minds stage is reached. To deal with the dependency problem efficiently, one could coordinate research on AI safety by planning in advance what to do in the event of such a tipping point. This could be a way to avoid scenarios like Future-X, where lack of rigorous definitions of AGI leads to a dangerous slippery slope.

Alternative Approaches Are Likely to Fail

It might be possible to hard-code the concept of truth in a computer program, but it would require extensive elaboration to cover the practical use cases of truth.

I find it easier to believe that a program capable of pursuing the things that can be said about truth will be able to invent concepts that are both useful and coherent. A neural network that learns to think about truth might perform better than any hard-coded program.

My argument can be broken down into three parts:

  1. Reflecting on the nature of truth is possible by reflecting on modeling Recursive Minds
  2. Higher order concepts require some sort of ability that "looks like" reflecting on the nature of truth
  3. The AGI ability in 2) is likely to coincide with the ability in 1), due to similar problem complexity

While I do not have a name or concept for this general ability, I would like to point out Recursive Minds as a useful target ability.

It might be possible to work around this issue, but such approaches seem a bit like "we hope it will eventually develop understanding" of things that are relatively easy for humans to comprehend. I believe such approaches simply never reach a sufficient level of intelligence.
