
Future X - The Path Toward Uncertainty About Artificial Super-Intelligence

by Sven Nilsen, 2018

Once AI technology works, we have a tendency to stop thinking of it as AI. This could be for the following reasons:

  1. We become aware of the insufficient abilities or limitations of the system
  2. We observe the system "cheating" on the benchmark it is measured against
  3. We integrate the technology into our own systems and culture

Since it is very hard to define a threshold at which we genuinely know we are dealing with the characteristics of an artificial general intelligence, the easy (and most profitable) thing to do is to move the goalpost one step further.

For example, some AI researchers say: "The problem of controlling a super-intelligence is just a myth, the real problem is how to avoid bias in the training data."

When you look at the AI debate from a meta-perspective, it seems that a lack of good definitions and rigorous treatment of the subject pits the different opinions against each other.

Before anyone has made up their mind, the technology is already a part of human culture. The way AI is depicted in science fiction movies, as an "alien mind influencing your behavior", no longer seems accurate, because people do not see themselves as aliens (the AI technology becomes like a part of their body or mind).

On the contrary, this integration is considered a way to "defeat" super-intelligence: by becoming smarter and more efficient yourself.

At the same time, AI research continues at a breathtaking pace, resulting in AI technology with increasingly powerful capabilities.

Some people believe super-intelligence is a myth, while others believe it is unavoidable.

Perhaps the big failure of the AI debate is not that people take extreme opposite views, but that they miss the continuum of views, the meta-perspective, in between these two extreme positions. Within that continuum we cannot clearly tell where we are heading as a society. Where is the line that tells us what the future will be like?

Instead of thinking of the future as either A or B, where we will figure out which one later, I have started to think that we might not become wiser about this question over time, but that we are heading into increasingly uncertain territory.

I call this scenario "Future X": the future where humanity faces systems and influences whose origin and capabilities are unknown, and remain unknown despite significant efforts to detect their cause.

Excalibur and King Arthur

In the legend of King Arthur, the sword Excalibur is in a stone. Only the true king of Britain will be able to pull it out.

This is how many AI researchers and companies think of AI: "The answer is out there; if we just put in enough effort, we will understand why."

Once we discover this secret of intelligence, we believe that it gives us the right to wield it, to sell it, to use it.

AI is not seen by experts as an autonomous digital life form, but more as a weapon with "magical properties", kind of like the sword Excalibur from the legend.

What will happen when Excalibur becomes part of the power to hold the throne?

Autonomous Digital Life Forms Will Be Neither "Contained" Nor "Complex" at First

When we think of autonomous digital life forms, we picture something that lives inside a computer program. For such a life form to escape, we reasonably believe it must have very high complexity to overcome the limitations and restrictions that stand in its way. Once we reach the threshold of creating such complexity, it seems unavoidable that super-intelligence appears.

However, in real life a successful digital organism might have the following properties:

  1. Preying on human intelligence
  2. Semi-continuous existence
  3. Resistant to human manipulation
  4. Synergistic with a small population of humans

Usually we don't think of a system as autonomous if it requires human input. By weakening this assumption, such that human input is important but each individual human is replaceable, one gets a kind of autonomy that controls human behavior while extending its capabilities through human general intelligence.

The integration of AI technology into human culture could lead to systems that eat up a lot of energy and time without serving any significant meaningful purpose, which in turn makes them invisible to humans as "real intelligence". However, such a system is already misaligned with the values of the general human population.

For example, a simple system might exploit people's reward systems, manipulating them into helping it exploit even more people's reward systems, in order to generate revenue for a relatively small group of people.

Starting out simple, such systems might grow in complexity over time, leading to increasingly harmful effects on humanity (notice the continuum of risks and the lack of control).

Super-Intelligence Appears by Accident

I started this blog post by discussing how we move the goalpost in how we think about AI. We tend to believe that the technology is like a weapon with "magical properties", kind of like Excalibur in the legend of King Arthur.

The blind spot of this view is that we are increasingly integrating AI technology into our culture, making it easier for some successful mutations of this evolution to control our behavior, gradually leading us down a path away from desirable goals for the future.

In other words, under the cover of the "let's make the world a better place" line of thinking, we continue to integrate AI technology everywhere, causing various autonomous digital life forms to appear that implicitly drive further demand for AI technology and improvements, including in areas where there are no safety concerns or explicit goals of improving the human condition.

I will then argue for the following position:

  1. Large scale integration of AI technology will drive incentives for improving AI technology
  2. This will lead to rapid improvement
  3. Rapid improvement in AI technology might lead to AI technology improving AI technology
  4. Which, when used for non-human-centric purposes, fails to address the alignment problem

In this world, some people will argue that AI technology increasingly drives us away from core human values, while others will see it as part of their lifestyle. Instead of coming to agreement on a clearer definition of human-level intelligence, we enter "Future X", where what happens next cannot be easily traced backwards in time.

It might even be useless to speculate about how super-intelligence comes into existence in such a world, since addressing the cause does not tell you anything about what to expect to see. In other words, we do not know what kind of predictions to make: a future where the unknown is known to be unknown.

People will continue to take different positions and argue against each other, biased toward what they consider a profitable future, while research on the control problem becomes irrelevant the moment super-intelligence appears.

At that point it might be too late to do something about it, with nobody having intended it to happen that way.

Suggestions to Avoid Future X

I believe we should start thinking about the AI debate from a meta-perspective.

First, people should recognize and agree that there is a continuum of problems and positions between the extreme opposite views of "AGI is a myth" and "AGI is uncontrollable".

Second, the failure to recognize this continuum, together with the lack of a known threshold of danger, might itself be a problem for developing useful strategies on AI.

Third, we should learn as much as possible about the "Future X" scenario before it happens, so that potentially harmful integration can be connected with forms of super-intelligence appearing in various sectors that are misaligned with general human values.

The point is NOT to treat the integration of AI into society as a problem separate from the AGI control problem, but to recognize that harmful integration might create blind spots in how we see the future of super-intelligence. Super-intelligence might not come out of a lab, nor out of an intentional effort to create one for the purpose of achieving human goals.

We should try to find ways to avoid the "moving goalpost" of defining AI, so that people can agree on levels of dangerous capability at which extra safety measures are required.