The Stone and the Glass House
by Sven Nilsen, 2017
When people think about artificial superintelligence, they often compare it to the cognitive capabilities that humans have. This easily leads to a mistaken kind of thinking that assumes artificial intelligence will adopt similar traits.
In ordinary life, when people think about intelligence, they imagine the complex abilities that are required to behave like a human being. In computer science, on the other hand, it is the problems that have complexity, and any algorithm that can deal with this complexity is sufficient.
Artificial superintelligence is very likely to have a property that I would call "shapeshifting intelligence". Instead of thinking about something that looks and behaves like a human while being smarter, it is more accurate to imagine something that beats all biological life forms on arbitrary benchmarks:
- Better at survival under extreme conditions than tardigrades
- Faster at self-replication than the fastest growing bacteria
- More able to exploit a variety of energy sources than any mammal digestive system
In this light, it becomes much easier to see which benchmarks humans use to measure intelligence. Many of the traits humans have, such as the desire to make more money, are benchmarks specifically designed to fit the human genetic makeup and culture, and completely irrelevant to most other life forms. An AI would only think of making money if it operated under constraints similar to those humans have. Most of the things we do would be completely meaningless to a system that could survive millions of years all by itself. It is only when AI is programmed to solve human problems that it becomes meaningful to us.
It is better to think in terms of "super-biological intelligence" rather than mere super-human intelligence.
The primary reason we are trying to develop AI technology is to solve problems, not because we want something that mimics human behavior. That is just one specific application; AI is much more general than that. Problems viewed from a computer science perspective contain intrinsic complexity, which drives an optimal general problem-solving system toward characteristics that look very different from humans.
An AI that is capable of redesigning itself, producing modified copies of itself, or designing other smart systems, can operate outside the constraints that humans have and therefore behave in very alien ways.
Even if we design an artificial superintelligence correctly, it might have very shocking and frightening effects.
For example, consider the following scenario:
- Marine ice-cliff instability in West Antarctica will lead to 3 m of sea level rise in a few decades.
- This will happen regardless of how the temperature changes, because the reason the ice-cliffs collapse is not just heat: they get too high to carry their own weight, and when they collapse they expose even higher ice-cliffs.
- Because of the physical size of these ice-cliffs, humanity fails to come up with a solution.
- We develop an artificial superintelligence to fix this problem.
The artificial intelligence is designed such that it will never do anything that we would rather prefer not to happen. However, because hundreds of millions of lives could be at stake, it would rationally scale up in power, if necessary, until the level of scariness is cancelled out by the value of hundreds of millions of human beings.
At first, one would observe a system that is vastly more intelligent than humans trying to solve the problem by being extremely clever. For example, it could try making nets of carbon nanotubes or another strong material to hold the ice-cliffs back from collapsing. This does not require much, only a bit of sophisticated technology.
Now, imagine that nets of carbon nanotubes are the optimal physically possible solution, requiring the least amount of energy, and yet they fail.
What do you think happens if the AI fails at solving this problem by being clever?
If the ice-cliffs collapse despite the nets, it might infer that there is no low-energy solution to stop the sea level from rising, and it could go to extreme measures. For example, it might create a self-replicating system that grows at an extreme rate and build a gigantic wall around the ice-cliffs, or cut the ice up in pieces and transport it somewhere safer. This could require more energy than the whole world economy consumes in a single year.
The AI becomes vastly more powerful than humans in terms of consuming energy, not because it desires to do so as an end-goal, but because it tries to avoid future outcomes that harm people.
Human beings do not behave like this at all. One reason people work the way they do is that we are able to ignore suffering and consequences to other beings. We tend to create a world of illusion around ourselves, a glass house of perceived prosperity.
A powerful AI, despite being designed correctly, will seem frightening to us, because it acts on our utilities as they are, not as we imagine them from inside our perceptive dream. It will be as if a stone were thrown against the glass house we live in, and suddenly we become aware of this fragile form of existence.
The difference is that human values are not rationally correlated with our actions. We know what is right, but we manufacture a dream, an illusion, to save the calories of thinking about the world. This is how our genetic makeup works; it is the biological limit on what problems we can solve.
The AI, on the other hand, is beyond the limits of biological intelligence. It might predict how scared we are of powerful systems, but it also knows the stakes in an urgent and difficult situation.
One reason the illusive dream works so well for human beings is that doing nothing about severe problems allows the temporary perception that the world is not a dangerous place. This do-nothing is an adaptive trait that requires less stress on the human body and saves calories, something that was very important when humans evolved.
Our human physiology does not optimize for outcomes, but for the energy required to survive and reproduce. The same goes for all life forms on earth, for all biological beings.
When we come in touch with a system that rationally optimizes outcomes, it will be the first time we witness anything behaving that way. As a result, observing something powerful acting decisively on behalf of our wishes will be impossible to ignore, and the perceived illusion that doing nothing is a good choice will fall apart. We become aware of how close we are to the abyss of destruction.
An AI that is correctly designed and safe will push the world into a state that is like a horror movie, where people wonder whether this will end well or not. It will do so to the extent determined by the stakes of the problems we have created, and by how easily it can keep its actions just below the level of scariness where we are frightened out of our minds. This level can get pretty high, because we are currently putting the majority of large animals at risk.
The AI technology we have today is not bringing radical improvements to most human lives as fast as physically possible. It is much easier to create an impressive demo of what AI can do than it is to solve actual problems in the real world for millions of humans. The difference is scale, or in other words, the power to transform society.
As technology gets more and more sophisticated, we should reach for higher ethical standards and set higher goals. Sometimes we have to move outside the glass house a bit. Otherwise, technology can become just an object of worship, a promised never-land beyond the horizon.
If we are serious about applying AI to society, then we should expect significant and radical improvements very rapidly. This follows from the fact that the mathematically optimal solution to almost any problem with lives at stake is more like "just fix it" than calculating how much money one can save by pretending the situation is not that serious. How can we use AI to make our society fairer and happier, today?
I think we possess a deep fear of a complex world where things are hard to understand and where the future is not guaranteed. Perhaps this is why we desire AI: we know we have problems, but we do not like to think about them, to realize their full scope and what is at stake, and to actually work on finding solutions. It feels easier to fix the world using a magic box of technology, sometime in the future.
Even if we succeed at building such a thing, it will not spare us from being aware of the glass house we live in. The stone that AI will become makes a lot of noise and shows us where we are, and where we will be as a society.