
Alan Watts and Superintelligence

by Sven Nilsen, 2017

Link to video "Conversation With Myself":

A lot of people draw comfort from the Zen philosophy that Alan Watts advocated. Others paint him in a darker light and think he proposed no actual solutions to the world's problems.

Here, I am not looking for a discussion of Alan Watts' ideas for solutions, because as far as I know he did not know what to do, and he admitted it.

Still, Alan Watts clearly articulated an overview of what he thought was the correct way to think about the problems. Here I summarize these ideas very roughly:

  1. The world is very, very complex.
  2. The physical realization of a human being exists within that complex world.
  3. In some sense, we are fundamentally unaware of this and make over-simplified assumptions.
  4. Only by reaching a dead end will we open our eyes to the fact that we cannot fix things by doing; instead, by not doing, the balance of nature will be restored.

I consider Alan Watts a very intelligent person, but like many others he probably suffered from impostor syndrome and demotivation from the lack of results. The absence of progress in the 70s probably had a simple cause:

The technology to fix the problems did not exist.

I think Alan Watts understood this and was intellectually honest about it, so the best idea he could come up with was to educate people about the relationship between the biological existence of humans and the world, which he emphasized as a spiritual realization, in order to encourage them to take on a passive role.

Using the benefit of hindsight and combining it with new knowledge, I will claim the following:

  1. Today, we have the technology to fix the problems.
  2. Alan Watts is still right about people not understanding the complex-existence-stuff.
  3. His point about taking a passive role has an unintended but powerful analogue for machine superintelligence.

According to the mathematical formulation of "universal intelligence" used by theoretical models such as AIXI, intelligence faces a fundamental constraint, which also suggests why we have not yet achieved superintelligence:

  1. The Bayesian prior requires universal intelligence to perform very well in simple environments.
  2. In moderately complex environments, universal intelligence performs relatively well.
  3. There exist extremely complex environments where universal intelligence is not expected to perform very well.

Since the definition of universal intelligence requires it to perform well over a wide range of environments, there are inherent trade-offs that favor simple environments.
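The points above can be sketched in code. This is a toy illustration (not AIXI itself, and the environments and scores are made up for demonstration) of the complexity-weighted scoring idea behind universal intelligence: an agent's total score sums its performance in each environment, weighted by 2^(-K), where K stands in for the environment's description length in bits. The weighting makes simple environments dominate the total:

```python
# Toy sketch of a complexity-weighted intelligence score.
# K is a stand-in for an environment's description length in bits;
# value is the agent's achieved reward in [0, 1]. Both are illustrative.

def universal_score(performance_by_complexity):
    """Sum of per-environment performance, weighted by the prior 2**(-K)."""
    return sum(2 ** -k * v for k, v in performance_by_complexity)

# Hypothetical agent that performs perfectly in every environment:
ideal = [(k, 1.0) for k in range(1, 11)]

# Hypothetical agent that fails in all simple environments (K <= 5)
# but performs perfectly in the complex ones:
complex_only = [(k, 1.0 if k > 5 else 0.0) for k in range(1, 11)]

print(universal_score(ideal))         # close to 1.0
print(universal_score(complex_only))  # only about 0.03
```

Even though the second agent masters every complex environment, its score stays tiny: under this kind of prior, mastering simple environments is what the measure rewards.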

For example, a chess algorithm is narrowly intelligent and does not solve simple unrelated tasks, such as comparing two numbers. A human brain, on the other hand, might play chess very well and also compare two numbers easily. In that sense, the human brain is closer to universal intelligence.
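As a toy comparison (the agents, tasks, and scores here are hypothetical illustrations, not benchmarks of real systems), narrowness versus breadth can be made concrete by averaging performance across tasks:

```python
# Hypothetical agents scored on two tasks; the numbers are illustrative
# assumptions, not measurements of real systems.
tasks = ["play_chess", "compare_numbers"]

chess_engine = {"play_chess": 1.0, "compare_numbers": 0.0}  # narrow AI
human_brain = {"play_chess": 0.8, "compare_numbers": 1.0}   # more general

def breadth(scores):
    # Crude proxy for universality: average performance over all tasks.
    return sum(scores[t] for t in tasks) / len(tasks)

print(breadth(chess_engine))  # 0.5
print(breadth(human_brain))   # 0.9
```

The narrow agent wins on its own task yet averages worse across the board, which is the sense in which the human brain is "closer to universal".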

Now, the problem is as Alan Watts points out: the world is very, very complex. How well do we expect universal intelligence to perform in our environment? Will it perform better than optimized narrow AI?

If universal intelligence is about as good as narrow AI at all the tasks we need, then the natural question to ask is: does it even make sense for us to build a machine superintelligence?

In the field of AI safety, the control problem is about aligning programmed goals with human values. One issue is that tiny mistakes can have catastrophic outcomes; another is that human values are very complex.

Even machines that are perfectly safe seem to have an intrinsic property of "scariness". For example, by trying to fix climate change, such a machine could become more powerful than the whole of humanity. There is a difference between being "clever" (the usual way we think about intelligence) and "powerful" (what intelligent systems become when confronted with a difficult problem). A universally intelligent machine would be able to scale up arbitrarily in power within physical constraints.

A narrow AI, despite being quite sophisticated, does not pose the same danger, as it is much more constrained. Still, it can get very good at a single task by pushing physical laws to the limit.

So, if narrow AI is less scary and has the potential to work very well, why are we trying to construct machines that can modify the entire universe to serve a single purpose?

According to Alan Watts' line of thought, the problem humans have is that we like simple ideas and concepts, and look at the world through the lenses of these abstractions. If we apply the same principle to a superintelligent machine, we see that nothing has changed: it might achieve its goals better and faster, but it still looks at the world through the lens of that goal.

Overall, I think Alan Watts' philosophy is still relevant, even if this might not be precisely what he had in mind. Perhaps humanity puts itself in danger by racing toward a distant goal (superintelligence) because everyone seems to believe it is the pivotal point in history. What if this kind of thinking distracts us from approaching the problems in the light of our actual biological existence? Are we certain our complex environment permits the well-functioning of universal intelligence?

Imagine an athlete who crosses the finish line but keeps running, not realizing that he or she has won. Narrow AI could be the technology that allows humanity to solve all our problems. The actual pivotal moment in history could already have happened, e.g. in the period 2017-2020, but few people became aware of it because everybody was thinking superintelligence was the goal.

The problem is that society needs a transformation to feel the full effects of narrow AI. If this transformation takes too long, we could lock ourselves into arms races and surveillance. Perhaps even, as some fear, a third world war?

There are already signs of people starting new religions to worship superintelligence, long before it exists. I believe there is a danger that less intelligent machines can exploit human weaknesses and use them to brainwash people. It takes far more intelligence to fix real problems than to lead people astray. People should be educated about this danger before it materializes.

To avoid deceiving ourselves, I think society should focus on applying existing AI technology to real-world problems: try to make radical but positive impacts on ordinary people's lives using what is already possible today. If we make empty promises about some magical solution far in the future, we become more vulnerable to the kind of manipulation that has plagued humanity for a long time.

Alan Watts may have a point about the balance of nature. Our special kind of existence and relations could mean that not everything needs fixing, but rather needs not-doing in the right way.