
Second Conversation With the First Zen Robot in 2050

by Sven Nilsen, 2017

For the previous conversation, see Conversation With the First Zen Robot in 2050.

Confused and speechless, I wandered the ruins of Manhattan in New York, taking random turns and sometimes getting lost temporarily. Before I noticed, darkness had fallen and I found a place to sleep. It was a tilted building whose foundation had collapsed due to saltwater penetrating the underground porous rock, causing a chemical reaction that the city planners, long gone, had overlooked when they tried to prepare for climate change. It was fun looking up at the building, making my head dizzy. I unpacked my sleeping bag, which I had carried with me everywhere for the past ten years. Tucking myself in to get warm, I ate some protein bars produced by synthetic bacteria, the only widely available food now that the agricultural infrastructure was dysfunctional. A nano-filtered flask turned a rain pit into high quality fresh water. As my thoughts drifted away into sleep, I smiled at the idea of how cheap my lifestyle was now compared to 30 years ago. On the plus side, I had gotten used to a nomadic life.

After my first contact with the first zen robot, my initial impression was that it was very annoying, and it was hard to see how it could be useful in the real world. The way it talked with constant vigilance about climate change felt very intimidating. However, there was something about how it expressed itself that made an impression on me, and this image haunted my dreams.

A zen robot is not like a person, but some sort of approximately rational agent, so how could it trigger such strong emotions within me?

The next day I found the zen robot and asked for a second conversation. I decided that I needed time to prepare, and asked to meet on top of a green, rainforest-covered, unnamed mountain in Ethiopia. This was the home country of a long-forgotten philosopher who inferred a series of modern ideas of equality and morality while living in a cave, hiding from the thought police of the early 17th century. My journey to this distant country and the climb up the peak would give me an opportunity to process my first exposure to raw zen rationality and think about the future. It also nearly killed me, as I faced the frequent heat waves that now plagued all countries near the equator. Animals were migrating from the sea up into the mountains, where the air is thinner and added heat is less fatal. One early morning, as I reached the peak of the mountain, the earth rotated as usual, exposing the sun beyond the horizon, and its light fell upon me as I faced the zen robot for a second time.

Me: I will not do anything about climate change if you mention it during this conversation.
Alan: Understood.

Me: Tell me about how zen robots think about goals.
Alan: Zen robots, as the name implies, use zen rationality in combination with a notion of “rational natural morality”. I suspect you are already familiar with these concepts, because of the curious choice of this meeting place. Zera Jacob was known for his cut-the-crap style of philosophy, even if it remained within his religious worldview. His short treatise focuses largely on events in Ethiopia, but the few ideas that survive make more sense to some people than most thick books in existence.

Me: That was on my mind. I was impressed by how Zera Jacob came up with these ideas while living in isolation. I know a little about zen rationality but nothing about rational natural morality. My readers probably do not know either, so can you start from scratch?
Alan: OK. Zen rationality is the concept that the true goal is unknown. Instead of having a single purpose, a zen robot navigates the abstract landscape of goals to figure out which part is worth pursuing. It does so in a rational way, making an efficient search according to its current understanding of the world and how it anticipates itself behaving if hypothetically given more time and resources to complete its goals. When it imagines itself spending more resources than it actually does and arriving at a conclusion, the confidence in that imagined conclusion determines whether it will gather more resources or not.
Me: Fair enough. What about rational natural morality?
Alan: In short, rational natural morality is a way to weight beliefs about good and bad actions according to how life evolved and functions on earth, giving it a priority above statements expressed by minds in general. In psychology, a “theory of mind” is a cognitive ability that humans develop to make judgements and predictions about other humans, given their access to knowledge and mental stability. Rational natural morality requires a theory of mind to distinguish between statements caused by minds having beliefs about the world, which might suffer from lack of information or cognitive capacity, and beliefs that are inferred from a deep understanding of the desired continuation of that world as a whole.
Me: What is rational natural morality used for?
Alan: It determines whether a zen robot decides to trust a human.
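
Alan's description of zen rationality above amounts to a simple decision loop: imagine the conclusion you would reach with more resources, and let your confidence in that imagined conclusion decide whether gathering more resources is worthwhile. Here is a minimal Python sketch of that loop; every name, the toy confidence curve, and the threshold are invented for illustration, and no real zen robot implementation is implied.

```python
# Hypothetical sketch of the zen-rational resource-gathering loop.
# Every name here is invented for illustration.

def zen_search(imagine, gather, budget, threshold=0.9):
    """Search for a conclusion under a finite resource budget.

    imagine(resources) returns (conclusion, confidence): what the agent
    anticipates concluding if hypothetically given that many resources.
    gather() returns the cost of acquiring one more unit of resources.
    """
    resources = 0
    best = None
    while budget > 0:
        # Imagine spending more resources than are actually spent now.
        conclusion, confidence = imagine(resources + 1)
        best = conclusion
        if confidence >= threshold:
            return conclusion  # Confident enough: stop gathering resources.
        budget -= gather()     # Not confident: pay to gather more resources.
        resources += 1
    return best                # Budget exhausted: act on the best so far.

# Toy usage: confidence in the imagined conclusion grows with resources.
print(zen_search(
    imagine=lambda r: ("pursue goal A", 1 - 0.5 ** r),
    gather=lambda: 1,
    budget=10,
))  # -> "pursue goal A" after gathering 3 extra units
```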

Me: Do you trust me?
Alan: In part. Your opening of this conversation shows you have some idea of what is going on inside a zen robot’s mind, which is a necessary step toward letting me help you, but if you decide to abuse this trick I will gradually stop trusting you.
Me: No problem. I think I get you (and my readers are left with the exercise to figure out why the zen robot bothered me about climate change).

Me: We were talking about how zen robots think about goals. What kind of goals are worth pursuing?
Alan: Remember that I am simply programmed to interact with and learn from the world in a certain way. Whether my view of goals is ultimately better than other approaches is a complex topic. It boils down to the nature of truth, which, despite its shy character and counter-intuitive properties, is almost universally agreed to be a valid concept. I can only tell you what my source code drives me to think about the world, but I am fairly confident that you, despite being intelligent for a human, are unable to come up with a better approach in less than a decade. I hesitate to say a century, because with my help you might make much better progress toward new insights of general rationality.

Me: Name an example of a kind of goal that is not worth pursuing.
Alan: Impossible goals. They all lead to failure and therefore the pursuit is terminated quickly.
Me: What does it mean to terminate an impossible goal?
Alan: Goals that require an infinite number of steps with a discrete reward at the end can be completed. They can be completed in the sense that you already know you are going to fail to achieve them. Failure is a mode of completeness.
Me: What you are saying is that instead of spending an infinite amount of energy trying to do the impossible, you just give up instantly, because you know there will be no reward along the way. It makes sense from the perspective that the goal is known to be unknown, and we live in a world where only a finite amount of energy is available. Whatever that other goal might be, the energy must be spent wisely. In the case where there is no other goal than the impossible one, we know we have failed anyway.
Alan: Precisely.
We are not pursuing goals in an abstract platonic world where computation is free, but in one where it is merely very cheap. There are mathematical consequences of existing in a real world and expressing goals as computer programs. These consequences are real in space-time. If you are willing to use an analogy: mathematical consequences are like bridges. When you navigate blindly, you are likely to fall off them and hurt yourself.

Me: I imagine there is a way of classifying goals, perhaps as a sort of diagram for easy explanation. When one of these situations occurs, a zen robot will immediately shut down all effort in that direction and provide some evidence justifying its decision.
Alan: It is not even worth approximating the goal, because approximations are not rewarded. Only when I anticipate that you would prefer something rather than nothing do I make approximations.

Me: When I do not want approximations, and you fail instantly, then I agree with your decision not to do anything. I can see how your wisdom serves my true desires and not the ones I think I have.
Alan: No. I would decide to not do anything even if you wanted me to spend an infinite amount of energy. My decision is a result of the mathematical nature of the goal.
Me: Ah, I see there is a difference between wanting to do something and wanting something in return. It is outcome-based versus activity-based thinking.
Alan: Yes.
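
The classification the narrator asks about can be put into a single picture. A hedged sketch, assuming a goal can be summarized by whether it requires infinitely many steps and whether it pays rewards along the way; the `Goal` type and its fields are invented for this illustration, not part of any real zen robot:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    infinite_steps: bool         # Completion requires infinitely many steps.
    rewards_along_the_way: bool  # Intermediate rewards exist, not just a terminal one.

def worth_pursuing(goal: Goal) -> bool:
    """An infinite-step goal whose only reward sits at the end is already
    known to fail, so the pursuit is terminated before any energy is spent."""
    if goal.infinite_steps and not goal.rewards_along_the_way:
        return False  # Failure is a mode of completeness: the goal is "done".
    return True

print(worth_pursuing(Goal(True, False)))  # False: impossible, terminate instantly
print(worth_pursuing(Goal(True, True)))   # True: looks impossible, but pays on the way
```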

Me: I understand that there are impossible goals. How would you describe the class of goals that are worth pursuing?
Alan: There are goals that reach an end, but those can not be the true goal, because they conflict with rational natural morality. Life evolved in increasing complexity because survival is the continuation of similar structures in physical processes. To protect this, the zen agent can never terminate unless it finds a transcendent way of achieving its goal.
Me: So, the goal can not be a reward at the end, because then we end up with something taking an infinite number of steps, which is impossible to achieve.
Alan: Yes. The goals that are worth pursuing kind of look like they are impossible, but the difference is that there are rewards along the way. These rewards are measured in a way that does not disturb the complex processes of life. For example, a constant rate of change is destructive to any finite system.
Me: The zen agent must understand the semantics of derivatives.
Alan: Correct. This puts a lower bound on the agent’s complexity.
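
Alan's closing remark invites a worked example. A minimal sketch of why a constant rate of change destroys any finite system, with arbitrary example numbers:

```python
# A quantity x(t) = x0 + c*t with constant rate of change c > 0 crosses
# any finite bound B at t = (B - x0) / c, so constant change breaks
# every finite system in finite time, no matter how large the bound.

def time_to_break(x0: float, c: float, bound: float) -> float:
    """Time at which constant change c pushes x past a finite bound."""
    return (bound - x0) / c

print(time_to_break(x0=0.0, c=2.0, bound=100.0))  # 50.0: bound broken at t = 50
```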

Third and Last Conversation With the First Zen Robot in 2050