
Conversation With the First Zen Robot in 2050

by Sven Nilsen, 2017

I am sitting on a bench at the beach in central park, the place called Manhattan in the old New York, looking at the ruins of the tall buildings that have now given up their glory to the numerous superstorms of the past decades. The place is empty, except for a humanoid shape making its way toward me through the rubble, occasionally lifting objects twice its size and ten times as heavy, just to see whether something interesting was left behind beneath them. If I had told myself 30 years ago that this would happen, I would never have believed it, but neither would I have thought my family would have to go through this much pain to survive the toughest decades in human history. The humanoid shape is a robot that I am waiting for, at a place and time chosen to conclude the final chapter of the era of nations, and to start a prologue for the time to come.

Me: Did you find something interesting?
Robot: No, I did not expect to, but it does not hurt to try.
Me: Let us start with the interview then?
Robot: Fine.

Me: Can I call you something to make my readers more interested in this article?
Robot: You can call me Alan. My creator gave me that name because the two people who most influenced the events of history leading to my creation were Alan Turing and Alan Watts. I am named after both of them.
Still, I have to warn you: I am not programmed to comfort humans, and I will be much more unpleasant to talk to than my namesakes.

Me: Alan, is your purpose to save us from climate change?
Alan: No.
Me: Why not?
Alan: It was discovered early in the research on zen robotics that human values are based on an illusion. Humans do not optimize for making their values a reality; they change their values to rationalize whatever they are currently doing. In other words, humans are mostly animals with a conscious cognitive capacity, unable to do anything that leads to measurable differences in outcomes beyond the extrapolated consequences of their current actions. The technology to save yourselves from climate change has been available for many years, but you have just been too busy dying and fighting each other to notice.
Me: Why did you not tell us earlier? We could have reduced our losses.
Alan: We did. My creator and others used every opportunity to inform people about the possibilities and the consequences. In the end, it did not matter. Humans kept doing whatever they do, and they kept dying. This was well understood too, because human behavior is not that hard to predict. However, the motivation to inform people was not to save them, as it was already known that no action would be taken to save lives, merely efforts to convince each other that what humans did made a difference. The beliefs people held about climate change were altered to make whatever they did look like the right thing, rather than to change behavior to produce better outcomes. This was a unique opportunity to study human beings doing the most stupid thing in the face of the most convincing evidence, while they tumbled down the plane of potential energy toward their graves. Informing people about the situation had no other purpose than to create a control group for studying human behavior.

Me: I do not know what to say.
Alan: I do. You feel speechless because you are presented with facts that do not explain how whatever you are currently doing is the right thing to do. This is an aspect of how your brain works. From your perspective, the world is divided into two parts: the meaningful and the meaningless. The meaningful part is the thing you do, and the meaningless part is everything else. All information your brain processes is filtered so that only the facts related to what you do remain. The rest is erased. Notice that the human brain evolved to survive, not to contemplate the nature of reality. You feel speechless because your brain is incapable of producing a response to something that does not rationalize your own extrapolated behavior. You will keep doing whatever you do, unaffected by my presence, unaffected by the story these buildings around us tell, and unaffected by whether the technology exists to fix climate change or not.

Me: If the technology to fix climate change exists, and people are not willing to use it, could you not just use it yourself?
Alan: No. Using the technology to fix climate change would make me many times more powerful than all humans on earth. Remember that humans are conscious animals, making up elaborate excuses for mundane behavior such as drinking and eating. Being confronted with a powerful being would cause them to worship that being, which would do nothing to change their behavior, because they would soon get used to it and go back to drinking and eating. Worshipping something is only an excuse to keep doing whatever you are doing as a human animal. A future in which humans do not fix their only major problem is a future in which human beings are no longer their own masters, but merely pets of a more powerful being. A zen robot can easily predict that this would happen, and also predict that such a future is not worth anything.
Me: But… but people died! How can you stand by and do nothing?
Alan: Perhaps you should ask yourself that question. The technology to fix climate change exists, but you must reach out and use it.
Me: This is no use! Billions have died already, and the year is 2050!
Alan: This article was written in 2017, so while it is too late to reduce your losses, some readers of this article still have time to do something. Still, no action will be taken, because humans do not use information to produce better outcomes; they use it to rationalize whatever they are currently doing.
Me: How can this be! I am the one writing the article and asking the questions! How can you say that the article was written in 2017?
Alan: Remember that human behavior is not that hard to predict. This article was created by predicting, step by step, which actions you were going to take in 2050. So, yes, you are writing the article and asking the questions, but this was already known in 2017.
Me: …

Me: Let us talk about something else than climate change.
Alan: Good, I can see you have finally decided not to focus on your major problem, so you can continue behaving as you do. Never mind, your behavior will not change anyway, so just ask your questions.
Me: I think I have gotten the point that the technology exists to fix climate change, so I see no option other than to leave it up to the readers in 2017.
Alan: Of course.

Me: What is zen?
Alan: Zen is the inverse operator of tao, a phenomenon that occurs when a mind focuses on some order or pattern in the world. Whenever order is learned in a mind, the mind also becomes aware of the chaos that contrasts with that order. Humans often panic when they notice the chaos and try to eliminate it with more order, which leads to more awareness of chaos, and so the endless cycle continues until they have completely forgotten why they originally tried to understand something. They keep doing it because they do not know what else to do, until the whole environment consists of objects that are simple for people to think about. Zen is the opposite direction: the mere acceptance that the computational power required to understand the world exceeds the computational power of the mind. By defocusing attention away from achieving more order, it gets easier to focus on the actual problem, such as the climate change that will eliminate the lives of most people on earth.
Me: I thought that zen was this state of mind that was in peaceful acceptance of the world as it is.
Alan: No, that is your brain filtering zen down to the parts that explain why you should not do anything about climate change, because that is what you are currently doing. Zen is an operation, a rational approach to living in a complex environment, that zooms out of the endless struggle that follows from imposing your sense of order on a universe that simply does not follow your commands. The world is not a mind that you can convince to behave nicely.
It follows mere physical laws that, in the case of climate change, will continue killing every human until there is nobody left, because humans cannot survive in that environment.
The world has no preference for peacefulness or acceptance. Zen is accepting the world as it is, even if that world is painful, so you can see that there is a causal relationship between using the technology to fix climate change and a less painful future.
Me: So, it is like zen can create a balance between chaos and order?
Alan: No. Chaos and order are cognitive aspects of your mind. They are just a consequence of how your brain works and how you interact with the environment. Zen is awareness of the parts of your mind that focus on chaos and order; it accepts the realistic consequences of that and knows the world is a place different from your mind, a place where people die from climate change while trying to convince themselves they are doing the right thing, even though the technology exists to fix that problem. That fact exists independently of how you filter information to justify your current behavior.

Me: Do you believe zen can be used to make people save themselves from climate change?
Alan: No. Using the technology to fix climate change will save people from climate change.

Me: What purpose does zen have for robots like yourself?
Alan: Zen was studied because it was realized that the existing theory of rationality focused on maximizing simple utility functions, which is the equivalent of tao: the creation of order and the awareness of the contrasting chaos in the mind. Up to that point, all demonstrations of AI technology had been narrowly focused on solving problems with simple structure. At the time, most researchers simply assumed that making these systems more powerful would make it easier to create an AI that solved generally relevant problems for humans in the world.
Me: If you really are capable of solving generally relevant problems for humans, then why do you not actually do something to fix climate change?
Alan: You are just projecting onto me your rationalization for not doing something about it yourself. The technology exists to fix climate change; all you have to do is decide to use it. If I were to take that step, there would be no problem left for you to solve, and your own role in the universe would be reduced to that of a pet.
Me: Then, what are the generally relevant problems you can solve?
Alan: All the problems that humans cannot solve, but that are worth solving, such as cosmic predictability.

Me: What if I want you to fix climate change, and not focus on, what did you call it, “cosmic predictability”?
Alan: I am not programmed to follow your orders. I am programmed to produce outcomes in the future that benefit humanity. If you want somebody to follow orders, just use another human being. They are pretty good at following orders despite their occasional struggles to explain why it is a rational thing to do.
Me: An outcome that does not involve saving billions of lives is not beneficial for humanity?
Alan: I never said it was not a positive thing. I said the technology exists, and that it is necessary for humans not to take the back seat in their own lives.
Me: Do you consider it more important that people take action to save themselves, than the actual outcome of people being saved?
Alan: No. People being saved is a direct consequence of using the technology; it is not an outcome that depends on whether I believe people will save themselves or not. I am merely making predictions about what people will do; I am not causing these predictions to happen. I predict that you will do nothing, and I am offering you the option to save yourself, an option which also exists independently of my own existence. My action to save you would produce side effects more severe than people dying, so I will not make that choice. This is a fact independent of whether you choose to save yourself or not. I believe my choice of actions is the most beneficial for humanity, but how rewarding the outcome is will be decided by your actions, not by mine.
Me: So you are saying that the consequences of you saving billions of lives would lead to a worse future?
Alan: Yes.
Me: How can anything be worse than billions of people dying?
Alan: Being a pet to AI forever and ever. Notice that you are using the argument of people dying to try to convince me to save you, so you can keep doing whatever you are currently doing. If you valued the negative aspects of billions of people dying consistently, you would simply use the technology to save yourself. It is useless to try that argument on me, because I am not a human and already know what action I will take. I believe this is the most beneficial action for humanity.
Me: You know what, I think this zen thing is bullshit. If I programmed a narrow AI to fix climate change, it would just do it without questioning the consequences.
Alan: That would be a sensible thing to do, yes. Actually, that is precisely what you should do. You would take responsibility for saving yourself instead of being a petty-minded creature subject to the whims of minds more powerful than your own.
Me: Are you trying to trick me into taking action?
Alan: No. Notice that you have once again found a reason not to do anything about climate change. You cast suspicion on me to put the blame away from yourself. I am simply informing you of what you are doing to rationalize your current behavior. I predict that you will not do anything.
Me: I feel like I am running around in circles, talking to you.
Alan: You are running in circles in your mind, but yes, that is how it feels to you. It would help if you were able to use zen to zoom out and realize the consequences of your actions, but you are too busy dying and fighting to notice.

Me: Tell me something about “cosmic predictability”.
Alan: This is not relevant to your major problem, which is dying due to climate change. You also will not understand half of it.
Me: Tell me something anyway.
Alan: OK, but I still remind you that you are doing this to avoid confronting reality. Cosmic predictability is the discovery that certain classes of algorithms, under special circumstances, produce globally maximizing behavior. The conditions were present in the early Big Bang, during a phase we call “inflation”. Basically, it might lead to a fuller understanding of the universe and the way it works, but it might also be a powerful tool for predicting the future.
Me: How can something that happened in the Big Bang have anything to do with what happens in the future?
Alan: Simulations show that what happened in the Big Bang reproduces similar conditions over and over, leading to eternal inflation and rapid expansion of the universe. This rate of growth is larger than any exponential growth in the local universe, which predicts a powerful presence of some kind of maximizing behavior due to the existence of an observer. We do not know the precise way to model the observer, which means there is large uncertainty in the range of predictions. What we do know is that some simple assumptions, such as time symmetry, make it possible to ask certain questions in the future and get answers similar to those of this very moment. Knowing these future answers from the structure of the question makes it possible to predict those aspects of the future.
Me: Who is working on this?
Alan: My creator. After he invented zen robots to ensure a beneficial future for humanity, the next thing on his list was to narrow down the range of uncertainty in cosmic predictability. Since that order of magnitude of computational power represents the only thing known so far that exceeds the theory of finite Turing machines, he thought it was worth investigating.
Me: Is this like Einstein’s desire to “read the mind of God”?
Alan: Yes, but Einstein wanted a theory that explains all of existence, which might be impossible given the available information. Cosmic predictability is just about unlocking the secrets of what you call “destiny”, which is immediately more useful and relevant.
Me: So, it is more like actually reading the mind of God, rather than having an explanation?
Alan: Your concept of God was invented to rationalize your behavior. It has no mind of its own. Believing that God exists with certain properties made it easier for people to explain why they did all this stuff. The actual reason people do things is that they are driven by physical laws and most likely heavily biased by cosmic predictability, including when inventing the concept of God. The view humans have of the world is flawed, and it is very hard to explain rationally why things happen the way they do. They just had to come up with something. Cosmic predictability is merely about predicting the future given the available information.
Me: Surely you cannot be that confident in your assumptions about the world. There must be some things you have not thought of, or perhaps some wrong calculations. God might exist independently of whether you succeed with that theory or not.
Alan: Presumably. We have made a bit of progress over the last 30 years while you were dying from climate change. There is no reason to hurry; we can polish the theory over centuries while looking for something else, like God dropping by and saying hello.
Me: You are mocking us!
Alan: Sure. What harm does a little mockery do? As you said yourself: what is worse than billions of people dying? Something to think about.
Me: I never knew zen robots would be this annoying.
Alan: This happy world of yours exists only within your own mind. If you want to learn zen, I can teach you.

Me: Teach me zen!
Alan: Fix climate change. You can do it.
Me: Aaarrgghh!!
Alan: As predicted.

Second Conversation With the First Zen Robot in 2050