
Nothing about Robotics and Self Driving Car? #14

Open · Shamar opened this issue Nov 8, 2018 · 12 comments · 3 participants
@Shamar commented Nov 8, 2018

I think you should add a section about dangerous robotics, such as self-driving vehicles.

The death of Elaine Herzberg, caused by an Uber car that was configured not to brake in such situations to spare Uber's customers car-sickness, comes to mind.

"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior"

Also the crash of Air France Flight 447, where the pilots faced the paradox of automation, killing themselves and everyone else on board (228 deaths in total), should be mentioned, to clarify the UI/UX problems that need to be solved before any widespread deployment of AI systems.

@daviddao (Owner) commented Nov 8, 2018

These are two amazing pointers! I really enjoyed reading through them, especially because I myself worked in autonomous driving for a while and was faced with the moral implications of the trolley problem.

However, I'm trying to grasp how we could (or even should) include them in the list. Automation and robotics are not awful in themselves (in aviation, at least, they generally enhance safety greatly).

  • I would argue that automation failure and thus the paradox of automation is a problem on a human psychological level.
  • The Elaine Herzberg incident was the result of a ruthless decision made by Uber's engineering and management team.

In both cases the AI system was not designed to harm or exploit us in that sense. I'm interested in what you think @chobeat @thomaid @Shashi456 and others?

@Shamar (Author) commented Nov 9, 2018

This is going to be an interesting discussion. :-)

This year I gave two talks about AI. In the latter I tried to explain how one of the possible interpretations of Herzberg's murder is that we have finally reached the Singularity.

The problem is deep.
No application you list here "has been designed to harm or exploit us": some would argue they are designed to help, to protect our security, to reduce soldiers' deaths, to augment our intelligence.
While I agree that this is marketing / propaganda, we have to consider that some people really believe, almost as an act of faith, that AI will all be used for good.

The road to hell is paved with good intentions.

So what a system is designed to do is irrelevant.

What we have to consider here is

  • what these systems actually do
  • how they were designed: the guarantees that the design and development process provide about their safety.

Because we should never forget: we build machines to serve us.

Several self-driving cars have caused human deaths.
So they are awful both in what they do and in how they were designed.

Now, I agree that robotics in itself is not awful at all! Aviation is actually a good example.
How much does a line of code in an airplane's autonomous flight system cost, on average?
Fine: multiply that budget by a couple of orders of magnitude and you have a forecast of what you need to build a self-driving car that has to travel on roads open to human traffic.

Indeed, the complexity that an airplane has to face is a tiny fraction of the complexity that a car has to face on a common road. And the airplane is a level-2 vehicle, with pilots on board and control towers that continuously monitor its movements globally.

Thus, to develop an autopilot able to drive a car at a cost similar to that of the one that flies an airplane, you need to reduce the complexity similarly: you need totally isolated roads and control towers. And even then, at such costs, you cannot move beyond level-2 vehicles.

This in turn makes the paradox of automation both a user-interface issue and a security one: the user needs to maintain constant attention despite not being engaged in the process. This can then become a legal problem: who is responsible for a death caused by a vehicle driven by a human fooled by a broken user interface?

So, to recap, with autonomous robotics, we have:

  • a complexity problem
  • a user interface problem
  • a security problem
  • a legal problem

All of these problems are simply being ignored by the AI bandwagon.

But because these are pretty obvious issues, we cannot dismiss them as "ruthless decisions made by the engineering and management team". They are inherent to the matter, and they make autonomous vehicles inherently dangerous until they are ALL tackled and solved.

the moral implications of the trolley problem.

MIT's Moral Machine should be in the list of Awful AI, in a section called "Propaganda & Brain Washing".
It leverages human fallacies such as herd behaviour as a form of global crowd manipulation.

The whole "experiment" assumes that you can assign a relative scalar value to each of the proposed lives, so that you can choose who to kill. This assumption is forced on the participants, who have to accept it in order to answer the test at all.

This fits perfectly with the Ethics of Capitalism, where everything can be reduced to money, including human lives.

Fortunately, more advanced ethical systems are multidimensional and do not map at all to scalars.
If, for example, the value of each life is a vector orthogonal to all the others, there's no way you can decide who to kill. As Maria Chiara Pievatolo put it, "in a moral system that is not utilitarian, but deontological and based on the equal dignity of people, an autonomous system can be acceptable only if designed to avoid risks to people, that is, to prevent the trolley problem from occurring at all".
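The "orthogonal values" point can be made concrete with a toy sketch (the vector encoding here is purely illustrative, my own construction, and not anything from the Moral Machine itself):

```python
import math

# Toy model: each life's "value" is a distinct standard basis vector
# in R^n, so all values are mutually orthogonal.
def life_value(i, n):
    v = [0.0] * n
    v[i] = 1.0
    return v

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = life_value(0, 3), life_value(1, 3)
print(norm(a), norm(b))  # 1.0 1.0 -- equal magnitude: no scalar ranking
print(dot(a, b))         # 0.0 -- orthogonal: neither projects onto the other
```

Every such value has the same norm and zero projection onto any other, so no scalar comparison can prefer one life over another.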

Now, we don't have to agree on a single Ethics to live together.
But we have to recognize the right of everybody to follow a different Ethics!

Thus we cannot use robots or artificial intelligences to impose a specific Ethics on everyone.
Not even an "average" Ethics.

The whole idea of MIT's Moral Machine, and of AI Ethics in general, is inherently flawed and awful.

Teaching Ethics to Machines is really like teaching sex to condoms.
It's not just weird, it's dangerous.

@daviddao (Owner) commented Nov 14, 2018

@Shamar Fascinating, I learned a lot from your post! Maybe we could add a category called "Robotic Systems & Weapons" for autonomous robotics, especially use cases that harm humans? However, I would like to somehow make clear that we are not condemning the whole field of autonomous robotics.

@KOLANICH commented Nov 14, 2018

there's no way you can decide who to kill.

Of course there is. A self-driving car must put the highest priority on the survival of anything related to its owner. The owner has to explicitly program that, and the car must follow the rules programmed by the owner, with no backdoor rules like "survival of the persons on a whitelist programmed by the manufacturer or government takes top priority". IMHO the owner should program the car the following way:

  • owner life > random passenger in owner's car life
  • owner friends life > random passenger in owner's car life
  • owner's children lives > owner life
  • owner's pets > lives of [0,∞) random people
  • random passenger in owner's car life > lives of [0,∞) random people
  • lives of [0,∞) random people going to green light > lives of [0,∞) random people going to red light
  • otherwise don't interfere and let things be as they are going to be. This is because otherwise the owner would willingly commit a murder without need or excuse. In the previous cases the excuse is extreme need and the owner's self-interest.

In other words, the car should mimic the behaviour of its owner, as if he were driving it himself.
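The ordering above could be sketched as an explicit, owner-programmed rule table. This is purely illustrative: the names, the ranking, and the `choose_to_save` helper are the comment's proposal rendered as code, not any real system's API:

```python
# Hypothetical owner-programmed priority table: lower rank = saved first.
# The ordering mirrors the bullet list above and is entirely the owner's choice.
OWNER_PRIORITIES = {
    "owner_children": 0,
    "owner": 1,
    "owner_friend": 2,
    "owner_pets": 3,
    "car_passenger": 4,
    "pedestrian_on_green": 5,
    "pedestrian_on_red": 6,
}

def choose_to_save(parties):
    """Return the parties with the highest programmed priority,
    or None when no rule applies (the 'don't interfere' default)."""
    ranked = [p for p in parties if p in OWNER_PRIORITIES]
    if not ranked:
        return None  # no rule matches: don't interfere
    best = min(OWNER_PRIORITIES[p] for p in ranked)
    return [p for p in ranked if OWNER_PRIORITIES[p] == best]
```

For instance, `choose_to_save(["owner", "pedestrian_on_red"])` would pick the owner, while a situation involving only unlisted parties would fall through to the "don't interfere" default.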

The biggest danger here lies in laws depriving the car owner of control and prescribing that persons from a government registry be saved. Or, in the case of China (and the rest of the world in the near future; China is just a pioneer of the trend), the law may prescribe saving the maximum amount of so-called "social credit score" (I really don't understand why people call it this way, since the people who have it have not borrowed anything from the government). And this stuff is really scary, fascist and inhuman.

And another threat is backdoors not prescribed legally, but installed by the car manufacturer or component manufacturers.

And one more threat is hacking, using the same means used in electronic warfare, in order to trick the car into killing its owner or bystanders. You know, AI is very dumb now and cannot detect tampering with its sensors.

So maybe it is too early for self-driving cars for now.

@Shamar (Author) commented Nov 14, 2018

However I would like to somehow make clear that we are not condemning the whole field of autonomous robotics

Surely we are not, @daviddao.

We are just moving a bit away from the pensée unique that characterizes current AI.

Robotics IS cool! Just like the wide set of software simulators we put under the AI umbrella.
They can be widely useful. But, just like many other technologies, they can be dangerous too.

The point with robotics is that we cannot look at the robot in isolation.
You cannot put robots in a factory and leave the rest of the factory unchanged: it doesn't work!
You cannot design the factory around the robots either, as it would then be impossible for humans to work there (unless the robots are totally autonomous, obviously, including for maintenance).

So you have to design and develop the robotic system as a whole.

Now, each possible logical branch doubles the complexity of the system.
This means that, above a certain level, the robotic system is doomed to fail. Like an AI. Or any other software.
So we simply have to reduce the complexity of the system so that it is both useful and keeps serving all the humans interacting with it, without being awful or dangerous.
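The branch-doubling claim is easy to illustrate with a tiny count (a toy model of state explosion, not any real planner):

```python
# Toy model of state explosion: each independent binary decision point
# doubles the number of distinct execution paths that must be considered.
def paths_to_test(branch_points):
    return 2 ** branch_points

for b in (10, 20, 30):
    print(b, paths_to_test(b))
# 30 independent branch points already mean over a billion paths.
```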

In the case of self-driving cars, the robotic system includes the roads they travel.
So, to keep the complexity tractable, we need to remove from such roads anything whose behaviour cannot be easily predicted because it has too many degrees of freedom.

This is what I call "The Curse of Frankenstein": this graph (courtesy of Wolfram Alpha) shows the probability of working correctly for a dynamic system composed of n independent components, each guaranteed to work correctly 99% of the time.

[Graph: probability of whole-system correctness]

Such probability is roughly p(n) = (0.99)ⁿ and it goes down very quickly.
Thus, to build useful and non-awful self-driving cars, we need to build proper infrastructure first: dedicated roads completely isolated from humans, human-driven vehicles and animals.
And the goal of such infrastructure will be to prevent the trolley problem from occurring at all.

Basically, what happened a century ago with the widespread adoption of cars.
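The p(n) = (0.99)ⁿ claim above is easy to check numerically; a minimal sketch, where the 99% per-component reliability is the comment's own assumption:

```python
# Probability that a system of n independent components all work,
# when each works correctly with probability p (here, the assumed 99%).
def system_reliability(n, p=0.99):
    return p ** n

# Reliability collapses quickly as components multiply.
for n in (1, 10, 100, 500):
    print(n, round(system_reliability(n), 4))
# 0.99^100 is already below 0.37; 0.99^500 is below 0.01.
```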

@Shamar (Author) commented Nov 14, 2018

there's no way you can decide who to kill.

Of course there is. [...]

In other words, the car should mimic the behaviour of its owner, as if he were driving it himself.

Except that what you describe is (what you think would be) @KOLANICH's behaviour.
Assuming that everybody, all over the world, would agree with these priorities is naive.

Just like I cannot impose my moral values on you and you cannot impose your moral values on me, no robot can impose any set of moral values on all of us, not even the average of the moral values collected worldwide.

So maybe it is too early for self-driving cars for now.

Probably.
"Move fast and break things" kills. It can't really maximize anything except money through time to market.

We must require for level 2 self driving cars the same guarantees that we have for airplanes.
And there must always be at least one human responsible and accountable for their errors, defaulting to the manufacturer's CEO and board of directors if everybody else is faultless.
And we need to build proper infrastructures before.

@KOLANICH commented Nov 14, 2018

And we need to build proper infrastructures before.

This means that it wouldn't be what is currently meant by self-driving cars: cars capable of driving themselves in an environment that, today, only a human being can drive in relatively reliably. It would be like an automated subway train, but a personal one and without AI for driving.

This would facilitate surveillance (the system would have to know every car's position at every moment in time), deprive people of freedom of movement (since one could not drive a car where he wants without the system's permission for exactly his car), and allow abuse (the system could be programmed to crash cars into each other). Relying on signals transmitted by other cars is also a cybersecurity nightmare.

So I'm strictly against building this road infrastructure and mandating everybody to rely on it.

What should be done is to add some safeguards against pedestrians appearing on the road, and to eliminate same-level crossings.

@Shamar (Author) commented Nov 14, 2018

So I'm strictly against building this road infrastructure and mandating everybody to rely on it.

You raise valid and interesting concerns. They remind me of Cory Doctorow's Car Wars.

Yet, there's nothing preventing AI-driven cars from raising the same concerns (as Doctorow masterfully depicts) on mixed-traffic roads. It would just be more dangerous for everybody. And with an additional issue: some AI systems are, at the time of writing, impossible to debug / explain / inspect precisely.
Thus the freedom problems you describe could be impossible to rule out.

It would be like an automated subway train, but a personal one and without AI for driving.

Who said it would be personal? ;-)

It could have an AI driving, however: airplanes already have one, despite the pilots and control towers.

@KOLANICH commented Nov 14, 2018

no robot can impose any set of moral values on all of us,

It's not the robot that imposes its own immoral values on all of us. It is politicians who want to regulate everything for their own benefit. And to me it looks very probable that the "moral machine" is just a psyop to persuade people that they want themselves and their friends and relatives to be killed by their own car in order to save some politician's dumb child who has jumped into the road.

I really doubt that anyone is going to buy a car that has a sign "it kills its passengers in order to save pedestrians" on a free market where other kinds of cars exist.

@Shamar (Author) commented Nov 14, 2018

no robot can impose any set of moral values on all of us,

It's not the robot that imposes its own immoral values on all of us. It is politicians...

First of all, let me remind you that the same companies and organizations that pretend to be libertarian, and that promote weak regulations and weak government control worldwide while hand-waving about freedom, are prone to very strict control at home. Thinking that the free market is actually free is... a bit naive.

But what I meant is that, if we accept that robots can make moral decisions for us (whatever the decision), we let those who program such robots impose a specific ethics of their choice on the whole society, even beyond the national boundaries they work within. Thus robots would become Trojan horses for the other countries, leveling their cultures.

This is because humans inherently adapt to the environment faster than robots do.
Thus it's probable that we will adapt to robots' behaviour before the robots "adapt" to us.

I really doubt that anyone is going to buy a car that has a sign "it kills its passengers in order to save pedestrians" on a free market where other kinds of cars exist.

This is a problem for the marketing office.
As for me, I just care about their safety and their utility to people. In that order.

@KOLANICH commented Nov 14, 2018

let me remind you that the same companies and organizations that pretend to be libertarian, and that promote weak regulations and weak government control worldwide while hand-waving about freedom, are prone to very strict control at home.

The free market is not about companies; it's about your rights. You have the right to buy, on a free market, a car which will obey only your orders. If someone prohibits this kind of car, he violates your rights.

if we accept that robots can make moral decisions for us

I don't; that's why I wrote:

the owner has to explicitly program that

The owner here is the one who bought the car: not the manufacturer, not the government, not a cybercriminal who got unauthorized access. And in this case the owner will answer before a court if his car kills someone because he programmed it to save the people he values more, as if he had been driving the car himself.

A machine cannot hold liability, so it cannot be moral. There is always a human behind any machine.

@Shamar (Author) commented Nov 14, 2018

A machine cannot hold liability, so it cannot be moral. There is always a human behind any machine.

Not sure about the rest, but on this I totally agree.
