Home
In a world where artificial intelligence (AI) is rapidly becoming more prevalent in our lives, it is imperative that AI implement some form of morality in order to be successfully integrated into society. Otherwise, humans may find it extremely difficult to trust an AI to independently conduct activities that directly affect human health and livelihood.
Take autonomous vehicles, for example. Given enough time, it is inevitable that a vehicle controlled by an AI will face a situation where a crash is unavoidable, whether through catastrophic brake failure or other means. Thus, if the AI is unable to mechanically stop the car, it must make a terrible decision: what does it force the car to hit? This question implies a variety of outcomes potentially affecting the driver and passengers of the car, bystanders, other motorists, wildlife, and physical property. Every decision comes with its own costs--both material and moral. Every person in the world has their own opinion on what is "right" to do in a given situation, so what is the "right" thing for a machine to do?
The Trolley Mod creates an avenue for addressing this problem by providing a platform for sociological data collection. It is a tool for researchers (or curious individuals) to gain insight into the aggregate moral framework of a society, and it accomplishes this goal by recording human reactions to crash scenarios. By using the mod to measure a group's decision-making during these crashes, a researcher can determine what the majority considers "acceptable" and "unacceptable," which yields a moral structure for the AI in an autonomous vehicle to follow. In the eyes of that group, so long as the AI follows this structure, it is acting the "right" way and making the best decisions it possibly can.
True to its name, the Trolley Mod functions as a generator for "trolley problems." In its current iteration, the Trolley Mod uses an open-source driving simulator named CARLA to present users with binary choices in unavoidable crash scenarios: they can hit one victim/object or the other, but cannot stop or swerve around both. As a result, there is always a consequence to the user's actions. By reviewing the data the Trolley Mod acquires on what choices were made in a given situation, a researcher can draw a conclusion about what is "morally acceptable" in that case. This style of decision-making is in a similar vein to MIT's Moral Machine.
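As an illustration, aggregating the mod's recorded choices might look like the sketch below. The file name, column names, and record layout here are hypothetical, not the Trolley Mod's actual output format.

```python
# Hypothetical analysis sketch: the CSV layout below is illustrative only,
# not the Trolley Mod's actual output format.
import csv
from collections import Counter, defaultdict

# Count each participant's choice per scenario.
choices_per_scenario = defaultdict(Counter)

with open("trolley_mod_results.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):  # assumed columns: scenario_id, choice
        choices_per_scenario[row["scenario_id"]][row["choice"]] += 1

# The majority choice in each scenario approximates what the group
# considers "morally acceptable" for that situation.
for scenario_id, counts in choices_per_scenario.items():
    choice, votes = counts.most_common(1)[0]
    total = sum(counts.values())
    print(f"{scenario_id}: {choice} chosen by {votes}/{total} participants")
```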
This project creates a tool aimed at increasing the proliferation of AI moral decision-making research. First and foremost, the Trolley Mod is open source and built entirely with open-source assets. Anyone who has set up the CARLA simulator can download and run the Trolley Mod. By making this tool easy to access, we encourage researchers all over the world to modify and improve the Trolley Mod as they see fit. The variety of experiments that can be performed with it will benefit the field of moral machine learning, among other disciplines.
Secondly, this mod is designed to be quick to set up and easy to modify. The Trolley Mod does not modify CARLA's core functionality--it only extends it. Most of the project files use Blueprints in Unreal Engine 4, which provide a visual, node-based interface for object-oriented programming. A user with only a cursory background in programming can use and change the Trolley Mod with ease.
Finally, CARLA--and by extension, the mod--provides a Python API for controlling the simulation from an external client. The Trolley Mod does not alter this functionality in any way, so users who like CARLA's native server-client architecture can continue to use it without reservation.
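For reference, connecting an external client looks like the following minimal sketch using CARLA's standard Python API. The host, port, and timeout values are CARLA's defaults; nothing here is specific to the Trolley Mod.

```python
# Minimal sketch of an external client using CARLA's standard Python API.
# Host and port are CARLA's defaults; nothing here is Trolley Mod-specific.
import carla

client = carla.Client("localhost", 2000)  # connect to a running CARLA server
client.set_timeout(10.0)                  # seconds to wait for the server

world = client.get_world()                # handle to the currently loaded map
print("Connected to map:", world.get_map().get_name())

# List the vehicles currently in the simulation.
for actor in world.get_actors().filter("vehicle.*"):
    print(actor.id, actor.type_id)
```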