Merge pull request #1 from LvKvA/main
Literature Review Christie & Lars
luiscruz committed Feb 13, 2022
2 parents 10a9a9a + 40975eb commit e2ef6c5
Showing 1 changed file with 42 additions and 0 deletions: _literature_review_2022/2022_Evgeni_Designing.md
---
layout: publication
readby: Christie Bavelaar, Lars van Koetsveld van Ankeren
journal: "Big Data & Society"
paper_author: Evgeni Aizenberg and Jeroen van den Hoven
paper_title: "Designing for human rights in AI"
year: 2020
doi: http://dx.doi.org/10.1177/2053951720949566
website: https://journals.sagepub.com/doi/10.1177/2053951720949566
slides: https://onedrive.live.com/redir?resid=95B039DCDE87EA81!15241&authkey=!ABqJ2fP46OQKsWM&ithint=file%2cpptx&e=AMa9Pt
abstract: |-
In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives.
Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively.
It is becoming evident that these technological developments are consequential to people’s fundamental human rights.
Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology.
On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness.
Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values.
In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
bibtex: |-
@article{aizenberg-2020,
author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
doi = {10.1177/2053951720949566},
journal = {Big Data \& Society},
number = {2},
title = {{Designing for human rights in AI}},
volume = {7},
year = {2020},
}
tags:
  - human rights
  - Design for Values
  - Value Sensitive Design
  - ethics
  - stakeholders
annotation: |-
This paper proposes a way to structure the design process for AI systems so that it honours fundamental human rights. Technological developments can interfere with fundamental human rights, particularly when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency but do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is structured using the tripartite methodology. First, the stakeholders and their values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigation do not exist in isolation, but rather influence and enhance each other.
The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore different human rights such as dignity, freedom, equality, and solidarity. Using a hierarchical approach, norms can be derived from values, and these norms result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated, which is why the authors provide examples of where AI may violate these norms and values and how such violations can be avoided. Users need to be aware that they are being subjected to AI and need to be able to contest the AI’s decisions. Stakeholders need to reflect on which data the system is justified in using. Sometimes the conclusion may even be that AI is not the solution to the presented problem.
The paper concludes that technology cannot by itself be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits both for individuals in society and for developers, who gain greater trust.
---

<!--mandatory fields: paper_title, readby, paper_author, journal, year, doi or preprint or arxiv, slides (if you have), abstract, annotation -->
