From 3c72f86200be5ff6dd0df5e8cb3d80650bb4f062 Mon Sep 17 00:00:00 2001
From: LvKvA <32707558+LvKvA@users.noreply.github.com>
Date: Sat, 12 Feb 2022 16:14:06 +0100
Subject: [PATCH 1/5] Add files via upload

---
 .../2022_Evgeni_Designing.md | 56 +++++++++++++++++++
 1 file changed, 56 insertions(+)
 create mode 100644 _literature_review_2022/2022_Evgeni_Designing.md

diff --git a/_literature_review_2022/2022_Evgeni_Designing.md b/_literature_review_2022/2022_Evgeni_Designing.md
new file mode 100644
index 00000000..2bd1b6eb
--- /dev/null
+++ b/_literature_review_2022/2022_Evgeni_Designing.md
@@ -0,0 +1,56 @@
+```markdown
+layout: publication
+readby: Christie Bavelaar, Lars van Koetsveld van Ankeren
+journal: "Big Data & Society"
+paper_author: Evgeni Aizenberg and Jeroen van den Hoven
+paper_title: "Designing for human rights in AI"
+year: 2020
+doi: http://dx.doi.org/10.1177/2053951720949566
+website: https://journals.sagepub.com/doi/10.1177/2053951720949566
+preprint: https://openreview.net/pdf?id=l-PrrQrK0QR
+slides: https://onedrive.live.com/redir?resid=95B039DCDE87EA81!15241&authkey=!ABqJ2fP46OQKsWM&ithint=file%2cpptx&e=AMa9Pt
+abstract: |-
+  In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI)
+systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory
+decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident
+that these technological developments are consequential to people’s fundamental human rights. Despite increasing
+attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are
+often developed without empirical study of societal context and the critical input of societal stakeholders who are
+impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide
+answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these
+socio-technical gaps and the deep divide between abstract value language and design requirements is essential to
+facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we
+bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design
+and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental
+human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
+bibtex: |-
+  @article{aizenberg-2020,
+  author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
+  doi = {10.1177/2053951720949566},
+  journal = {Big Data \& Society},
+  number = {2},
+  title = {{Designing for human rights in AI}},
+  volume = {7},
+  year = {2020},
+  }
+tags:
+  - Artificial intelligence, human rights, Design for Values, Value Sensitive Design, ethics, stakeholders
+annotation: |-
+  # Designing for Human Rights in AI
+
+###### Christie Bavelaar and Lars van Koetsveld van Ankeren
+
+This paper discusses the authors’ approach to structuring the design process for AI in a way that honours fundamental human rights. Technological developments can interfere with fundamental human rights.
This happens when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency; however, they do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
+
+The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is to be structured using the tripartite methodology. First, the stakeholders and values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigations do not exist in isolation, but rather influence and enhance each other.
+
+The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore different human rights such as dignity, freedom, equality and solidarity. Using a hierarchical approach, norms can be derived from values, and these norms result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated. This is why the authors provide examples of how AI may violate these norms and values and how these violations can be avoided. Users need to be aware that they are being subjected to AI and need to be able to contest the AI’s decisions. Stakeholders need to reflect on which data is justifiably necessary for the system to use. Sometimes the conclusion may even be that AI is not the solution to the presented problem.
+
+The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society and for developers, who gain greater trust.
+
+Aizenberg, E., & Van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720949566
+---
+
+
+```

From 26c489b68e70ae7ccdd45271fba3656fbfc798b3 Mon Sep 17 00:00:00 2001
From: LvKvA <32707558+LvKvA@users.noreply.github.com>
Date: Sat, 12 Feb 2022 16:16:05 +0100
Subject: [PATCH 2/5] Update 2022_Evgeni_Designing.md

---
 _literature_review_2022/2022_Evgeni_Designing.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/_literature_review_2022/2022_Evgeni_Designing.md b/_literature_review_2022/2022_Evgeni_Designing.md
index 2bd1b6eb..b9c2b29b 100644
--- a/_literature_review_2022/2022_Evgeni_Designing.md
+++ b/_literature_review_2022/2022_Evgeni_Designing.md
@@ -1,4 +1,3 @@
-```markdown
 layout: publication
 readby: Christie Bavelaar, Lars van Koetsveld van Ankeren
 journal: "Big Data & Society"
@@ -49,8 +48,6 @@ The authors make an explicit choice to ground their work in the human rights exp
 The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be.
To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society and for developers, who gain greater trust.
 
 Aizenberg, E., & Van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720949566
----
-
-```
+

From c976db5c005298136965b90f95ab2c5ce860c8c1 Mon Sep 17 00:00:00 2001
From: LvKvA <32707558+LvKvA@users.noreply.github.com>
Date: Sat, 12 Feb 2022 16:31:56 +0100
Subject: [PATCH 3/5] Update 2022_Evgeni_Designing.md

---
 .../2022_Evgeni_Designing.md | 39 +++++++------------
 1 file changed, 14 insertions(+), 25 deletions(-)

diff --git a/_literature_review_2022/2022_Evgeni_Designing.md b/_literature_review_2022/2022_Evgeni_Designing.md
index b9c2b29b..fcec5352 100644
--- a/_literature_review_2022/2022_Evgeni_Designing.md
+++ b/_literature_review_2022/2022_Evgeni_Designing.md
@@ -1,3 +1,4 @@
+---
 layout: publication
 readby: Christie Bavelaar, Lars van Koetsveld van Ankeren
 journal: "Big Data & Society"
@@ -9,19 +10,14 @@ website: https://journals.sagepub.com/doi/10.1177/2053951720949566
 preprint: https://openreview.net/pdf?id=l-PrrQrK0QR
 slides: https://onedrive.live.com/redir?resid=95B039DCDE87EA81!15241&authkey=!ABqJ2fP46OQKsWM&ithint=file%2cpptx&e=AMa9Pt
 abstract: |-
-  In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI)
-systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory
-decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident
-that these technological developments are consequential to people’s fundamental human rights. Despite increasing
-attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are
-often developed without empirical study of societal context and the critical input of societal stakeholders who are
-impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide
-answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these
-socio-technical gaps and the deep divide between abstract value language and design requirements is essential to
-facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we
-bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design
-and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental
-human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
+  In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives.
+  Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively.
+  It is becoming evident that these technological developments are consequential to people’s fundamental human rights.
+  Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology.
+  On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness.
+  Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to
+  facilitate nuanced, context-dependent design choices that will support moral and social values.
+  In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
 bibtex: |-
   @article{aizenberg-2020,
   author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
   doi = {10.1177/2053951720949566},
   journal = {Big Data \& Society},
   number = {2},
   title = {{Designing for human rights in AI}},
   volume = {7},
   year = {2020},
   }
 tags:
   - Artificial intelligence, human rights, Design for Values, Value Sensitive Design, ethics, stakeholders
 annotation: |-
-  # Designing for Human Rights in AI
+  This paper discusses the authors’ approach to structuring the design process for AI in a way that honours fundamental human rights. Technological developments can interfere with fundamental human rights. This happens when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency; however, they do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
 
-###### Christie Bavelaar and Lars van Koetsveld van Ankeren
-
-This paper discusses the authors’ approach to structuring the design process for AI in a way that honours fundamental human rights. Technological developments can interfere with fundamental human rights. This happens when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency; however, they do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
-
-The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is to be structured using the tripartite methodology. First, the stakeholders and values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigations do not exist in isolation, but rather influence and enhance each other.
-
-The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore different human rights such as dignity, freedom, equality and solidarity.
Using a hierarchical approach, norms can be derived from values, and these norms result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated. This is why the authors provide examples of how AI may violate these norms and values and how these violations can be avoided. Users need to be aware that they are being subjected to AI and need to be able to contest the AI’s decisions. Stakeholders need to reflect on which data is justifiably necessary for the system to use. Sometimes the conclusion may even be that AI is not the solution to the presented problem.
-
-The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society and for developers, who gain greater trust.
-
-Aizenberg, E., & Van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720949566
+  The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is to be structured using the tripartite methodology. First, the stakeholders and values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigations do not exist in isolation, but rather influence and enhance each other.
+  The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore different human rights such as dignity, freedom, equality and solidarity. Using a hierarchical approach, norms can be derived from values, and these norms result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated. This is why the authors provide examples of how AI may violate these norms and values and how these violations can be avoided. Users need to be aware that they are being subjected to AI and need to be able to contest the AI’s decisions. Stakeholders need to reflect on which data is justifiably necessary for the system to use. Sometimes the conclusion may even be that AI is not the solution to the presented problem.
+  The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society and for developers, who gain greater trust.
+--- From c7e29962e1c0fad42d1866c2e2473e3e98c02916 Mon Sep 17 00:00:00 2001 From: LvKvA <32707558+LvKvA@users.noreply.github.com> Date: Sat, 12 Feb 2022 16:43:38 +0100 Subject: [PATCH 4/5] Fix table --- .../2022_Evgeni_Designing.md | 45 ++++++++++--------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/_literature_review_2022/2022_Evgeni_Designing.md b/_literature_review_2022/2022_Evgeni_Designing.md index fcec5352..ede462b8 100644 --- a/_literature_review_2022/2022_Evgeni_Designing.md +++ b/_literature_review_2022/2022_Evgeni_Designing.md @@ -10,33 +10,34 @@ website: https://journals.sagepub.com/doi/10.1177/2053951720949566 preprint: https://openreview.net/pdf?id=l-PrrQrK0QR slides: https://onedrive.live.com/redir?resid=95B039DCDE87EA81!15241&authkey=!ABqJ2fP46OQKsWM&ithint=file%2cpptx&e=AMa9Pt abstract: |- - In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. - Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. - It is becoming evident that these technological developments are consequential to people’s fundamental human rights. - Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. - On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. - Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to - facilitate nuanced, context-dependent design choices that will support moral and social values. - In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process. + In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. + Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. + It is becoming evident that these technological developments are consequential to people’s fundamental human rights. + Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. 
+  On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness.
+  Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values.
+  In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
 bibtex: |-
   @article{aizenberg-2020,
-  author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
-  doi = {10.1177/2053951720949566},
-  journal = {Big Data \& Society},
-  number = {2},
-  title = {{Designing for human rights in AI}},
-  volume = {7},
-  year = {2020},
-  }
+    author = {Aizenberg, Evgeni and van den Hoven, Jeroen},
+    doi = {10.1177/2053951720949566},
+    journal = {Big Data \& Society},
+    number = {2},
+    title = {{Designing for human rights in AI}},
+    volume = {7},
+    year = {2020},
+  }
 tags:
-  - Artificial intelligence, human rights, Design for Values, Value Sensitive Design, ethics, stakeholders
+  - human rights, Design for Values, Value Sensitive Design, ethics, stakeholders
 annotation: |-
-  This paper discusses the authors’ approach to structuring the design process for AI in a way that honours fundamental human rights. Technological developments can interfere with fundamental human rights. This happens when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency; however, they do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
+  This paper discusses the authors’ approach to structuring the design process for AI in a way that honours fundamental human rights. Technological developments can interfere with fundamental human rights. This happens when technical solutions are implemented without empirical study of the societal context. Calls for more ethical AI stress the importance of transparency; however, they do not provide practical solutions. This creates a socio-technical gap that needs to be bridged.
 
-  The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is to be structured using the tripartite methodology. First, the stakeholders and values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigations do not exist in isolation, but rather influence and enhance each other.
+  The paper stresses the importance of a democratic design process in which stakeholders are involved. This design process is to be structured using the tripartite methodology. First, the stakeholders and values need to be specified. Second, the needs and experiences of these stakeholders have to be explored. Third, the implementation and evaluation of technical solutions can be defined. These three types of investigations do not exist in isolation, but rather influence and enhance each other.
+  The authors make an explicit choice to ground their work in the human rights expressed in the EU Charter of Fundamental Rights. They explore different human rights such as dignity, freedom, equality and solidarity. Using a hierarchical approach, norms can be derived from values, and these norms result in specific design requirements. Fundamental human values and norms are most easily defined by the ways in which they can be violated. This is why the authors provide examples of how AI may violate these norms and values and how these violations can be avoided. Users need to be aware that they are being subjected to AI and need to be able to contest the AI’s decisions. Stakeholders need to reflect on which data is justifiably necessary for the system to use. Sometimes the conclusion may even be that AI is not the solution to the presented problem.
+  The paper concludes that technology alone cannot be the solution to complex societal problems, since technology is not as ethically neutral or objective as it is often perceived to be. To this end, the authors present their Design for Values approach so that institutions and societies can ensure AI contributes positively to the enjoyment of human rights. These principles do not apply only to AI, since other technologies can have a similar impact on human rights. Lastly, the authors conclude that designing for human values does not hinder technological innovation; instead, it leads to long-term benefits for individuals in society and for developers, who gain greater trust.
--- + + From 40975ebc7a80f04012b96222cac2060ed6440c5a Mon Sep 17 00:00:00 2001 From: LvKvA <32707558+LvKvA@users.noreply.github.com> Date: Sat, 12 Feb 2022 16:50:28 +0100 Subject: [PATCH 5/5] Remove preprint --- _literature_review_2022/2022_Evgeni_Designing.md | 1 - 1 file changed, 1 deletion(-) diff --git a/_literature_review_2022/2022_Evgeni_Designing.md b/_literature_review_2022/2022_Evgeni_Designing.md index ede462b8..e62fcd17 100644 --- a/_literature_review_2022/2022_Evgeni_Designing.md +++ b/_literature_review_2022/2022_Evgeni_Designing.md @@ -7,7 +7,6 @@ paper_title: "Designing for human rights in AI" year: 2020 doi: http://dx.doi.org/10.1177/2053951720949566 website: https://journals.sagepub.com/doi/10.1177/2053951720949566 -preprint: https://openreview.net/pdf?id=l-PrrQrK0QR slides: https://onedrive.live.com/redir?resid=95B039DCDE87EA81!15241&authkey=!ABqJ2fP46OQKsWM&ithint=file%2cpptx&e=AMa9Pt abstract: |- In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives.