🎓💖 Feedback: Facilitator Training 29th June #131
Use this issue to send us feedback, and comment with your suggestions for new safety precautions or examples for a Hazard label of your choice :).
We can then make your suggested changes and add you to our contributors list!
Please tell us:

Comments
Lacks community involvement: suggesting a new Safety Precaution -
Ranks Or Classifies People:
I would love for there to be an AI Hype label with the drawing of a toaster! We hear all too often about new AI systems being "sentient", "intelligent" and so on. In the meantime, the vast majority of us don't understand the underlying mechanics of these systems. If we did, I would imagine the hype would be mitigated.
A suggestion for the Examples for "Difficult to understand" (https://datahazards.com/contents/hazards/difficult-to-understand.html): the advanced use of spreadsheets with complicated formulae, macros, VB, and multiple tabs with links. It often hides algorithms and assumptions.
I would suggest a change to the May Cause Direct Harm label; as we see in the news, there have been many examples that apply to this label.
A suggestion for the Examples for "General Hazard" (https://datahazards.com/contents/hazards/general-hazard.html): issues around data collection/study design.
Automates decision making. Safety precaution: make sure people have an easy, accessible and timely way to challenge the automated decision making, and/or to avoid having it applied in the first place.
[Lacks community involvement] + [Classifies and ranks people]: the [Lacks community involvement] label mentions that "technology is being produced without input from the community it is supposed to serve." A lot of the time there is a bias in prioritising some communities over others, including human species over non-human species. Example: research on the welfare of horses and riders puts people over horses (is it supposed to serve horses too?); if the human dies, the problem is deemed bigger. This relates to the [Classifies and ranks people] hazard, which does not itself consider that some parts of nature are being ranked over others. What happens when ranking is inaccurate? Is speciesism inaccurate? This hazard relies on an ethical framework that is unclear; in the same way, someone might not agree that something is speciesist, or that something is sexist.
A suggestion for the "Difficult to understand" hazard:
The [Reinforces existing bias] hazard would benefit from branching out into more specific sub-hazards. What kind of bias is being reinforced? Racist? Sexist? Speciesist?
Danger of misuse:
Example 1: misinterpreting statistical methods or failing to appreciate their limitations, for example assuming that psychometrics will accurately or definitively predict future human behaviour.
Example 2: re-appropriation of data for unintended purposes, for example data collected for medical purposes being used for insurance adjustment.
Explicitly setting out assumptions, and any key research being relied upon for the project. This can help the 'project owner' (project designer?) explain to the 'audience' (stakeholder representatives) why the project is designed this way: how it builds on existing work, and what we should learn or achieve from the project.
Have added all of these suggestions to the V1.0 update in PR #169 :)