# Data Ethics Club meeting [24-04-24, 1pm UK time][timedate]

<!--
TODO:
- [ ] Change to a new branch (DD-MM-YY_meeting)
- [ ] Copy this template to meetings/YEAR/DD-MM-YY_meeting.md (put in actual year + date)
- [ ] Put in the Event time on: https://www.timeanddate.com/worldclock/fixedform.html and copy result to LINK-TO-TIMEDATE
- [ ] Change all ALL-CAPS placeholders in this form
- [ ] Update the hyperlinks at the bottom of the template
- [ ] Add link to the new file in meetings.md
- [ ] Update the next-meeting.md file
- [ ] Pull request!
- [ ] Create or edit the calendar invite to copy and paste this info over and send it/send an update.
- [ ] Maybe tweet it? #DataEthicsClub @jgiBristol
Repeat meeting link is currently: https://bristol-ac-uk.zoom.us/j/94475153265
Usual time 13:00-14:00
-->
## Meeting info

### Quick links

[Zoom link][zoom]

Link to content: [Artificial Intelligence Act: MEPs adopt landmark law][content]

### Description
You're welcome to join us for our next Data Ethics Club meeting on [24th April at 1pm UK time][timedate].
You don't need to register, just pop in.
This time we're going to watch/read [Artificial Intelligence Act: MEPs adopt landmark law][content], which is a press release from the European Parliament.

Thank you to Huw Day for suggesting this week's content and writing the summary below. The article itself is very short, so it's worth reading in full if you have time!

### Summary

The European Parliament has recently approved the Artificial Intelligence Act with 523 votes in favour, 46 against and 49 abstentions.
“It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.”
The act bans applications that threaten citizen rights including:
- biometric categorisation systems based on sensitive characteristics
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- emotion recognition in the workplace and schools
- social scoring
- predictive policing (when it is based solely on profiling a person or assessing their characteristics)
- AI that manipulates human behaviour or exploits people’s vulnerabilities

The use of biometric identification systems by law enforcement is prohibited in principle, except in certain narrowly defined situations where their use is limited in time and geographic scope and subject to specific, case-by-case judicial or administrative authorisation (e.g. a targeted search for a missing person or preventing a terrorist attack). Using such tools retrospectively is considered a high-risk use case.

Other high-risk use cases include “critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).”

Whilst these systems “are required to assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight”, the press release does not go into detail about how this will work in practice. The article also states: “Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights”. The press release does not detail how complaint submission would work in practice either.

“General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.”

The article does not currently outline the penalties for violating the act.

### Discussion points

There will be time to talk about whatever we like, relating to the content, but here are some specific questions to think about while you're reading.
- How do you feel about the ban on applications that threaten citizens' rights, and the exceptions for law enforcement?
- How do you feel about the list of high-risk use cases? Is the list incomplete? What would you add? Is there anything listed that you think isn't high-risk?
- How do you feel about the transparency requirements for General-purpose AI (GPAI) systems?

---

<!--
## Meeting notes
### Who came
Number of people:
### What did we think?
Notes here!
Shall we email the author? If so, who'll send the email?
-->

[timedate]: https://www.timeanddate.com/worldclock/fixedtime.html?msg=Data+Ethics+Club&iso=20240424T13&p1=299&ah=1
[content]: https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
[zoom]: https://bristol-ac-uk.zoom.us/j/94475153265
