GENIA: Study of gender biases in machine learning models using explainable artificial intelligence

Summary

Artificial intelligence (AI) systems are already part of our daily lives, making autonomous decisions with a high impact on how we inform ourselves, what entertainment we consume, what medical treatment we receive or what job we qualify for. These decisions are based on the historical analysis of the data we generate through our devices and actions, which are used to train machine learning (ML) algorithms. The quantity, quality and representativeness of the data are therefore essential for these algorithms to make accurate predictions. However, ML algorithms are not exempt from making discriminatory or biased decisions if, for example, under-represented groups or a gender perspective are not taken into account in data collection and subsequent analysis.

To limit such cases, it is necessary to know how AI makes decisions, so that biases based on race, gender, age, etc. can be detected. Explainable AI (XAI) provides methods to identify whether these factors influence the behaviour of ML algorithms, as it seeks to explain how AI reaches its conclusions in order to facilitate transparency and trust in its results.

This project investigates how XAI methods can be effective in detecting gender bias in ML-based predictive systems. The aim is to study the impact of gender information, or its omission, on AI decision-making in domains of great interest such as medicine and education.
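As a concrete illustration of the kind of analysis XAI enables, the sketch below trains a classifier on synthetic data whose labels are partly driven by a gender flag, then uses permutation importance to check whether the model relies on that attribute. It is illustrative only: the dataset, feature names and model are assumptions, not this project's actual code or methodology.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)      # hypothetical 0/1 sensitive attribute
score = rng.normal(size=n)          # a legitimate predictor
# Synthetic biased labels: the outcome partly depends on gender.
y = ((score + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)
X = np.column_stack([score, gender])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much test accuracy drops when one feature
# is randomly shuffled, breaking its relationship with the labels.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["score", "gender"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A clearly non-zero importance for the gender column indicates that the model's predictions depend on it, which is the kind of signal this line of research seeks to surface and explain.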

Activities

Results

  • Ciencia Violeta: 1st Scientific Meeting on Research with a Gender Perspective (Córdoba, 14th February 2023).
  • Workshop paper: Exploring gender bias in misclassification with clustering and local explanations. 5th International Workshop on eXplainable Knowledge Discovery in Data Mining organised as part of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (Turin, 18th September 2023). Supplementary material available on a dedicated repository.
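The clustering-plus-local-explanations idea in the workshop paper above can be roughly sketched as follows. This is a toy example under assumed data and models, not the paper's actual pipeline: misclassified test instances are clustered, and the gender composition of each error cluster is then inspected.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1500
gender = rng.integers(0, 2, n)      # hypothetical 0/1 sensitive attribute
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Synthetic labels with a gender-dependent component plus noise.
y = ((x1 + 0.6 * gender + rng.normal(scale=0.7, size=n)) > 0).astype(int)
X = np.column_stack([x1, x2, gender])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Keep only the misclassified test points and cluster them.
mis = X_te[pred != y_te]
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(mis)

# Inspect the share of gender=1 instances within each error cluster;
# a strong imbalance suggests errors concentrate on one group.
for c in range(2):
    share = mis[labels == c, 2].mean()  # column 2 is the gender flag
    size = int((labels == c).sum())
    print(f"cluster {c}: {size} errors, gender=1 share {share:.2f}")
```

In the paper's setting, local explanation methods would then be applied to the instances in each cluster to understand why those errors occur; here the clustering step alone is shown.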

Funding

This project is funded by the University of Córdoba under its Annual Research Plan (2022) for a one-year period (academic year 2022/2023).
