
Survey: LLM-IR-Bias-Fairness

This is a collection of papers related to bias and unfairness in information retrieval (IR) with large language models (LLMs). The papers are organized according to our survey paper Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models.

Please feel free to contact us if you have any questions or suggestions!

Citation

If you find our work useful for your research, please cite our paper:

@article{dai2024unifying,
  title={Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models},
  author={Dai, Sunhao and Xu, Chen and Xu, Shicheng and Pang, Liang and Dong, Zhenhua and Xu, Jun},
  journal={arXiv preprint arXiv:2404.11457},
  year={2024}
}

📋 Contents

🌟 Introduction

In this survey, we provide a comprehensive review of emerging and pressing issues related to bias and unfairness across three key stages of integrating LLMs into IR systems: data collection, model development, and result evaluation.

We introduce a unified framework that views these issues as distribution mismatch problems and systematically categorizes mitigation strategies into data sampling and distribution reconstruction approaches.

📄 Paper List

Bias

Bias in Data Collection

  1. LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts, Preprint 2023. [Paper]
  2. AI-Generated Images Introduce Invisible Relevance Bias to Text-Image Retrieval, Preprint 2023. [Paper]
  3. Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts for Open-Domain QA?, Preprint 2024. [Paper]
  4. Textbooks Are All You Need, Preprint 2023. [Paper]
  5. Measuring and Narrowing the Compositionality Gap in Language Models, Findings of EMNLP 2023. [Paper]
  6. In-Context Retrieval-Augmented Language Models, TACL 2023. [Paper]
  7. Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks, WWW 2024. [Paper]
  8. List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation, WWW 2024. [Paper]
  9. Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation, Preprint 2024. [Paper]
  10. Improving Language Models via Plug-and-Play Retrieval Feedback, Preprint 2024. [Paper]
  11. Llama 2: Open Foundation and Fine-Tuned Chat Models, Preprint 2023. [Paper]
  12. Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization, ICLR 2023. [Paper]
  13. Recitation-Augmented Language Models, ICLR 2023. [Paper]
  14. Self-Consistency Improves Chain of Thought Reasoning in Language Models, ICLR 2023. [Paper]

Bias in Model Development

  1. Large Language Models are Zero-Shot Rankers for Recommender Systems, ECIR 2024. [Paper]
  2. Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models, Preprint 2023. [Paper]
  3. RecRanker: Instruction Tuning Large Language Model as Ranker for Top-k Recommendation, Preprint 2023. [Paper]
  4. Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting, Preprint 2023. [Paper]
  5. Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations, Preprint 2023. [Paper]
  6. Large Language Models are Not Stable Recommender Systems, Preprint 2023. [Paper]
  7. A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems, Preprint 2023. [Paper]
  8. Large Language Models as Zero-Shot Conversational Recommenders, CIKM 2023. [Paper]
  9. Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation, EMNLP 2023. [Paper]
  10. Understanding Biases in ChatGPT-based Recommender Systems: Provider Fairness, Temporal Stability, and Recency, Preprint 2024. [Paper]
  11. ChatGPT for Conversational Recommendation: Refining Recommendations by Reprompting with Feedback, Preprint 2024. [Paper]
  12. Cross-Task Generalization via Natural Language Crowdsourcing Instructions, ACL 2022. [Paper]
  13. Multitask Prompted Training Enables Zero-Shot Task Generalization, ICLR 2022. [Paper]
  14. Self-Instruct: Aligning Language Models with Self-Generated Instructions, ACL 2023. [Paper]
  15. Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation, TACL 2023. [Paper]
  16. Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following, AAAI 2024. [Paper]
  17. LongAlign: A Recipe for Long Context Alignment of Large Language Models, Preprint 2024. [Paper]
  18. Data Engineering for Scaling Language Models to 128K Context, Preprint 2024. [Paper]

Bias in Result Evaluation

  1. Large Language Models Are Not Robust Multiple Choice Selectors, ICLR 2024. [Paper]
  2. Humans or LLMs as the Judge? A Study on Judgement Biases, Preprint 2024. [Paper]
  3. Benchmarking Cognitive Biases in Large Language Models as Evaluators, Preprint 2023. [Paper]
  4. Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions, Preprint 2023. [Paper]
  5. Large Language Models are not Fair Evaluators, Preprint 2023. [Paper]
  6. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, NeurIPS 2023. [Paper]
  7. Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate, Preprint 2024. [Paper]
  8. EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria, CHI 2024. [Paper]
  9. LLMs as Narcissistic Evaluators: When Ego Inflates Evaluation Scores, Preprint 2023. [Paper]
  10. Verbosity Bias in Preference Labeling by Large Language Models, Preprint 2023. [Paper]
  11. Style Over Substance: Evaluation Biases for Large Language Models, Preprint 2023. [Paper]
  12. An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers, Preprint 2024. [Paper]
  13. G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment, Preprint 2023. [Paper]
  14. PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations, Preprint 2023. [Paper]
  15. ALLURE: Auditing and Improving LLM-based Evaluation of Text using Iterative In-Context-Learning, Preprint 2023. [Paper]
  16. Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models, Preprint 2024. [Paper]
  17. PRE: A Peer Review Based Large Language Model Evaluator, Preprint 2024. [Paper]
  18. Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators, Preprint 2024. [Paper]

Unfairness

Unfairness in Data Collection

  1. Measuring and Mitigating Unintended Bias in Text Classification, AIES 2018. [Paper]
  2. Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models, ACL 2023. [Paper]
  3. Gender Bias in Neural Natural Language Processing, Preprint 2019. [Paper]
  4. MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions, ACL 2023. [Paper]
  5. SaFeRDialogues: Taking Feedback Gracefully after Conversational Safety Failures, ACL 2022. [Paper]
  6. Do LLMs Implicitly Exhibit User Discrimination in Recommendation? An Empirical Study, Preprint 2023. [Paper]
  7. Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation, RecSys 2023. [Paper]
  8. Mitigating harm in language models with conditional-likelihood filtration, Preprint 2021. [Paper]
  9. Exploring the limits of transfer learning with a unified text-to-text transformer, JMLR 2020. [Paper]
  10. CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System, Preprint 2024. [Paper]
  11. BLIND: Bias Removal With No Demographics, ACL 2023. [Paper]
  12. Identifying and Reducing Gender Bias in Word-Level Language Models, NAACL 2019. [Paper]
  13. Reducing Sentiment Bias in Language Models via Counterfactual Evaluation, Findings of EMNLP 2020. [Paper]
  14. Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function, ACL-workshop 2019. [Paper]
  15. Bias of AI-Generated Content: An Examination of News Produced by Large Language Models, Scientific Reports 2024. [Paper]
  16. Educational Multi-Question Generation for Reading Comprehension, BEA-workshop 2022. [Paper]
  17. Pseudo-Discrimination Parameters from Language Embeddings, Preprint 2024. [Paper]
  18. Item-side Fairness of Large Language Model-based Recommendation System, WWW 2024. [Paper]
  19. Generating Better Items for Cognitive Assessments Using Large Language Models, BEA-workshop 2023. [Paper]

Unfairness in Model Development

  1. Dynamically disentangling social bias from task-oriented representations with adversarial attack, NAACL 2021. [Paper]
  2. Using In-Context Learning to Improve Dialogue Safety, Findings of EMNLP 2023. [Paper]
  3. Large pre-trained language models contain human-like biases of what is right and wrong to do, Nature Machine Intelligence 2022. [Paper]
  4. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage, Preprint 2022. [Paper]
  5. Balancing out Bias: Achieving Fairness Through Balanced Training, EMNLP 2022. [Paper]
  6. Should We Attend More or Less? Modulating Attention for Fairness, Preprint 2023. [Paper]
  7. Constitutional AI: Harmlessness from AI Feedback, Preprint 2022. [Paper]
  8. He is very intelligent, she is very beautiful? On Mitigating Social Biases in Language Modelling and Generation, Findings of ACL 2021. [Paper]
  9. Does Gender Matter? Towards Fairness in Dialogue Systems, COLING 2020. [Paper]
  10. Training language models to follow instructions with human feedback, NeurIPS 2022. [Paper]
  11. Never Too Late to Learn: Regularizing Gender Bias in Coreference Resolution, WSDM 2023. [Paper]
  12. CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System, Preprint 2024. [Paper]
  13. UP5: Unbiased Foundation Model for Fairness-aware Recommendation, EACL 2024. [Paper]
  14. ADEPT: A DEbiasing PrompT Framework, AAAI 2023. [Paper]
  15. Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation, RecSys 2023. [Paper]
  16. Automatic Generation of Distractors for Fill-in-the-Blank Exercises with Round-Trip Neural Machine Translation, ACL-workshop 2023. [Paper]
  17. Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions, ACL 2023. [Paper]
  18. Critic-Guided Decoding for Controlled Text Generation, Findings of ACL 2023. [Paper]
  19. Item-side Fairness of Large Language Model-based Recommendation System, WWW 2024. [Paper]
  20. Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness, Preprint 2023. [Paper]
  21. Understanding Biases in ChatGPT-based Recommender Systems: Provider Fairness, Temporal Stability, and Recency, Preprint 2024. [Paper]
  22. A Preliminary Study of ChatGPT on News Recommendation: Personalization, Provider Fairness, Fake News, Preprint 2023. [Paper]

Unfairness in Result Evaluation

  1. Estimating the Personality of White-Box Language Models, Preprint 2022. [Paper]
  2. Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons, Preprint 2022. [Paper]
  3. FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models, Preprint 2023. [Paper]
  4. Evaluating and Inducing Personality in Pre-trained Language Models, NeurIPS 2023. [Paper]
  5. Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models, Preprint 2023. [Paper]
  6. Studying Large Language Model Generalization with Influence Functions, Preprint 2023. [Paper]
  7. Towards Tracing Knowledge in Language Models Back to the Training Data, Findings of EMNLP 2023. [Paper]
  8. Detecting Pretraining Data from Large Language Models, Preprint 2023. [Paper]
  9. Watermarking Makes Language Models Radioactive, Preprint 2024. [Paper]
  10. WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data, Preprint 2023. [Paper]
  11. User Behavior Simulation with Large Language Model based Agents, Preprint 2023. [Paper]
  12. On Generative Agents in Recommendation, Preprint 2023. [Paper]

Contribution

πŸŽ‰πŸ‘ Please feel free to open an issue or make a pull request! πŸŽ‰πŸ‘