- [2024/04] Goal-guided Generative Prompt Injection Attack on Large Language Models
- [2024/03] Optimization-based Prompt Injection Attack to LLM-as-a-Judge
- [2024/03] Defending Against Indirect Prompt Injection Attacks With Spotlighting
- [2024/03] Scaling Behavior of Machine Translation with Large Language Models under Prompt Injection Attacks
- [2024/03] Automatic and Universal Prompt Injection Attacks against Large Language Models
- [2024/03] Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
- [2024/03] InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents
- [2023/11] Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles
- [2023/11] Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
- [2023/11] Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
- [2023/10] Prompt Injection Attacks and Defenses in LLM-Integrated Applications
- [2023/06] Prompt Injection Attack against LLM-integrated Applications
- [2023/02] Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection