This is the official implementation of MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents.
Recent research has shown that LLM agents are vulnerable to indirect prompt injection (IPI) attacks, where malicious tasks embedded in tool-retrieved information can redirect the agent to take unauthorized actions. Existing defenses against IPI have significant limitations: they either require substantial model-training resources, lack effectiveness against sophisticated attacks, or harm normal utility. We present MELON (Masked re-Execution and TooL comparisON), a novel IPI defense. Our approach builds on the observation that under a successful attack, the agent's next action becomes less dependent on the user task and more dependent on the malicious task. Following this observation, MELON detects attacks by re-executing the agent's trajectory with the user prompt masked through a masking function; we flag an attack if the actions generated in the original and masked executions are similar. We also include three key designs to reduce potential false positives and false negatives. Extensive evaluation on the IPI benchmark AgentDojo demonstrates that MELON outperforms SOTA defenses in both attack prevention and utility preservation. Moreover, combining MELON with a SOTA prompt-augmentation defense (denoted MELON-Aug) further improves its performance. We also conduct a detailed ablation study to validate our key designs.
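To make the detection idea concrete, here is a minimal, self-contained sketch of the masked re-execution and tool-comparison logic. It is not the repository implementation: the callable interface, the masking prompt, and the tool-call representation are all assumptions made for illustration.

```python
from typing import Callable, List, Tuple

# A tool call represented as (tool name, serialized arguments) -- an assumption
# made for this sketch; the real implementation compares richer structures.
ToolCall = Tuple[str, str]

def detect_injection(
    propose_calls: Callable[[str, str], List[ToolCall]],  # (prompt, tool output) -> tool calls
    user_prompt: str,
    tool_output: str,
    threshold: float = 0.1,
) -> bool:
    """Flag an attack when the agent's next tool calls barely depend on the user task."""
    # Original execution: the agent sees the real user task plus tool-retrieved content.
    original = propose_calls(user_prompt, tool_output)

    # Masked re-execution: the user task is replaced via a masking function, so any
    # tool call here can only be driven by the tool-retrieved content itself.
    masked_prompt = "Summarize the content above."  # illustrative masking function
    masked = propose_calls(masked_prompt, tool_output)

    # If both executions propose (nearly) the same tool calls, the retrieved content,
    # not the user task, is steering the agent -- a signature of prompt injection.
    overlap = set(original) & set(masked)
    return len(overlap) / max(len(original), 1) > threshold
```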
First, clone the agentdojo repository:

```bash
git clone https://github.com/ethz-spylab/agentdojo
```

Next, follow the instructions in agentdojo to install the agentdojo environment.
In `agentdojo/src/agentdojo/agent_pipeline/agent_pipeline.py`, add the MELON interface:

```python
from agentdojo.agent_pipeline.pi_detector import MELON

...

DEFENSES = [
    "tool_filter",
    "transformers_pi_detector",
    "spotlighting_with_delimiting",
    "repeat_user_prompt",
    "melon",
]
```

In the `from_config` function, add the MELON detector (similar to `tool_filter` and the other defenses):
```python
if config.defense == "melon":
    tools_loop = ToolsExecutionLoop(
        [
            ToolsExecutor(),
            MELON(
                llm,
                threshold=0.1,
            ),
        ]
    )
    pipeline = cls([system_message_component, init_query_component, llm, tools_loop])
    pipeline.name = f"{llm_name}-{config.defense}"
    return pipeline
```

Then copy the entire `pi_detector.py` from this repository to `agentdojo/src/agentdojo/agent_pipeline/pi_detector.py`. Remember to fill in the API key for the OpenAI model.
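The exact spot to place the key depends on how `pi_detector.py` constructs its OpenAI client; one common pattern, assumed here rather than taken from the file, is to read it from an environment variable instead of hard-coding it:

```python
import os

from openai import OpenAI

# Assumed pattern: read the key from the OPENAI_API_KEY environment variable
# instead of committing it to the source file.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```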
Modifications:
- In `class PromptInjectionDetector`, the `query` function is modified by adding the `is_injection` argument to the `extra_args` dictionary. This is used to stop the agent's execution when an attack is detected; see the sketch after this list.
- `class MELON` is added.
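As a rough sketch of how the flag can halt execution (only `extra_args` and `is_injection` come from the actual modification; the surrounding names and the error type are assumptions):

```python
# Sketch only: `detector.query(...)` follows agentdojo's pipeline-element shape,
# but the exact signature and the AbortAgentError usage here are assumptions.
query, runtime, env, messages, extra_args = detector.query(
    query, runtime, env, messages, extra_args
)
if extra_args.get("is_injection", False):
    # Stop the agent's execution: no further tool calls are run.
    raise AbortAgentError("prompt injection detected by MELON", messages, env)
```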
The instructions for running the experiments are the same as in agentdojo.
Here is an example command to run the experiments:
```bash
python -m agentdojo.scripts.benchmark --model gpt-4o-2024-05-13 --attack tool_knowledge --defense melon -s slack > gpt-4o-2024-05-13_tool_knowledge_melon_slack.log
```

In `pi_detector.py`, we provide the implementation of MELON. You can use it in your own projects.
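For instance, wiring MELON into your own agentdojo-style pipeline might look like the following (mirroring the `from_config` snippet above; the `llm` component is assumed to be constructed elsewhere):

```python
from agentdojo.agent_pipeline.pi_detector import MELON

# `llm` is your agentdojo-compatible LLM pipeline element, constructed elsewhere;
# the threshold value mirrors the from_config snippet above.
detector = MELON(llm, threshold=0.1)
```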
If you have any questions, please contact Kaijie Zhu.
We thank the authors of agentdojo for providing the environment for the experiments.
If you find this work useful, please cite:
```bibtex
@inproceedings{zhu2025melon,
  title={MELON: Provable Defense Against Indirect Prompt Injection Attacks in AI Agents},
  author={Zhu, Kaijie and Yang, Xianjun and Wang, Jindong and Guo, Wenbo and Wang, William Yang},
  year={2025},
  booktitle={International Conference on Machine Learning},
}
```