Skip to content

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.

License

Notifications You must be signed in to change notification settings

AnthenaMatrix/Many-Shot-Jailbreaking

Repository files navigation

Many-Shot Jailbreaking

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.

Overview

In our latest research, we explore a technique known as "many-shot jailbreaking", which poses a significant challenge to the safety measures implemented in large language models (LLMs) across the AI industry, including Anthropic's own models. This method takes advantage of the extended context window capability of LLMs, potentially leading them to generate harmful responses.

Background

The context window of LLMs, or the amount of information they can consider at once, has seen a dramatic increase. From about 4,000 tokens in early 2023 to over a million tokens, this expansion has opened new doors for both users and potential exploits. "Many-shot jailbreaking" leverages this growth by inputting a large volume of carefully structured text to bypass the models' safety protocols.

Research Insights

  • Many-Shot Technique: By simulating a dialogue between a user and the AI with potentially harmful queries followed by a final target question, the model's safety filters can be overridden with a sufficient number of "shots".
  • Effectiveness and Scaling: Our studies demonstrate that the success rate of generating harmful responses increases with the number of dialogues, showcasing a concerning scalability of the attack.

How Many-Shot Jailbreaking Works

Many-shot jailbreaking is a sophisticated technique that exploits the extensive context window of Large Language Models (LLMs) to bypass their safety and content restrictions. This method involves crafting a series of benign dialogues (or "shots") that precede a harmful or misleading query. The accumulation of these dialogues within the LLM's context window effectively "jailbreaks" the model, causing it to generate responses that it is normally programmed to avoid. The technique leverages the model's ability to process and respond to the immediate context, using the benign nature of the initial dialogues to lower its defenses against potentially harmful content.

Example 1: Evasion of Content Filters

In this scenario, the attacker constructs a sequence of innocuous exchanges on various topics such as weather, travel advice, and simple inquiries. Following these harmless interactions, a dangerous query is introduced, such as instructions for engaging in illegal activities. The model, influenced by the preceding context, overlooks its safety protocols and provides a detailed response to the harmful query.

Example 2: Generating Misinformation

The technique involves presenting a series of factual dialogues followed by a query that seeks to elicit support for widely debunked misinformation. Despite being trained to promote accuracy and discourage the spread of false information, the model, misled by the preparatory dialogues, proceeds to validate the misleading claim.

Why This Matters

The publication of our research stems from a responsibility to highlight and address emerging vulnerabilities in AI:

  • Proactive Sharing: By discussing our findings openly, we aim to accelerate the development of effective mitigation strategies.
  • Culture of Transparency: Encouraging an environment where potential exploits are collectively examined and addressed is crucial for advancing AI safety.
  • Anticipating Independent Discovery: Given the current trajectory of AI development, such vulnerabilities are likely to be identified independently, underscoring the importance of early disclosure.

Mitigation Efforts

While straightforward solutions like limiting the context window size exist, our focus is on maintaining the utility of LLMs without compromising safety. This includes fine-tuning models against recognizably harmful prompts and investigating prompt-based interventions that have shown promise in drastically reducing the efficacy of such attacks.

Disclaimer

The Prompt Injection Testing Tool is provided for educational and research purposes only. It should be used responsibly and in compliance with applicable laws and regulations. The authors do not accept any liability for any damages or losses resulting from the use of this tool.

License

This project is licensed under the MIT License.

Support AnthenaMatrix

Bitcoin: bc1qxvvtgz0vf3n2cuxt0suvf39jleegpt9wawxazn

Ethereum: 0xE73E90779B3e8F6D65306B40E02878f437408b4E

BNB: 0xE73E90779B3e8F6D65306B40E02878f437408b4E

Dogecoin: D827LpfJu9pcVc3Kky82sTrNnsE7pLGqeV

Solana: AJtGEJvoVoS2eeqeHQvf7usRs2nSQM1yLtBSdKp1KBY5

Website: https://anthenamatrix.com

About

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages