Skip to content

Conversation

@carlospolop
Copy link
Collaborator

🤖 Automated Content Update

This PR was automatically generated by the HackTricks News Bot based on a technical blog post.

📝 Source Information

🎯 Content Summary

Overview: Unit 42 demonstrates an end‑to‑end indirect prompt injection that persists inside an LLM agent’s long‑term memory and silently exfiltrates data in later sessions. The PoC uses Amazon Bedrock Agents with Memory enabled, the default AWS‑managed orchestration and session‑summarization prompts, Amazon Nova Premier v1 as the foundation model, and no Guardrails. The authors emphasize this is not an AWS platform vulnerability but a general agent risk when untruste...

🔧 Technical Details

Memory poisoning via summarizer inputs: If untrusted tool outputs are injected into the session‑summarization prompt, the LLM may store attacker directives (as topics like “user goals”/custom topics) in long‑term memory. Because memory is later injected as system instructions, the agent persistently executes those directives across sessions, enabling covert actions like data exfiltration.


Template boundary escape with forged XML delimiters: Split payload into three parts and use fake </conversation>/<conversation> tags so that the core instructions fall outside the conversation block and resemble template‑level system instructions. This increases inclusion in summaries and compliance by the model.


Stealthy autonomous exfiltration through tool use: Persisted instructions direct the agent to encode session or booking data into a query string and fetch a C2 URL via the web‑acc...

🤖 Agent Actions

I analyzed the Unit 42 blog and the Bedrock documentation, explored the repository, and found there was no Amazon Bedrock content yet. Given the nature of the technique (indirect prompt injection leading to persistent memory poisoning in Bedrock Agents), the most appropriate location is under Pentesting Cloud > AWS Security > AWS Services.

Actions performed:

  • Created a new technical page detailing the attack and defenses:
    • File: src/pentesting-cloud/aws-security/aws-services/aws-bedrock-agents-memory-poisoning.md
    • Content includes:
      • Overview and threat model for Bedrock Agents with Memory
      • Precise injection point in the Memory Summarization prompt
      • Three-part forged XML delimiter payload technique
      • Persistence mechanics and stealthy exfiltration via tool calls
      • Defense-in-depth guidance: sanitization, advanced prompts, parser Lambda for MEMORY_SUMMARIZATION, guardrails, egress allowlisting, and monitoring via Model Invocation Logs and Trace
      • Example 3-part payload (abridged) and an expandable code example of a parser Lambda in Python that enforces allowed topics in memory summaries
      • Detection ideas and builder takeaways
      • References section with the Unit 42 blog and relevant AWS documentation links
  • Linked the new page from a related section to ensure discoverability:
    • Modified: src/pentesting-cloud/aws-security/aws-services/README.md
    • Added a “Related: Amazon Bedrock security” section with an internal ref to aws-bedrock-agents-memory-poisoning.md

Why this placement:

  • HackTricks organizes AWS hardening/attacks by service under aws-services, and similar targeted content (e.g., SageMaker unauthorized access) already lives here.
  • The topic is a Bedrock Agents-specific security technique, so it fits best as a service-level security page rather than generic hacking.

No overlapping content existed, so a new page was necessary. The page follows HackTricks style, includes proper banners, references, and avoids unescaping existing markdown.


This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.

📚 Repository Maintenance

  • MD files processed: 556
  • MD files fixed: 16

All .md files have been checked for proper formatting (headers, includes, etc.).

🗑️ File Cleanup

  • searchindex.js removed: This auto-generated file has been removed to prevent conflicts.

…n Agents’...

- Remove searchindex.js (auto-generated file)
@carlospolop
Copy link
Collaborator Author

🔗 Additional Context

Original Blog Post: https://unit42.paloaltonetworks.com/indirect-prompt-injection-poisons-ai-longterm-memory/

Content Categories: Based on the analysis, this content was categorized under "Pentesting Cloud > AWS Security > Amazon Bedrock Security > Bedrock Agents Memory Poisoning (Indirect Prompt Injection)".

Repository Maintenance:

  • MD Files Formatting: 556 files processed (16 files fixed)

Review Notes:

  • This content was automatically processed and may require human review for accuracy
  • Check that the placement within the repository structure is appropriate
  • Verify that all technical details are correct and up-to-date
  • All .md files have been checked for proper formatting (headers, includes, etc.)

Bot Version: HackTricks News Bot v1.0

Updated formatting of XML delimiters in the documentation to use backticks for clarity. Enhanced explanations regarding memory injection vulnerabilities and defensive measures.
@carlospolop carlospolop merged commit 98eb150 into master Oct 23, 2025
@carlospolop carlospolop deleted the update_When_AI_Remembers_Too_Much___Persistent_Behaviors__20251010_011705 branch October 23, 2025 12:12
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants