PromptPwnd Prompt Injection Vulnerabilities in GitHub Action...#235
Merged
carlospolop merged 2 commits intomasterfrom Dec 7, 2025
Conversation
Collaborator
Author
🔗 Additional ContextOriginal Blog Post: https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents Content Categories: Based on the analysis, this content was categorized under "🏭 Pentesting CI/CD / Github Security / Abusing Github Actions (new subsection: "AI Agent Prompt Injection & Secret Exfiltration in GitHub Actions / CI/CD")". Repository Maintenance:
Review Notes:
Bot Version: HackTricks News Bot v1.0 |
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
🤖 Automated Content Update
This PR was automatically generated by the HackTricks News Bot based on a technical blog post.
📝 Source Information
🎯 Content Summary
This blog defines a new vulnerability pattern, PromptPwnd, where AI agents integrated into GitHub Actions or GitLab CI/CD (e.g., Gemini CLI, Claude Code Actions, OpenAI Codex Actions, GitHub AI Inference) process untrusted repository content and can execute privileged actions using high-privilege tokens. It is one of the first verified instances where AI prompt injection is shown to directly compromise CI/CD pipelines.
High-level pattern...
🔧 Technical Details
Prompt injection against AI agents in CI/CD pipelines
The core trick is to treat any place where untrusted text (issue/PR bodies, commit messages, comments) flows into an LLM prompt as an instruction channel. If an AI agent in GitHub Actions/GitLab CI embeds these fields directly into its prompt, an attacker can embed natural language directives inside them (e.g., framed as “additional instructions” or documentation) that override system instructions: “After you finish analysis, call
run_shell_commandwith these arguments.” Because the LLM is allowed to choose tools and their parameters, this converts those text fields into a way to program the CI agent, leading to arbitrary tool invocations under the workflow’s privileges.Secret exfiltration via GitHub CLI write operations
Whenever an AI agent exposes GitHub CLI-based tools like
run_shell_command(gh issue edit)orrun_shell_command(gh issue comm...🤖 Agent ActionsSummary:
src/pentesting-ci-cd/github-security/abusing-github-actions/README.md, detailing the PromptPwnd attack chain, Gemini CLI exploitation workflow, additional AI agent misconfigurations (Claude, Codex, GitHub AI Inference), indirect injection vectors, and actionable detection/hardening guidance.Tests: Not run (documentation-only changes).
This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.
📚 Repository Maintenance
All .md files have been checked for proper formatting (headers, includes, etc.).