Mitigate issue #5872 (Prompt injection -> RCE in PAL chain) #6003

Merged
9 commits merged into langchain-ai:wfh/prompt_injection on Jul 18, 2023

Conversation

boazwasserman
Contributor

Adds some selective security controls to the PAL chain:

  1. Prevent imports
  2. Prevent arbitrary command execution
  3. Enforce an execution time limit (prevents DoS and long-running sessions where the flow is hijacked, e.g. into a remote shell)
  4. Enforce that the solution expression exists in the generated code

This is done mostly by static analysis of the code using the ast library.
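A minimal sketch of the AST side of that idea (names here are illustrative, not the actual PAL chain API; the timeout is a runtime control and is not shown):

import ast


def validate_generated_code(code: str, solution_function: str = "solution") -> None:
    """Illustrative only: reject imports, a few obviously dangerous calls, and
    code that never defines the expected solution function."""
    tree = ast.parse(code)
    found_solution = False

    for node in ast.walk(tree):
        # Block any import statement outright.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("Imports are not allowed in PAL-generated code")
        # Block direct calls to well-known dangerous builtins.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"exec", "eval", "compile", "__import__", "open"}:
                raise ValueError(f"Call to '{node.func.id}' is not allowed")
        # Remember whether the expected solution function is defined.
        if isinstance(node, ast.FunctionDef) and node.name == solution_function:
            found_solution = True

    if not found_solution:
        raise ValueError(f"Generated code must define '{solution_function}()'")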

Also added tests to the pal chain.

Fixes #5872

@vowelparrot

@qxcv

qxcv commented Jun 13, 2023

I just got here from a Twitter link that a colleague sent me (https://twitter.com/llm_sec/status/1668711587287375876?s=20). I'm only a casual observer (not a LangChain user or contributor), but I thought it might be good to drop these links in case you're unaware of the ways that attackers can escape from AST-based Python "sandboxes":

https://hacktricks.boitatech.com.br/misc/basic-python/bypass-python-sandboxes
https://github.com/mahaloz/ctf-wiki-en/blob/master/docs/pwn/linux/sandbox/python-sandbox-escape.md

The strategies in these links aren't exhaustive, but hopefully illustrate that this style of sandboxing makes attacks more complex without defeating them entirely.
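As a concrete illustration, here is one commonly cited pattern (adapted from the write-ups above, not anything specific to this PR) that reaches os.system without a single import statement or any direct reference to a suspicious builtin name:

# No `import` statement and no banned name appears, yet this walks the object
# graph from an empty tuple to an already-loaded module and pulls the real
# __import__ (and hence os.system) out of its builtins.
for cls in ().__class__.__base__.__subclasses__():
    if cls.__name__ == "catch_warnings":  # warnings.catch_warnings is normally loaded
        builtins_ns = cls()._module.__builtins__
        importer = (builtins_ns["__import__"]
                    if isinstance(builtins_ns, dict)
                    else builtins_ns.__import__)
        print(importer("os").system)  # <built-in function system>
        break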

@vowelparrot
Contributor

vowelparrot commented Jun 13, 2023

Thanks for the PR, @boazwasserman! The PAL chain is indeed unsafe. It seems you've got enough experience to be aware of the points that @qxcv (thanks for the links btw!) is making. I don't think we could really get to enterprise-level security purely via AST validations, even if that were our main focus.

My inclination is still to add these checks to make it a bit harder for a naive prompt injection attack to succeed.

If someone wants to use this chain in production, it ought to be isolated further as well.

To counter a false sense of security, we could log a warning in the PythonREPL:

import logging
import functools

logger = logging.getLogger(__name__)


@functools.lru_cache(maxsize=None)
def warn_once() -> None:
    # Warn only once per process that the PythonREPL can execute arbitrary code.
    logger.warning("Python REPL can execute arbitrary code. Use with caution.")

(called in run)
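(Hypothetical wiring, just to show the intent; the class body below is illustrative, not the actual PythonREPL implementation:)

class PythonREPL:
    def run(self, command: str) -> str:
        warn_once()  # logged once per process, so callers are not flooded
        local_vars: dict = {}
        exec(command, {}, local_vars)  # the capability the warning is about
        return str(local_vars.get("result", ""))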
cc @hwchase17

@boazwasserman
Contributor Author

Thanks for the input!
Completely agree that these AST validations and timeout limits are not 100% bulletproof, but I still think they're better to have than not. Will add the logged warning as suggested.

@vercel

vercel bot commented Jun 18, 2023

@orraz-labs is attempting to deploy a commit to the LangChain Team on Vercel.

A member of the Team first needs to authorize it.

@vercel

vercel bot commented Jun 21, 2023

The latest updates on your projects.

langchain: ✅ Ready (Preview updated Jul 10, 2023 1:06pm UTC)

@vercel vercel bot temporarily deployed to Preview June 21, 2023 06:52 Inactive
@boazwasserman boazwasserman changed the title Add selective security controls to PAL chain Mitigate issue #5872 (Prompt injection -> RCE in PAL chain) Jul 3, 2023
@L0Z1K

L0Z1K commented Jul 10, 2023

Are there any updates?

[Screenshot: the prompt-injection example still results in code execution]

As you can see, the vulnerability is not yet closed.

@boazwasserman
Contributor Author

@L0Z1K good catch! I was missing an edge case. Fixed it now

@vercel vercel bot temporarily deployed to Preview July 10, 2023 13:06 Inactive
@hinthornw hinthornw requested review from dev2049 and hwchase17 and removed request for dev2049 July 13, 2023 19:48
@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jul 14, 2023
@hinthornw hinthornw changed the base branch from master to wfh/prompt_injection July 18, 2023 05:43
@@ -33,6 +96,8 @@ class PALChain(Chain):
python_locals: Optional[Dict[str, Any]] = None
output_key: str = "result" #: :meta private:
return_intermediate_steps: bool = False
code_validations: PALValidation = PALValidation()
Collaborator

Suggested change
code_validations: PALValidation = PALValidation()
code_validations: PALValidation = Field(default_factory=PALValidation)
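(For what it's worth: Field(default_factory=PALValidation) defers building the default until each PALChain instance is created, rather than evaluating PALValidation() once at class-definition time.)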

@hinthornw hinthornw merged commit 8ba9835 into langchain-ai:wfh/prompt_injection Jul 18, 2023
13 checks passed
hinthornw added a commit that referenced this pull request Jul 18, 2023
Some docstring / small nits to #6003

---------

Co-authored-by: BoazWasserman <49598618+boazwasserman@users.noreply.github.com>
Co-authored-by: HippoTerrific <49598618+HippoTerrific@users.noreply.github.com>
Co-authored-by: Or Raz <orraz1994@gmail.com>
aerrober pushed a commit to aerrober/langchain-fork that referenced this pull request Jul 24, 2023
Some docstring / small nits to langchain-ai#6003

---------

Co-authored-by: BoazWasserman <49598618+boazwasserman@users.noreply.github.com>
Co-authored-by: HippoTerrific <49598618+HippoTerrific@users.noreply.github.com>
Co-authored-by: Or Raz <orraz1994@gmail.com>
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Prompt injection which leads to arbitrary code execution in langchain.chains.PALChain
6 participants