
Vulnerability Report: Code Injection Risk Due to Unsafe Usage of eval() #4873

Closed
@ybdesire


What happened?

Description

The code snippet below contains a potential Remote Code Execution (RCE) vulnerability stemming from unsafe use of the eval() function. The code checks whether the target_url variable starts with the string "func". If so, it extracts the substring after "func:", replaces the placeholder "__last_url__" with the page.url value, and then passes the processed string to eval() for execution.

            if target_url.startswith("func"):
                func = target_url.split("func:")[1]
                func = func.replace("__last_url__", page.url)
                target_url = eval(func)

If we create a config as below:

target_url= "func:__import__('os').system('rm -f /path/to/sensitive/file')"

If an attacker controls this config value, the injected command runs with the privileges of the application, and the user's sensitive files are deleted.
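The injection path can be demonstrated with a harmless standalone sketch. The payload here imports platform and calls platform.system() instead of a destructive os.system(...) command, but the mechanism is identical: whatever Python expression follows "func:" is executed by eval().

```python
# Harmless proof of concept for the "func:" eval path described above.
target_url = "func:__import__('platform').system()"

if target_url.startswith("func"):
    func = target_url.split("func:")[1]
    # The __last_url__ substitution is omitted: this payload does not use it.
    target_url = eval(func)  # attacker-controlled expression runs here

print(target_url)  # prints the host OS name, proving arbitrary calls execute
```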

The code is from the latest main branch:
https://github.com/microsoft/autogen/blob/main/python/packages/agbench/benchmarks/WebArena/Templates/Common/evaluation_harness/evaluators.py#L276

This issue corresponds to CWE-94 (Improper Control of Generation of Code, "Code Injection"):
https://cwe.mitre.org/data/definitions/94.html

Security Impact:
This vulnerability allows attackers to bypass normal security mechanisms and execute arbitrary code with the privileges of the user running the vulnerable application. This could lead to severe consequences, including data theft, service disruption, or the installation of malicious software.

What did you expect to happen?

To mitigate this vulnerability, avoid using eval() with untrusted inputs. Instead, consider implementing a safer alternative, such as a whitelist of allowed functions or a more secure parsing and execution mechanism. Additionally, perform thorough input validation and sanitization to prevent malicious inputs from being processed.
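One hedged sketch of the whitelist approach suggested above: map a small set of approved names to Python callables and dispatch by name, so a config value can only select behavior, never define it. The function names, the `|` argument separator, and `resolve_target_url` are hypothetical illustrations, not part of the autogen codebase.

```python
from urllib.parse import urljoin

# Approved transforms, keyed by the only names a config may reference.
ALLOWED_FUNCS = {
    "last_url": lambda page_url: page_url,
    "join_last_url": lambda page_url, path: urljoin(page_url, path),
}

def resolve_target_url(target_url: str, page_url: str) -> str:
    """Resolve a "func:" directive without ever calling eval()."""
    if not target_url.startswith("func:"):
        return target_url
    spec = target_url.split("func:", 1)[1]
    name, _, arg = spec.partition("|")
    if name not in ALLOWED_FUNCS:
        raise ValueError(f"disallowed function in config: {name!r}")
    func = ALLOWED_FUNCS[name]
    return func(page_url, arg) if arg else func(page_url)
```

With this in place, a payload like `func:__import__('os').system(...)` is rejected because the whole string fails the name lookup, while legitimate uses such as `func:last_url` still work.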

How can we reproduce it (as minimally and precisely as possible)?

Create a config as below:

target_url= "func:__import__('os').system('rm -f /path/to/sensitive/file')"

Running the harness with this config deletes the user's sensitive files.

AutoGen version

latest main branch today

Which package was this bug in

Core

Model used

No response

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

As above: avoid using eval() with untrusted inputs; prefer a whitelist of allowed functions or a more secure parsing and execution mechanism, combined with thorough input validation and sanitization.
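As a complement to a whitelist, input validation can reject any "func:" value that is not a bare, pre-approved identifier before it reaches evaluation logic at all. This is a sketch under assumptions: the regex and the `ALLOWED` set are illustrative, not taken from the codebase.

```python
import re

ALLOWED = {"last_url"}
FUNC_RE = re.compile(r"^func:([A-Za-z_][A-Za-z0-9_]*)$")

def validate_target_url(value: str) -> str:
    """Accept plain URLs and approved "func:<name>" directives; reject code."""
    m = FUNC_RE.match(value)
    if m:
        if m.group(1) not in ALLOWED:
            raise ValueError(f"unknown func {m.group(1)!r}")
        return value
    if value.startswith("func"):
        # Anything func-like that is not a bare identifier (e.g. it contains
        # parentheses, dots, or quotes) is treated as an injection attempt.
        raise ValueError("malformed func: directive")
    return value
```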
