Merged
48 changes: 41 additions & 7 deletions docs/tools/built-in-tools.md
@@ -61,12 +61,27 @@ like calculations, data manipulation, or running small scripts.

The GKE Code Executor (`GkeCodeExecutor`) provides a secure and scalable method
for running LLM-generated code by leveraging the GKE (Google Kubernetes Engine)
Sandbox environment, which uses gVisor for workload isolation. For each code
execution request, it dynamically creates an ephemeral, sandboxed Kubernetes Job
with a hardened Pod configuration. Use this executor for production
environments on GKE where security and isolation are critical.

#### How it works

When a request to execute code is made, the `GkeCodeExecutor` performs the following steps:

1. **Creates a ConfigMap:** A Kubernetes ConfigMap is created to store the Python code that needs to be executed.
2. **Creates a Sandboxed Pod:** A new Kubernetes Job is created, which in turn creates a Pod with a hardened security context and the gVisor runtime enabled. The code from the ConfigMap is mounted into this Pod.
3. **Executes the Code:** The code is executed within the sandboxed Pod, isolated from the underlying node and other workloads.
4. **Retrieves the Result:** The standard output and error streams from the execution are captured from the Pod's logs.
5. **Cleans Up Resources:** Once the execution is complete, the Job and the associated ConfigMap are automatically deleted, ensuring that no artifacts are left behind.
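The five steps above can be sketched as the Kubernetes objects involved. This is a minimal illustration, not the exact manifests `GkeCodeExecutor` produces; all names, commands, and field choices here are assumptions:

```python
def build_configmap(name: str, code: str) -> dict:
    """Step 1: a ConfigMap holding the generated Python code."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name},
        "data": {"script.py": code},
    }

def build_sandboxed_job(name: str, configmap_name: str,
                        image: str = "python:3.11-slim") -> dict:
    """Step 2: a Job whose Pod runs under gVisor with a hardened security context."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 0,             # do not retry failed executions
            "ttlSecondsAfterFinished": 0,  # step 5: delete the Job once it finishes
            "template": {
                "spec": {
                    "runtimeClassName": "gvisor",  # gVisor workload isolation
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "executor",
                        "image": image,
                        "command": ["python", "/sandbox/script.py"],
                        "securityContext": {
                            "runAsNonRoot": True,
                            "allowPrivilegeEscalation": False,
                            "capabilities": {"drop": ["ALL"]},
                        },
                        # Step 2 (cont.): mount the code from the ConfigMap
                        "volumeMounts": [{"name": "code", "mountPath": "/sandbox"}],
                    }],
                    "volumes": [{"name": "code",
                                 "configMap": {"name": configmap_name}}],
                }
            },
        },
    }

job = build_sandboxed_job("code-exec-job", "code-exec-src")
```

Steps 3 and 4 then amount to waiting for this Job to reach a terminal state and reading the Pod's logs.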

#### Key benefits

* **Enhanced Security:** Code is executed in a gVisor-sandboxed environment with kernel-level isolation.
* **Ephemeral Environments:** Each code execution runs in its own ephemeral Pod, preventing state from leaking between executions.
* **Resource Control:** You can configure CPU and memory limits for the execution Pods to prevent resource abuse.
* **Scalability:** Allows you to run a large number of code executions in parallel, with GKE handling the scheduling and scaling of the underlying nodes.
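On the resource-control point: Kubernetes expresses CPU in millicores (`500m` is half a core) and memory with binary suffixes (`512Mi`). A simplified sketch of how such quantity strings map to numbers (real Kubernetes quantity parsing handles more suffixes and decimal forms):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity ('500m' or '1') to cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity with a binary suffix ('512Mi', '1Gi') to bytes."""
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain byte count

print(parse_cpu("500m"))      # 0.5
print(parse_memory("512Mi"))  # 536870912
```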

#### System requirements

@@ -87,18 +102,21 @@ sample. For more information on deploying ADK workflows to GKE, see

=== "Python"

```python
from google.adk.agents import LlmAgent
from google.adk.code_executors import GkeCodeExecutor

# Initialize the executor, targeting the namespace where its ServiceAccount
# has the required RBAC permissions.
# This example also sets a custom timeout and resource limits.
gke_executor = GkeCodeExecutor(
    namespace="agent-sandbox",
    timeout_seconds=600,
    cpu_limit="1000m",  # 1 CPU core
    mem_limit="1Gi",
)

# The agent now uses this executor for any code it generates.
gke_agent = LlmAgent(
    name="gke_coding_agent",
    model="gemini-2.0-flash",
    # ... (remaining arguments collapsed in the diff view)
)
```
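The `timeout_seconds` value bounds how long the executor waits for the Job to finish. A generic poll-with-deadline loop of the kind such an executor needs, purely illustrative (`check_status` is a stand-in callable, not an ADK or Kubernetes API):

```python
import time

def wait_for_completion(check_status, timeout_seconds: float,
                        poll_interval: float = 0.01) -> str:
    """Poll check_status() until it reports a terminal state or the deadline passes."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = check_status()
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(poll_interval)
    return "TimedOut"

# Simulate a Job that succeeds on its third status check.
states = iter(["Pending", "Running", "Succeeded"])
result = wait_for_completion(lambda: next(states), timeout_seconds=5)
print(result)  # Succeeded
```

A Job that never reaches a terminal state within the deadline yields `"TimedOut"`, at which point a real executor would clean up the Job and report the failure.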

#### Configuration parameters

The `GkeCodeExecutor` can be configured with the following parameters:

| Parameter | Type | Description |
| -------------------- | ------ | --------------------------------------------------------------------------------------- |
| `namespace` | `str` | Kubernetes namespace where the execution Jobs will be created. Defaults to `"default"`. |
| `image` | `str` | Container image to use for the execution Pod. Defaults to `"python:3.11-slim"`. |
| `timeout_seconds` | `int` | Timeout in seconds for the code execution. Defaults to `300`. |
| `cpu_requested` | `str` | Amount of CPU to request for the execution Pod. Defaults to `"200m"`. |
| `mem_requested` | `str` | Amount of memory to request for the execution Pod. Defaults to `"256Mi"`. |
| `cpu_limit` | `str` | Maximum amount of CPU the execution Pod can use. Defaults to `"500m"`. |
| `mem_limit` | `str` | Maximum amount of memory the execution Pod can use. Defaults to `"512Mi"`. |
| `kubeconfig_path` | `str` | Path to a kubeconfig file to use for authentication. Falls back to in-cluster config or the default local kubeconfig. |
| `kubeconfig_context` | `str` | The `kubeconfig` context to use. |
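The defaults in the table can be captured in a small config object. This is a reference sketch mirroring the table above, not the class's actual internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GkeExecutorConfig:
    """Defaults mirror the GkeCodeExecutor parameter table (illustrative only)."""
    namespace: str = "default"
    image: str = "python:3.11-slim"
    timeout_seconds: int = 300
    cpu_requested: str = "200m"
    mem_requested: str = "256Mi"
    cpu_limit: str = "500m"
    mem_limit: str = "512Mi"
    kubeconfig_path: Optional[str] = None     # falls back to in-cluster / local config
    kubeconfig_context: Optional[str] = None  # default kubeconfig context when unset

# Override only what differs from the defaults, as in the example above.
cfg = GkeExecutorConfig(namespace="agent-sandbox", timeout_seconds=600)
```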

### Vertex AI RAG Engine

The `vertex_ai_rag_retrieval` tool allows the agent to perform private data retrieval using Vertex