Merged
6 changes: 5 additions & 1 deletion .gitignore
@@ -188,6 +188,10 @@ logs/
# Docker volume configs (keep .env.example but ignore actual .env)
volumes/env/.env

# Vendored proxy sources (kept locally for reference)
ai/proxy/bifrost/
ai/proxy/litellm/

# Test project databases and configurations
test_projects/*/.fuzzforge/
test_projects/*/findings.db*
@@ -304,4 +308,4 @@ test_projects/*/.npmrc
test_projects/*/.git-credentials
test_projects/*/credentials.*
test_projects/*/api_keys.*
test_projects/*/ci-*.sh
test_projects/*/ci-*.sh
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -51,6 +51,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### 🐛 Bug Fixes

- Fixed default parameters from metadata.yaml not being applied to workflows when no parameters are provided
- Fixed gitleaks workflow failing on uploaded directories without Git history
- Fixed worker startup command suggestions (now uses `docker compose up -d` with service names)
- Fixed missing `cognify_text` method in CogneeProjectIntegration
2 changes: 2 additions & 0 deletions README.md
@@ -117,7 +117,9 @@ For AI-powered workflows, configure your LLM API keys:
```bash
cp volumes/env/.env.example volumes/env/.env
# Edit volumes/env/.env and add your API keys (OpenAI, Anthropic, Google, etc.)
# Add your key to LITELLM_GEMINI_API_KEY
```
> Don't change the `OPENAI_API_KEY` default value, as it is used for the LLM proxy.

This is required for:
- `llm_secret_detection` workflow
10 changes: 0 additions & 10 deletions ai/agents/task_agent/.env.example

This file was deleted.

5 changes: 5 additions & 0 deletions ai/agents/task_agent/Dockerfile
@@ -16,4 +16,9 @@ COPY . /app/agent_with_adk_format
WORKDIR /app/agent_with_adk_format
ENV PYTHONPATH=/app

# Copy and set up entrypoint
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
34 changes: 25 additions & 9 deletions ai/agents/task_agent/README.md
@@ -43,18 +43,34 @@ cd task_agent
# cp .env.example .env
```

Edit `.env` (or `.env.example`) and add your API keys. The agent must be restarted after changes so the values are picked up:
Edit `.env` (or `.env.example`) and add your proxy + API keys. The agent must be restarted after changes so the values are picked up:
```bash
# Set default model
LITELLM_MODEL=gemini/gemini-2.0-flash-001

# Add API keys for providers you want to use
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
OPENROUTER_API_KEY=your_openrouter_api_key
# Route every request through the proxy container (use http://localhost:10999 from the host)
FF_LLM_PROXY_BASE_URL=http://llm-proxy:4000

# Default model + provider the agent boots with
LITELLM_MODEL=openai/gpt-4o-mini
LITELLM_PROVIDER=openai

# Virtual key issued by the proxy to the task agent (bootstrap replaces the placeholder)
OPENAI_API_KEY=sk-proxy-default

# Upstream keys stay inside the proxy. Store real secrets under the LiteLLM
# aliases and the bootstrapper mirrors them into .env.litellm for the proxy container.
LITELLM_OPENAI_API_KEY=your_real_openai_api_key
LITELLM_ANTHROPIC_API_KEY=your_real_anthropic_key
LITELLM_GEMINI_API_KEY=your_real_gemini_key
LITELLM_MISTRAL_API_KEY=your_real_mistral_key
LITELLM_OPENROUTER_API_KEY=your_real_openrouter_key
```

> When running the agent outside of Docker, swap `FF_LLM_PROXY_BASE_URL` to the host port (default `http://localhost:10999`).

The bootstrap container provisions LiteLLM, copies provider secrets into
`volumes/env/.env.litellm`, and rewrites `volumes/env/.env` with the virtual key.
Populate the `LITELLM_*_API_KEY` values before the first launch so the proxy can
reach your upstream providers as soon as the bootstrap script runs.
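
To sanity-check the wiring before the first agent run, you can hit the proxy directly with the issued virtual key. The snippet below is only a sketch: it assumes the proxy exposes an OpenAI-compatible `/v1/chat/completions` route and that the bootstrapper has already replaced the placeholder `OPENAI_API_KEY` in `volumes/env/.env` with the real virtual key.

```python
# Sketch: confirm the proxy answers with the issued virtual key (run from the host).
import os

import httpx

base_url = os.getenv("FF_LLM_PROXY_BASE_URL", "http://localhost:10999").rstrip("/")
virtual_key = os.environ["OPENAI_API_KEY"]  # virtual key written by the bootstrapper

response = httpx.post(
    f"{base_url}/v1/chat/completions",
    headers={"Authorization": f"Bearer {virtual_key}"},
    json={
        "model": os.getenv("LITELLM_MODEL", "openai/gpt-4o-mini"),
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```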

### 2. Install Dependencies

```bash
31 changes: 31 additions & 0 deletions ai/agents/task_agent/docker-entrypoint.sh
@@ -0,0 +1,31 @@
#!/bin/bash
set -e

# Wait for .env file to have keys (max 30 seconds)
echo "[task-agent] Waiting for virtual keys to be provisioned..."
for i in $(seq 1 30); do
    if [ -f /app/config/.env ]; then
        # Check if TASK_AGENT_API_KEY has a value (not empty)
        KEY=$(grep -E '^TASK_AGENT_API_KEY=' /app/config/.env | cut -d'=' -f2)
        if [ -n "$KEY" ] && [ "$KEY" != "" ]; then
            echo "[task-agent] Virtual keys found, loading environment..."
            # Export keys from .env file
            export TASK_AGENT_API_KEY="$KEY"
            export OPENAI_API_KEY=$(grep -E '^OPENAI_API_KEY=' /app/config/.env | cut -d'=' -f2)
            export FF_LLM_PROXY_BASE_URL=$(grep -E '^FF_LLM_PROXY_BASE_URL=' /app/config/.env | cut -d'=' -f2)
            echo "[task-agent] Loaded TASK_AGENT_API_KEY: ${TASK_AGENT_API_KEY:0:15}..."
            echo "[task-agent] Loaded FF_LLM_PROXY_BASE_URL: $FF_LLM_PROXY_BASE_URL"
            break
        fi
    fi
    echo "[task-agent] Keys not ready yet, waiting... ($i/30)"
    sleep 1
done

if [ -z "$TASK_AGENT_API_KEY" ]; then
echo "[task-agent] ERROR: Virtual keys were not provisioned within 30 seconds!"
exit 1
fi

echo "[task-agent] Starting uvicorn..."
exec "$@"
19 changes: 17 additions & 2 deletions ai/agents/task_agent/litellm_agent/config.py
@@ -4,13 +4,28 @@

import os


def _normalize_proxy_base_url(raw_value: str | None) -> str | None:
    if not raw_value:
        return None
    cleaned = raw_value.strip()
    if not cleaned:
        return None
    # Avoid double slashes in downstream requests
    return cleaned.rstrip("/")

AGENT_NAME = "litellm_agent"
AGENT_DESCRIPTION = (
    "A LiteLLM-backed shell that exposes hot-swappable model and prompt controls."
)

DEFAULT_MODEL = os.getenv("LITELLM_MODEL", "gemini-2.0-flash-001")
DEFAULT_PROVIDER = os.getenv("LITELLM_PROVIDER")
DEFAULT_MODEL = os.getenv("LITELLM_MODEL", "openai/gpt-4o-mini")
DEFAULT_PROVIDER = os.getenv("LITELLM_PROVIDER") or None
PROXY_BASE_URL = _normalize_proxy_base_url(
    os.getenv("FF_LLM_PROXY_BASE_URL")
    or os.getenv("LITELLM_API_BASE")
    or os.getenv("LITELLM_BASE_URL")
)

STATE_PREFIX = "app:litellm_agent/"
STATE_MODEL_KEY = f"{STATE_PREFIX}model"
170 changes: 169 additions & 1 deletion ai/agents/task_agent/litellm_agent/state.py
@@ -3,11 +3,15 @@
from __future__ import annotations

from dataclasses import dataclass
import os
from typing import Any, Mapping, MutableMapping, Optional

import httpx

from .config import (
    DEFAULT_MODEL,
    DEFAULT_PROVIDER,
    PROXY_BASE_URL,
    STATE_MODEL_KEY,
    STATE_PROMPT_KEY,
    STATE_PROVIDER_KEY,
@@ -66,11 +70,109 @@ def instantiate_llm(self):
"""Create a LiteLlm instance for the current state."""

from google.adk.models.lite_llm import LiteLlm # Lazy import to avoid cycle
from google.adk.models.lite_llm import LiteLLMClient
from litellm.types.utils import Choices, Message, ModelResponse, Usage

kwargs = {"model": self.model}
if self.provider:
kwargs["custom_llm_provider"] = self.provider
return LiteLlm(**kwargs)
if PROXY_BASE_URL:
provider = (self.provider or DEFAULT_PROVIDER or "").lower()
if provider and provider != "openai":
kwargs["api_base"] = f"{PROXY_BASE_URL.rstrip('/')}/{provider}"
else:
kwargs["api_base"] = PROXY_BASE_URL
kwargs.setdefault("api_key", os.environ.get("TASK_AGENT_API_KEY") or os.environ.get("OPENAI_API_KEY"))

provider = (self.provider or DEFAULT_PROVIDER or "").lower()
model_suffix = self.model.split("/", 1)[-1]
use_responses = provider == "openai" and (
model_suffix.startswith("gpt-5") or model_suffix.startswith("o1")
)
if use_responses:
kwargs.setdefault("use_responses_api", True)

llm = LiteLlm(**kwargs)

if use_responses and PROXY_BASE_URL:

class _ResponsesAwareClient(LiteLLMClient):
def __init__(self, base_client: LiteLLMClient, api_base: str, api_key: str):
self._base_client = base_client
self._api_base = api_base.rstrip("/")
self._api_key = api_key

async def acompletion(self, model, messages, tools, **kwargs): # type: ignore[override]
use_responses_api = kwargs.pop("use_responses_api", False)
if not use_responses_api:
return await self._base_client.acompletion(
model=model,
messages=messages,
tools=tools,
**kwargs,
)

resolved_model = model
if "/" not in resolved_model:
resolved_model = f"openai/{resolved_model}"

payload = {
"model": resolved_model,
"input": _messages_to_responses_input(messages),
}

timeout = kwargs.get("timeout", 60)
headers = {
"Authorization": f"Bearer {self._api_key}",
"Content-Type": "application/json",
}

async with httpx.AsyncClient(timeout=timeout) as client:
response = await client.post(
f"{self._api_base}/v1/responses",
json=payload,
headers=headers,
)
try:
response.raise_for_status()
except httpx.HTTPStatusError as exc:
text = exc.response.text
raise RuntimeError(
f"LiteLLM responses request failed: {text}"
) from exc
data = response.json()

text_output = _extract_output_text(data)
usage = data.get("usage", {})

return ModelResponse(
id=data.get("id"),
model=model,
choices=[
Choices(
finish_reason="stop",
index=0,
message=Message(role="assistant", content=text_output),
provider_specific_fields={"bifrost_response": data},
)
],
usage=Usage(
prompt_tokens=usage.get("input_tokens"),
completion_tokens=usage.get("output_tokens"),
reasoning_tokens=usage.get("output_tokens_details", {}).get(
"reasoning_tokens"
),
total_tokens=usage.get("total_tokens"),
),
)

llm.llm_client = _ResponsesAwareClient(
llm.llm_client,
PROXY_BASE_URL,
os.environ.get("TASK_AGENT_API_KEY") or os.environ.get("OPENAI_API_KEY", ""),
)

return llm

    @property
    def display_model(self) -> str:
@@ -84,3 +186,69 @@ def apply_state_to_agent(invocation_context, state: HotSwapState) -> None:

    agent = invocation_context.agent
    agent.model = state.instantiate_llm()


def _messages_to_responses_input(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    inputs: list[dict[str, Any]] = []
    for message in messages:
        role = message.get("role", "user")
        content = message.get("content", "")
        text_segments: list[str] = []

        if isinstance(content, list):
            for item in content:
                if isinstance(item, dict):
                    text = item.get("text") or item.get("content")
                    if text:
                        text_segments.append(str(text))
                elif isinstance(item, str):
                    text_segments.append(item)
        elif isinstance(content, str):
            text_segments.append(content)

        text = "\n".join(segment.strip() for segment in text_segments if segment)
        if not text:
            continue

        entry_type = "input_text"
        if role == "assistant":
            entry_type = "output_text"

        inputs.append(
            {
                "role": role,
                "content": [
                    {
                        "type": entry_type,
                        "text": text,
                    }
                ],
            }
        )

    if not inputs:
        inputs.append(
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_text",
                        "text": "",
                    }
                ],
            }
        )
    return inputs


def _extract_output_text(response_json: dict[str, Any]) -> str:
    outputs = response_json.get("output", [])
    collected: list[str] = []
    for item in outputs:
        if isinstance(item, dict) and item.get("type") == "message":
            for part in item.get("content", []):
                if isinstance(part, dict) and part.get("type") == "output_text":
                    text = part.get("text", "")
                    if text:
                        collected.append(str(text))
    return "\n\n".join(collected).strip()
5 changes: 5 additions & 0 deletions ai/proxy/README.md
@@ -0,0 +1,5 @@
# LLM Proxy Integrations

This directory contains vendored source trees kept locally only for reference when integrating LLM gateways. The actual FuzzForge deployment uses the official Docker images for each project.

See `docs/docs/how-to/llm-proxy.md` for up-to-date instructions on running the proxy services and issuing keys for the agents.
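
As a rough illustration of how key issuance works, the sketch below asks a running LiteLLM proxy for a new virtual key. This is an assumption-laden example rather than FuzzForge's bootstrap script: it presumes the proxy listens on `http://localhost:10999`, that `LITELLM_MASTER_KEY` holds the proxy admin key, and that the alias `task-agent` is just an illustrative label.

```python
# Sketch: request a virtual key from a running LiteLLM proxy.
import os

import httpx

response = httpx.post(
    "http://localhost:10999/key/generate",
    headers={"Authorization": f"Bearer {os.environ['LITELLM_MASTER_KEY']}"},
    json={
        "key_alias": "task-agent",         # label for later auditing (hypothetical)
        "models": ["openai/gpt-4o-mini"],  # restrict which models the key may call
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["key"])  # value that ends up in the agent's OPENAI_API_KEY
```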
15 changes: 12 additions & 3 deletions ai/src/fuzzforge_ai/agent_executor.py
Expand Up @@ -1049,10 +1049,19 @@ async def get_task_list(task_list_id: str) -> str:
            FunctionTool(get_task_list)
        ])


        # Create the agent

        # Create the agent with LiteLLM configuration
        llm_kwargs = {}
        api_key = os.getenv('OPENAI_API_KEY') or os.getenv('LLM_API_KEY')
        api_base = os.getenv('LLM_ENDPOINT') or os.getenv('LLM_API_BASE') or os.getenv('OPENAI_API_BASE')

        if api_key:
            llm_kwargs['api_key'] = api_key
        if api_base:
            llm_kwargs['api_base'] = api_base

        self.agent = LlmAgent(
            model=LiteLlm(model=self.model),
            model=LiteLlm(model=self.model, **llm_kwargs),
            name="fuzzforge_executor",
            description="Intelligent A2A orchestrator with memory",
            instruction=self._build_instruction(),