26 changes: 16 additions & 10 deletions .env.example
@@ -78,22 +78,28 @@ FEATURE_AGENT_MODE_AVAILABLE=true
AGENT_LOOP_STRATEGY=think-act
# (Adjust above to stage rollouts. For a bare-bones chat set them all to false.)

APP_LOG_DIR=/workspaces/atlas-ui-3-11/logs
APP_LOG_DIR=/workspaces/atlas-ui-3/logs

CAPABILITY_TOKEN_SECRET=blablah

#############################################
# S3/MinIO Storage Configuration
#############################################
# MinIO endpoint (use localhost for local dev, minio for docker-compose)
S3_ENDPOINT=http://localhost:9000
S3_BUCKET_NAME=atlas-files
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_REGION=us-east-1
S3_TIMEOUT=30

S3_USE_SSL=false
# Choose ONE option below (comment out the other)

# --- Option 1: Mock S3 (Default - No Docker required) ---
USE_MOCK_S3=true

# --- Option 2: MinIO (Requires Docker) ---
# Uncomment below and set USE_MOCK_S3=false to use MinIO
# USE_MOCK_S3=false
# S3_ENDPOINT=http://localhost:9000
# S3_BUCKET_NAME=atlas-files # Must match bucket created in docker-compose.yml
# S3_ACCESS_KEY=minioadmin
# S3_SECRET_KEY=minioadmin
# S3_REGION=us-east-1
# S3_TIMEOUT=30
# S3_USE_SSL=false


SECURITY_CSP_VALUE="default-src 'self'; img-src 'self' data: blob:; script-src 'self'; style-src 'self' 'unsafe-inline'; connect-src 'self'; frame-src 'self' blob: data:; frame-ancestors 'self'"
2 changes: 1 addition & 1 deletion .gitignore
@@ -1,6 +1,6 @@
*.pptx
*.jsonl

.ruff_cache
# Environment variables
.env
.claude
37 changes: 28 additions & 9 deletions CLAUDE.md
@@ -61,12 +61,16 @@ mkdir -p logs
```bash
bash agent_start.sh
```
This script handles: killing old processes, clearing logs, building frontend, starting mock S3, and starting backend.
This script handles: killing old processes, clearing logs, building frontend, starting S3 storage (MinIO or Mock based on `USE_MOCK_S3` in `.env`), and starting backend.

**Options:**
- `bash agent_start.sh -f` - Only rebuild frontend
- `bash agent_start.sh -b` - Only restart backend

**Note:** The script automatically reads `USE_MOCK_S3` from `.env`:
- If `true`: Uses in-process Mock S3 (no Docker)
- If `false`: Starts MinIO via docker-compose

### Manual Development Workflow

**Frontend Build (CRITICAL):**
@@ -82,11 +86,24 @@ cd backend
python main.py # NEVER use uvicorn --reload (causes problems)
```

**Mock S3 (Optional):**
```bash
cd mocks/s3-mock
python main.py # Runs on http://127.0.0.1:8003
```
**S3 Storage (Mock vs MinIO):**

The project supports two S3 storage backends:

1. **Mock S3 (Default, Recommended for Development)**
- Set `USE_MOCK_S3=true` in `.env`
- Uses in-process FastAPI TestClient (no Docker required)
- Files stored in `minio-data/chatui/` on disk
- No external server needed - integrated directly into backend
- Faster startup, simpler development workflow

2. **MinIO (Production-like)**
- Set `USE_MOCK_S3=false` in `.env`
- Requires Docker: `docker-compose up -d minio minio-init`
- Full S3 compatibility with all features
- Use for testing production-like scenarios

The mock automatically activates when the backend starts if `USE_MOCK_S3=true`.
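
For orientation only, here is a minimal sketch of how an in-process mock along these lines can work, using FastAPI's TestClient so no network port or Docker container is involved. The class name, routes, and in-memory store are assumptions for illustration, not the repository's actual `mock_s3_client.py`:

```python
# Illustrative only. Class name, routes, and in-memory storage are assumptions;
# the real mock lives in backend/modules/file_storage/mock_s3_client.py.
from fastapi import FastAPI, Request
from fastapi.responses import Response
from fastapi.testclient import TestClient

mock_s3_app = FastAPI()
_objects: dict[str, bytes] = {}  # in-memory object store keyed by "bucket/key"

@mock_s3_app.put("/{bucket}/{key:path}")
async def put_object(bucket: str, key: str, request: Request) -> dict:
    _objects[f"{bucket}/{key}"] = await request.body()
    return {"stored": key}

@mock_s3_app.get("/{bucket}/{key:path}")
def get_object(bucket: str, key: str) -> Response:
    return Response(content=_objects[f"{bucket}/{key}"])

class MockS3StorageClient:
    """Routes S3-style calls to the mock app in process, so no Docker is needed."""

    def __init__(self) -> None:
        self._client = TestClient(mock_s3_app)

    def upload_file(self, bucket: str, key: str, data: bytes) -> None:
        self._client.put(f"/{bucket}/{key}", content=data)

    def download_file(self, bucket: str, key: str) -> bytes:
        return self._client.get(f"/{bucket}/{key}").content
```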

### Testing

@@ -246,9 +263,11 @@ Three agent loop strategies implement different reasoning patterns:
- **Act** (`backend/application/chat/agent/act_loop.py`): Pure action loop without explicit reasoning steps, fastest with minimal overhead. LLM calls tools directly and signals completion via the "finished" tool
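
As a rough, hypothetical sketch of what such an action-only loop can look like (the real implementation in `act_loop.py` will differ; `llm.call`, `tools.execute`, and the message shapes here are assumptions):

```python
# Hypothetical sketch only; the real logic lives in
# backend/application/chat/agent/act_loop.py and will differ in detail.
from typing import Any, Dict, List

async def act_loop(llm: Any, tools: Any, messages: List[Dict[str, Any]],
                   max_turns: int = 10) -> List[Dict[str, Any]]:
    """Let the LLM call tools directly until it signals completion via 'finished'."""
    for _ in range(max_turns):
        response = await llm.call(messages, tools=tools)  # no separate reasoning step
        messages.append(response.message)
        if not response.tool_calls:
            break  # model answered without requesting a tool
        for call in response.tool_calls:
            if call.name == "finished":
                return messages  # explicit completion signal, stop looping
            result = await tools.execute(call.name, call.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": str(result)})
    return messages
```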

### File Storage
S3-compatible storage via `backend/modules/file_storage/s3_client.py`:
- Production: Real S3 or S3-compatible service
- Development: Mock S3 (`mocks/s3-mock/`)
S3-compatible storage via `backend/modules/file_storage/`:
- Production/MinIO: `s3_client.py` - boto3-based client for real S3/MinIO
- Development: `mock_s3_client.py` - TestClient-based in-process mock (no Docker)
- Controlled by `USE_MOCK_S3` env var (default: true)
- Both implementations share the same interface
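
To illustrate the shared-interface point, a hedged sketch of a structural protocol both clients could satisfy; the method names and signatures are assumptions, not the module's actual API:

```python
# Illustrative only; method names and signatures are assumptions, not the
# actual API of backend/modules/file_storage/.
from typing import Protocol

class StorageClient(Protocol):
    def upload_file(self, key: str, data: bytes, content_type: str) -> str: ...
    def download_file(self, key: str) -> bytes: ...
    def delete_file(self, key: str) -> None: ...
    def file_exists(self, key: str) -> bool: ...

def save_upload(storage: StorageClient, key: str, data: bytes) -> str:
    """Callers depend on the protocol and stay agnostic of mock vs MinIO."""
    return storage.upload_file(key, data, content_type="application/octet-stream")
```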

### Security Middleware Stack
```
65 changes: 65 additions & 0 deletions GEMINI.md
@@ -0,0 +1,65 @@
# GEMINI.md

This file provides guidance to the Gemini AI agent when working with code in this repository.

## Project Overview

Atlas UI 3 is a full-stack LLM chat interface with Model Context Protocol (MCP) integration, supporting multiple LLM providers (OpenAI, Anthropic Claude, Google Gemini), RAG, and agentic capabilities.

**Tech Stack:**
- Backend: FastAPI + WebSockets, LiteLLM, FastMCP
- Frontend: React 19 + Vite 7 + Tailwind CSS
- Python Package Manager: **uv** (NOT pip!)
- Configuration: Pydantic with YAML/JSON configs

## Building and Running

### Quick Start (Recommended)
```bash
bash agent_start.sh
```
This script handles: killing old processes, clearing logs, building frontend, and starting the backend.

### Manual Development Workflow

**Frontend Build (CRITICAL):**
```bash
cd frontend
npm install
npm run build # Use build, NOT npm run dev (WebSocket issues)
```

**Backend Start:**
```bash
cd backend
python main.py # NEVER use uvicorn --reload (causes problems)
```

### Testing

**Run all tests:**
```bash
./test/run_tests.sh all
```

**Individual test suites:**
```bash
./test/run_tests.sh backend
./test/run_tests.sh frontend
./test/run_tests.sh e2e
```

## Development Conventions

- **Python Package Manager**: **ALWAYS use `uv`**, never pip or conda.
- **Frontend Development**: **NEVER use `npm run dev`**, it has WebSocket connection problems. Always use `npm run build`.
- **Backend Development**: **NEVER use `uvicorn --reload`**, it causes problems.
- **File Naming**: Do not use generic names like `utils.py` or `helpers.py`. Use descriptive names that reflect the file's purpose.
- **No Emojis**: Never add emojis anywhere in this repo.
- **Linting**: Run `ruff check backend/` for Python and `npm run lint` for the frontend before committing.


Also read `/workspaces/atlas-ui-3/.github/copilot-instructions.md` and `CLAUDE.md`.
6 changes: 2 additions & 4 deletions README.md
@@ -51,10 +51,7 @@ cp .env.example .env # Edit with your API keys
# Build frontend
cd frontend && npm install && npm run build

# there is a mock s3 that you might want to enable. Switching to minio sooon.
cd mocks/s3-mocks && python main.py

# Start backend
# Start backend
cd ../backend && python main.py

# OR the quickest way to start is to use the agent_start.sh
@@ -79,6 +76,7 @@ bash agent_start.sh
- **Use `npm run build`** instead of `npm run dev` for frontend development
- **File limit**: Maximum 400 lines per file for maintainability
- **Container Environment**: Use Fedora latest for Docker containers (GitHub Actions uses Ubuntu runners)
- **Mock S3**: The included S3 mock (`mocks/s3-mock/`) is for development/testing only and must NEVER be used in production due to lack of authentication, encryption, and other critical security features.

## License

26 changes: 19 additions & 7 deletions agent_start.sh
@@ -20,14 +20,26 @@ done
# Configuration
USE_NEW_FRONTEND=${USE_NEW_FRONTEND:-true}

# Check if MinIO is running
if ! docker ps | grep -q atlas-minio; then
echo "⚠️ MinIO is not running. Starting MinIO with docker-compose..."
docker-compose up -d minio minio-init
echo "✅ MinIO started successfully"
sleep 3
# Read USE_MOCK_S3 from .env file
if [ -f .env ]; then
USE_MOCK_S3=$(grep -E "^USE_MOCK_S3=" .env | cut -d '=' -f2)
else
echo "✅ MinIO is already running"
USE_MOCK_S3="true" # Default to mock if no .env
fi

# Only start MinIO if not using mock S3
if [ "$USE_MOCK_S3" = "true" ]; then
echo "Using Mock S3 (no Docker required)"
else
# Check if MinIO is running
if ! docker ps | grep -q atlas-minio; then
echo "MinIO is not running. Starting MinIO with docker-compose..."
docker-compose up -d minio minio-init
echo "MinIO started successfully"
sleep 3
else
echo "MinIO is already running"
fi
fi

# Kill any running uvicorn processes (skip if only rebuilding frontend)
13 changes: 9 additions & 4 deletions backend/application/chat/preprocessors/message_builder.py
@@ -41,22 +41,27 @@ async def build_messages(
) -> List[Dict[str, Any]]:
    """
    Build messages array from session history and context.

    Args:
        session: Current chat session
        include_files_manifest: Whether to append files manifest

    Returns:
        List of messages ready for LLM call
    """
    # Get conversation history from session
    messages = session.history.get_messages_for_llm()

    # Optionally add files manifest
    if include_files_manifest:
        session_context = build_session_context(session)
        files_in_context = session_context.get("files", {})
        logger.debug(f"Session has {len(files_in_context)} files: {list(files_in_context.keys())}")
        files_manifest = file_utils.build_files_manifest(session_context)
        if files_manifest:
            logger.debug(f"Adding files manifest to messages: {files_manifest['content'][:100]}")
            messages.append(files_manifest)
        else:
            logger.warning("No files manifest generated despite include_files_manifest=True")

Comment on lines +65 to +66

Copilot AI Nov 2, 2025

[nitpick] The warning message could be clearer by indicating this is expected behavior when there are no files in the session context. Consider rewording to: logger.debug("No files manifest generated (no files in session context)") or adding a check to only warn if files exist but manifest wasn't generated.

Suggested change
            logger.warning("No files manifest generated despite include_files_manifest=True")
            if not files_in_context:
                logger.debug("No files manifest generated (no files in session context)")
            else:
                logger.warning("No files manifest generated despite files present in session context")

    return messages
8 changes: 7 additions & 1 deletion backend/infrastructure/app_factory.py
@@ -7,6 +7,7 @@
from interfaces.transport import ChatConnectionProtocol
from modules.config import ConfigManager
from modules.file_storage import S3StorageClient, FileManager
from modules.file_storage.mock_s3_client import MockS3StorageClient
from modules.llm.litellm_caller import LiteLLMCaller
from modules.mcp_tools import MCPToolManager
from modules.rag import RAGClient
@@ -37,7 +38,12 @@ def __init__(self) -> None:
)

# File storage & manager
self.file_storage = S3StorageClient()
if self.config_manager.app_settings.use_mock_s3:
logger.info("Using MockS3StorageClient (in-process, no Docker required)")
self.file_storage = MockS3StorageClient()
else:
logger.info("Using S3StorageClient (MinIO/AWS S3)")
self.file_storage = S3StorageClient()
self.file_manager = FileManager(self.file_storage)

logger.info("AppFactory initialized")
1 change: 1 addition & 0 deletions backend/modules/config/manager.py
@@ -127,6 +127,7 @@ def agent_mode_available(self) -> bool:
test_user: str = "test@test.com" # Test user for development

# S3/MinIO storage settings
use_mock_s3: bool = False # Use in-process S3 mock (no Docker required)
s3_endpoint: str = "http://localhost:9000"
s3_bucket_name: str = "atlas-files"
s3_access_key: str = "minioadmin"