From 6d71a2cc9def4c53051529e3b8c0e06e9f468da3 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Sat, 22 Nov 2025 03:32:02 +0000
Subject: [PATCH 1/3] Initial plan
From b02a95d2f57183796d4769a5954820defe192b5c Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Sat, 22 Nov 2025 03:40:46 +0000
Subject: [PATCH 2/3] Split documentation into organized topic-based files
- Created getting-started, admin, and developer directories
- Split 01_getting_started.md into installation.md
- Split 02_admin_guide.md into 10 focused topic files
- Split 03_developer_guide.md into 6 focused topic files
- Updated all references in README.md and QUICKSTART.md
- Each directory now has an index README.md for navigation
Co-authored-by: garland3 <1162675+garland3@users.noreply.github.com>
---
README.md | 6 +-
.../mcp/progress_updates_demo/QUICKSTART.md | 2 +-
docs/admin/README.md | 26 +++
docs/admin/admin-panel.md | 9 +
docs/admin/authentication.md | 168 +++++++++++++++
docs/admin/compliance.md | 15 ++
docs/admin/configuration.md | 36 ++++
docs/admin/file-storage.md | 50 +++++
docs/admin/help-config.md | 29 +++
docs/admin/llm-config.md | 113 ++++++++++
docs/admin/logging-monitoring.md | 28 +++
docs/admin/mcp-servers.md | 149 ++++++++++++++
docs/admin/splash-config.md | 58 ++++++
docs/admin/tool-approval.md | 34 +++
docs/developer/README.md | 17 ++
docs/developer/architecture.md | 22 ++
docs/developer/canvas-renderers.md | 56 +++++
docs/developer/conventions.md | 11 +
docs/developer/creating-mcp-servers.md | 99 +++++++++
docs/developer/progress-updates.md | 194 ++++++++++++++++++
docs/developer/working-with-files.md | 50 +++++
docs/getting-started/README.md | 14 ++
docs/getting-started/installation.md | 135 ++++++++++++
23 files changed, 1317 insertions(+), 4 deletions(-)
create mode 100644 docs/admin/README.md
create mode 100644 docs/admin/admin-panel.md
create mode 100644 docs/admin/authentication.md
create mode 100644 docs/admin/compliance.md
create mode 100644 docs/admin/configuration.md
create mode 100644 docs/admin/file-storage.md
create mode 100644 docs/admin/help-config.md
create mode 100644 docs/admin/llm-config.md
create mode 100644 docs/admin/logging-monitoring.md
create mode 100644 docs/admin/mcp-servers.md
create mode 100644 docs/admin/splash-config.md
create mode 100644 docs/admin/tool-approval.md
create mode 100644 docs/developer/README.md
create mode 100644 docs/developer/architecture.md
create mode 100644 docs/developer/canvas-renderers.md
create mode 100644 docs/developer/conventions.md
create mode 100644 docs/developer/creating-mcp-servers.md
create mode 100644 docs/developer/progress-updates.md
create mode 100644 docs/developer/working-with-files.md
create mode 100644 docs/getting-started/README.md
create mode 100644 docs/getting-started/installation.md
diff --git a/README.md b/README.md
index 52e0d53..4aa1828 100644
--- a/README.md
+++ b/README.md
@@ -27,11 +27,11 @@ A modern LLM chat interface with MCP (Model Context Protocol) integration.
We have created a set of comprehensive guides to help you get the most out of Atlas UI 3.
-* **[Getting Started](./docs/01_getting_started.md)**: The perfect starting point for all users. This guide covers how to get the application running with Docker or on your local machine.
+* **[Getting Started](./docs/getting-started/installation.md)**: The perfect starting point for all users. This guide covers how to get the application running with Docker or on your local machine.
-* **[Administrator's Guide](./docs/02_admin_guide.md)**: For those who will deploy and manage the application. This guide details configuration, security settings, access control, and other operational topics.
+* **[Administrator's Guide](./docs/admin/README.md)**: For those who will deploy and manage the application. This guide details configuration, security settings, access control, and other operational topics.
-* **[Developer's Guide](./docs/03_developer_guide.md)**: For developers who want to contribute to the project. It provides an overview of the architecture and instructions for creating new MCP servers.
+* **[Developer's Guide](./docs/developer/README.md)**: For developers who want to contribute to the project. It provides an overview of the architecture and instructions for creating new MCP servers.
## For AI Agent Contributors
diff --git a/backend/mcp/progress_updates_demo/QUICKSTART.md b/backend/mcp/progress_updates_demo/QUICKSTART.md
index 897927a..e2da3c3 100644
--- a/backend/mcp/progress_updates_demo/QUICKSTART.md
+++ b/backend/mcp/progress_updates_demo/QUICKSTART.md
@@ -270,4 +270,4 @@ See `/backend/mcp/progress_updates_demo/main.py` for complete working examples.
## Documentation
-Full documentation: [Developer Guide - Progress Updates](../docs/03_developer_guide.md#progress-updates-and-intermediate-results)
+Full documentation: [Developer Guide - Progress Updates](../../../docs/developer/progress-updates.md)
diff --git a/docs/admin/README.md b/docs/admin/README.md
new file mode 100644
index 0000000..9d1f68e
--- /dev/null
+++ b/docs/admin/README.md
@@ -0,0 +1,26 @@
+# Administrator's Guide
+
+This guide is for administrators responsible for deploying, configuring, and managing the Atlas UI 3 application.
+
+## Topics
+
+### Configuration
+- [Configuration Architecture](configuration.md) - Understanding the layered configuration system
+- [MCP Server Configuration](mcp-servers.md) - Setting up and configuring MCP tool servers
+- [LLM Configuration](llm-config.md) - Configuring Large Language Models
+
+### Security & Access Control
+- [Authentication & Authorization](authentication.md) - User authentication and group-based access control
+- [Compliance & Data Security](compliance.md) - Compliance levels and data segregation
+- [Tool Approval System](tool-approval.md) - Managing tool execution permissions
+
+### Storage & Infrastructure
+- [File Storage (S3)](file-storage.md) - Configuring S3-compatible object storage
+
+### Operations
+- [Logging & Monitoring](logging-monitoring.md) - Application logs and health monitoring
+- [Admin Panel](admin-panel.md) - Using the administrative interface
+
+### UI Customization
+- [Help Modal](help-config.md) - Customizing the Help/About modal
+- [Splash Screen](splash-config.md) - Configuring the startup splash screen
diff --git a/docs/admin/admin-panel.md b/docs/admin/admin-panel.md
new file mode 100644
index 0000000..c273eda
--- /dev/null
+++ b/docs/admin/admin-panel.md
@@ -0,0 +1,9 @@
+# Admin Panel
+
+The application includes an admin panel that provides access to configuration values and application logs.
+
+* **Access**: To access the admin panel, a user must be in the `admin` group. This requires a correctly configured `is_user_in_group` function.
+* **Icon**: Admin users will see a shield icon on the main page, which leads to the admin panel.
+* **Features**:
+ * View the current application configuration.
+ * View the application logs (`app.jsonl`).
diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md
new file mode 100644
index 0000000..d057ccc
--- /dev/null
+++ b/docs/admin/authentication.md
@@ -0,0 +1,168 @@
+# Authentication & Authorization
+
+The application is designed with the expectation that it operates behind a reverse proxy in a production environment. It does **not** handle user authentication (i.e., logging users in) by itself. Instead, it trusts a header that is injected by an upstream authentication service.
+
+## Production Authentication Flow
+
+The intended flow for user authentication in a production environment is as follows:
+
+```
+ +-----------+ +-----------------+ +----------------+ +--------------------+
+ | | | | | | | |
+ | User |----->| Reverse Proxy |----->| Auth Service |----->| Atlas UI Backend |
+ | | 1. | | 2. | | 3. | |
+ +-----------+ +-----------------+ +----------------+ +--------------------+
+```
+
+1. The user makes a request to the application's public URL, which is handled by the **Reverse Proxy**.
+2. The Reverse Proxy communicates with an **Authentication Service** (e.g., an SSO provider, an OAuth server) to validate the user's credentials (like cookies or tokens).
+3. Once the user is authenticated, the Reverse Proxy **injects the user's identity** (e.g., their email address) into an HTTP header and forwards the request to the **Atlas UI Backend**.
+
+The backend application reads this header to identify the user. The header name is configurable via the `AUTH_USER_HEADER` environment variable (default: `X-User-Email`). This allows flexibility for different reverse proxy setups that may use different header names (e.g., `X-Authenticated-User`, `X-Remote-User`). This model is secure only if the backend is not directly exposed to the internet, ensuring that all requests are processed by the proxy first.
+
+If using AWS Application Load Balancer (ALB) as the Auth Service, the following authentication configuration should be used:
+
+```
+ AUTH_USER_HEADER=x-amzn-oidc-data
+ AUTH_USER_HEADER_TYPE=aws-alb-jwt
+ AUTH_AWS_EXPECTED_ALB_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/your-alb-name/...
+ AUTH_AWS_REGION=us-east-1
+```
+
+This configuration decodes the base64-encoded JWT passed in the `x-amzn-oidc-data` header, validates it (including checking the signing ALB against the expected ARN), and extracts the user's email address from the token's claims.
+
+## Development Behavior
+
+In a local development environment (when `DEBUG_MODE=true` in the `.env` file), the system falls back to using a default `test@test.com` user if the configured authentication header is not present.
+
+## Configuring the Authentication Header
+
+Different reverse proxy setups use different header names to pass authenticated user information. The application supports configuring the header name via the `AUTH_USER_HEADER` environment variable.
+
+**Default Configuration:**
+```
+AUTH_USER_HEADER=X-User-Email
+```
+
+**Common Alternative Headers:**
+```
+# For Apache mod_auth setups
+AUTH_USER_HEADER=X-Remote-User
+
+# For some SSO providers
+AUTH_USER_HEADER=X-Authenticated-User
+
+# For custom reverse proxy configurations
+AUTH_USER_HEADER=X-Custom-Auth-Header
+```
+
+This setting allows the application to work with various authentication infrastructures without code changes.
+
+## Proxy Secret Authentication (Optional Security Layer)
+
+For additional security, you can configure the application to require a secret value in a specific header to validate that requests are coming from your trusted reverse proxy. This prevents direct access to the backend application, even if it's accidentally exposed.
+
+**When to Use Proxy Secret Authentication:**
+- When you want an additional layer of security beyond network isolation
+- To prevent unauthorized access if the backend accidentally becomes publicly accessible
+- To ensure requests only come from your approved reverse proxy
+
+**Configuration:**
+
+Add the following to your `.env` file:
+
+```bash
+# Enable proxy secret validation
+FEATURE_PROXY_SECRET_ENABLED=true
+
+# Header name for the proxy secret (default: X-Proxy-Secret)
+PROXY_SECRET_HEADER=X-Proxy-Secret
+
+# The actual secret value - use a strong, randomly generated value
+PROXY_SECRET=your-secure-random-secret-here
+
+# Optional: Customize the redirect URL for failed authentication (default: /auth)
+AUTH_REDIRECT_URL=/auth
+```
+
+**Reverse Proxy Configuration:**
+
+Configure your reverse proxy to inject the secret header with every request. Examples:
+
+**NGINX:**
+```nginx
+location / {
+ proxy_pass http://backend:8000;
+ proxy_set_header X-Proxy-Secret "your-secure-random-secret-here";
+ proxy_set_header X-User-Email $remote_user;
+ # ... other headers
+}
+```
+
+**Apache:**
+```apache
+<Location "/">
+    RequestHeader set X-Proxy-Secret "your-secure-random-secret-here"
+    RequestHeader set X-User-Email %{REMOTE_USER}e
+    ProxyPass "http://backend:8000/"
+    ProxyPassReverse "http://backend:8000/"
+</Location>
+```
+
+**Behavior:**
+- When enabled, the middleware validates the proxy secret on every request (except static files and the auth endpoint)
+- If the secret is missing or incorrect:
+ - **API endpoints** (`/api/*`): Return 401 Unauthorized
+ - **Browser endpoints**: Redirect to the configured auth URL
+- **Debug mode** (`DEBUG_MODE=true`): Proxy secret validation is automatically disabled for local development
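
The rules above can be sketched as a framework-agnostic decision function. This is an illustration of the documented behavior only, not the application's actual middleware; in particular, the static-file path prefix is an assumption.

```python
from typing import Dict, Tuple

def check_proxy_secret(
    headers: Dict[str, str],
    path: str,
    *,
    expected_secret: str,
    header_name: str = "X-Proxy-Secret",
    auth_redirect_url: str = "/auth",
    debug_mode: bool = False,
) -> Tuple[str, str]:
    """Decide how to handle a request under proxy-secret validation.

    Returns ("allow", ""), ("deny", "401"), or ("redirect", url).
    """
    # Debug mode disables validation entirely (local development).
    if debug_mode:
        return ("allow", "")
    # Static files and the auth endpoint itself are exempt
    # (the "/static/" prefix here is illustrative).
    if path.startswith("/static/") or path == auth_redirect_url:
        return ("allow", "")
    # Missing or wrong secret: 401 for API calls, redirect for browsers.
    if headers.get(header_name) != expected_secret:
        if path.startswith("/api/"):
            return ("deny", "401")
        return ("redirect", auth_redirect_url)
    return ("allow", "")
```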
+
+**Security Best Practices:**
+- Generate a strong, random secret (e.g., 32+ characters)
+- Store the secret securely in environment variables, not in configuration files
+- Use different secrets for different environments (dev, staging, production)
+- Rotate the secret periodically as part of your security policy
+- Never commit the secret to version control
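
One way to generate a suitably strong secret from the command line:

```shell
# 32 random bytes, base64-encoded (~44 characters)
openssl rand -base64 32

# Equivalent using only the Python standard library
python3 -c "import secrets; print(secrets.token_urlsafe(32))"
```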
+
+## Customizing Authorization
+
+**IMPORTANT: For production deployments, configuring authorization is essential.** The default implementation is a mock and **must be replaced** with your organization's actual authorization system. You have two primary methods to achieve this:
+
+### Recommended Method: HTTP Endpoint
+
+You can configure the application to call an external HTTP endpoint to check for group membership. This is the most flexible and maintainable solution, requiring no code changes to the application itself.
+
+1. **Configure the Endpoint in `.env`**:
+ Add the following variables to your `.env` file:
+ ```
+ # The URL of your authorization service
+ AUTH_GROUP_CHECK_URL=https://your-auth-service.example.com/api/check-group
+
+ # The API key for authenticating with your service
+ AUTH_GROUP_CHECK_API_KEY=your-secret-api-key
+ ```
+
+2. **Endpoint Requirements**:
+ Your authorization endpoint must:
+ * Accept a `POST` request.
+ * Expect a JSON body with `user_id` and `group_id`:
+ ```json
+ {
+ "user_id": "user@example.com",
+ "group_id": "admin"
+ }
+ ```
+ * Authenticate requests using a bearer token in the `Authorization` header.
+ * Return a JSON response with a boolean `is_member` field:
+ ```json
+ {
+ "is_member": true
+ }
+ ```
+
+If `AUTH_GROUP_CHECK_URL` is not set, the application will fall back to the mock implementation in `backend/core/auth.py`.
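
For reference, a minimal client for an endpoint matching this contract might look like the following. This is an illustrative sketch only; the application's real implementation lives in `backend/core/auth.py`.

```python
import json
import urllib.request

def check_group_membership(url: str, api_key: str, user_id: str, group_id: str) -> bool:
    """POST {user_id, group_id} to the authorization endpoint and read is_member."""
    payload = json.dumps({"user_id": user_id, "group_id": group_id}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # bearer-token auth, as required
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = json.load(resp)
    return bool(body.get("is_member", False))
```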
+
+When using the mock implementation (no external endpoint configured), **all users are treated as part of the `users` group by default**. This ensures that basic, non-privileged features remain available even without an authorization service. Higher-privilege groups such as `admin` still require explicit membership via the mock group table or your real authorization system.
+
+### Legacy Method: Modifying the Code
+
+For advanced use cases, you can still directly modify the `is_user_in_group` function located in `backend/core/auth.py`. The default implementation is a mock and **must be replaced** if you are not using the HTTP endpoint method.
diff --git a/docs/admin/compliance.md b/docs/admin/compliance.md
new file mode 100644
index 0000000..2b77f59
--- /dev/null
+++ b/docs/admin/compliance.md
@@ -0,0 +1,15 @@
+# Compliance and Data Security
+
+The compliance system is designed to prevent the unintentional mixing of data from different security environments. This is essential for organizations that handle sensitive information.
+
+## Compliance Levels
+
+You can assign a `compliance_level` to LLM endpoints, RAG data sources, and MCP servers. These levels are defined in `config/defaults/compliance-levels.json` (which can be overridden).
+
+**Example:** A tool that accesses internal-only data can be marked with `compliance_level: "Internal"`, while a tool that uses a public API can be marked as `compliance_level: "Public"`.
+
+## The Allowlist Model
+
+The compliance system uses an explicit **allowlist**. Each compliance level defines which other levels it is allowed to interact with. This prevents data from a highly secure environment (e.g., "HIPAA") from being accidentally sent to a less secure one (e.g., "Public").
+
+For example, a session running with a "HIPAA" compliance level will not be able to use tools or data sources marked as "Public", preventing sensitive data from being exposed.
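
As an illustration of the allowlist idea (the level names and dictionary shape here are hypothetical, not the exact schema of `compliance-levels.json`):

```python
# Hypothetical level definitions in roughly the shape an allowlist config might take
LEVELS = {
    "Public":   {"allowed": ["Public"]},
    "Internal": {"allowed": ["Public", "Internal"]},
    "HIPAA":    {"allowed": ["Internal", "HIPAA"]},
}

def can_interact(session_level: str, resource_level: str) -> bool:
    """Explicit allowlist: anything not listed is denied."""
    return resource_level in LEVELS.get(session_level, {}).get("allowed", [])
```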
diff --git a/docs/admin/configuration.md b/docs/admin/configuration.md
new file mode 100644
index 0000000..9103e70
--- /dev/null
+++ b/docs/admin/configuration.md
@@ -0,0 +1,36 @@
+# Configuration Architecture
+
+The application uses a layered configuration system that loads settings from three primary sources in the following order of precedence:
+
+1. **Environment Variables (`.env`)**: Highest priority. These override any settings from files.
+2. **Override Files (`config/overrides/`)**: For custom, instance-specific configurations. These files are not checked into version control.
+3. **Default Files (`config/defaults/`)**: The base configuration that is part of the repository.
+
+**Note**: The definitive source for all possible configuration options and their default values is the `AppSettings` class within `backend/modules/config/config_manager.py`. This class dictates how the application reads and interprets all its settings.
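
The precedence order can be pictured as a simple layered merge. This is a sketch only; the real loading logic lives in the `AppSettings` class, and environment variables are applied on top of the result.

```python
import json
from pathlib import Path

def load_layered(name: str,
                 defaults_dir: str = "config/defaults",
                 overrides_dir: str = "config/overrides") -> dict:
    """Later layers shadow earlier ones, key by key (top level only here)."""
    merged: dict = {}
    for directory in (defaults_dir, overrides_dir):  # defaults first, overrides win
        path = Path(directory) / name
        if path.exists():
            merged.update(json.loads(path.read_text()))
    return merged
```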
+
+## Key Override Files
+
+To customize your instance, you will place your own versions of the configuration files in the `config/overrides/` directory. The most common files to override are:
+
+* **`mcp.json`**: Registers and configures the MCP (tool) servers that provide capabilities to the LLM.
+* **`llmconfig.yml`**: Defines the list of available Large Language Models and their connection details.
+* **`compliance-levels.json`**: Defines the security compliance levels (e.g., Public, Internal, HIPAA) and the rules for how they can interact.
+* **`help-config.json`**: Populates the content of the "Help" modal in the user interface.
+* **`splash-config.json`**: Configures the startup splash screen for displaying policies and information to users.
+* **`messages.txt`**: Defines the text for system-wide banner messages that can be displayed to all users.
+
+## The `.env` File
+
+This file is crucial for setting up your instance. Start by copying the example file:
+
+```bash
+cp .env.example .env
+```
+
+Key settings in the `.env` file include:
+
+* **API Keys**: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.
+* **Authentication Header**: `AUTH_USER_HEADER` configures the HTTP header name used to extract the authenticated username from your reverse proxy (default: `X-User-Email`).
+* **Feature Flags**: Enable or disable major features like `FEATURE_AGENT_MODE_AVAILABLE`.
+* **S3 Connection**: Configure the connection to your S3-compatible storage. For local testing, you can set `USE_MOCK_S3=true` to use an in-memory mock instead of a real S3 bucket. **This mock must never be used in production.**
+* **Log Directory**: The `APP_LOG_DIR` variable points to the folder where the application log file (`app.jsonl`) will be stored. This path must be updated to a valid directory in your deployment environment.
diff --git a/docs/admin/file-storage.md b/docs/admin/file-storage.md
new file mode 100644
index 0000000..d7259b3
--- /dev/null
+++ b/docs/admin/file-storage.md
@@ -0,0 +1,50 @@
+# File Storage and Tool Integration
+
+The application uses S3-compatible object storage for handling all user-uploaded files. This system is designed to be secure and flexible, allowing tools to access files without ever needing direct S3 credentials.
+
+## Configuration Modes
+
+You can configure the file storage in one of two modes using the `.env` file.
+
+### 1. Development Mode (Mock S3)
+For local development and testing, you can use a built-in mock S3 service.
+
+* **Setting**: `USE_MOCK_S3=true`
+* **Behavior**: Files are stored on the local filesystem in the `minio-data/` directory. This mode is convenient as it requires no external services or credentials.
+* **Use Case**: Ideal for local development. **This must not be used in production.**
+
+### 2. Production Mode (Real S3)
+For production, you must connect to a real S3-compatible object store like AWS S3, MinIO, or another provider.
+
+* **Setting**: `USE_MOCK_S3=false`
+* **Configuration**: You must provide the connection details in your `.env` file:
+ ```
+ S3_ENDPOINT_URL=https://your-s3-provider.com
+ S3_BUCKET_NAME=your-bucket-name
+ S3_ACCESS_KEY=your-access-key
+ S3_SECRET_KEY=your-secret-key
+ S3_REGION=us-east-1
+ ```
+
+## How MCP Tools Access Files
+
+The application uses a secure workflow that prevents MCP tools from needing direct access to S3 credentials. Instead, the backend acts as a proxy.
+
+```
+1. User uploads file
+   [User] --> [Atlas UI Backend] --> [S3 Bucket]
+                     |
+                     | 2. LLM calls tool with filename
+                     | 3. Backend creates temporary, secure URL
+                     v
+4. Tool downloads file from Atlas UI API
+   [MCP Tool] <-- [Atlas UI Backend] <-- [S3 Bucket]
+```
+
+1. **File Upload**: A user uploads a file, which is stored in the configured S3 bucket.
+2. **Tool Call**: The LLM decides to use a tool that needs the file and passes the `filename` as an argument.
+3. **Secure URL Generation**: The Atlas UI backend intercepts the tool call. It generates a temporary, secure URL that points back to its own API (e.g., `/api/files/download/...`). This URL contains a short-lived capability token that grants access only to that specific file.
+4. **Tool Execution**: The backend replaces the original `filename` argument with this new secure URL and sends it to the MCP tool. The tool can then make a simple `GET` request to the URL to download the file content.
+
+This process ensures that MCP tools can access the files they need without ever handling sensitive S3 credentials, enhancing the overall security of the system.
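
From a tool author's perspective, step 4 is just an HTTP GET, since the rewritten URL arrives in place of the tool's `filename` argument (a sketch; the function name is illustrative):

```python
import urllib.request

def fetch_input_file(file_url: str) -> bytes:
    """Download the file behind a pre-authorized Atlas UI URL.

    The capability token is embedded in the URL itself, so no extra
    credentials or S3 access are needed inside the tool.
    """
    with urllib.request.urlopen(file_url, timeout=30) as resp:
        return resp.read()
```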
diff --git a/docs/admin/help-config.md b/docs/admin/help-config.md
new file mode 100644
index 0000000..a5d38b1
--- /dev/null
+++ b/docs/admin/help-config.md
@@ -0,0 +1,29 @@
+# Customizing the Help Modal
+
+You can customize the content that appears in the "Help" or "About" modal in the UI by creating a `help-config.json` file.
+
+* **Location**: Place your custom file at `config/overrides/help-config.json`.
+
+The file consists of a title and a list of sections, each with a title and content that can include markdown for formatting.
+
+## Example `help-config.json`
+
+```json
+{
+ "title": "About Our Chat Application",
+ "sections": [
+ {
+ "title": "Welcome",
+ "content": "This is a custom chat application for our organization. It provides access to internal tools and data sources."
+ },
+ {
+ "title": "Available Tools",
+ "content": "You can use tools for:\n\n* Querying databases\n* Analyzing documents\n* Searching our internal knowledge base"
+ },
+ {
+ "title": "Support",
+ "content": "For questions or issues, please contact the support team at [support@example.com](mailto:support@example.com)."
+ }
+ ]
+}
+```
diff --git a/docs/admin/llm-config.md b/docs/admin/llm-config.md
new file mode 100644
index 0000000..131b553
--- /dev/null
+++ b/docs/admin/llm-config.md
@@ -0,0 +1,113 @@
+# LLM Configuration
+
+The `llmconfig.yml` file is where you define all the Large Language Models that the application can use. The application uses the `LiteLLM` library, which allows it to connect to a wide variety of LLM providers.
+
+* **Location**: The default configuration is at `config/defaults/llmconfig.yml`. You should place your instance-specific configuration in `config/overrides/llmconfig.yml`.
+
+## Comprehensive Example
+
+Here is an example of a model configuration that uses all available options.
+
+```yaml
+models:
+ MyCustomGPT:
+ model_name: openai/gpt-4-turbo-preview
+ model_url: https://api.openai.com/v1/chat/completions
+ api_key: "${OPENAI_API_KEY}"
+ description: "The latest and most capable model from OpenAI."
+ max_tokens: 8000
+ temperature: 0.7
+ extra_headers:
+ "x-my-custom-header": "value"
+ compliance_level: "External"
+
+ OpenRouterLlama:
+ model_name: meta-llama/llama-3-70b-instruct
+ model_url: https://openrouter.ai/api/v1
+ api_key: "${OPENROUTER_API_KEY}"
+ description: "Llama 3 70B via OpenRouter"
+ max_tokens: 4096
+ temperature: 0.7
+ extra_headers:
+ "HTTP-Referer": "${OPENROUTER_SITE_URL}"
+ "X-Title": "${OPENROUTER_SITE_NAME}"
+ compliance_level: "External"
+```
+
+**Note**: The second example demonstrates environment variable expansion in `extra_headers`, which is useful for services like OpenRouter that require site identification headers.
+
+## Environment Variable Expansion in LLM Configs
+
+Similar to MCP server authentication, LLM configurations support environment variable expansion for API keys and header values. This feature provides security and flexibility in managing sensitive credentials.
+
+### Security Best Practice
+
+**Never store API keys directly in configuration files.** Instead, use environment variable substitution:
+
+```yaml
+models:
+ my-openai-model:
+ model_name: openai/gpt-4
+ model_url: https://api.openai.com/v1
+ api_key: "${OPENAI_API_KEY}"
+ extra_headers:
+ "X-Custom-Header": "${MY_CUSTOM_HEADER_VALUE}"
+```
+
+Then set the environment variables:
+```bash
+export OPENAI_API_KEY="sk-your-secret-api-key"
+export MY_CUSTOM_HEADER_VALUE="your-custom-value"
+```
+
+### How It Works
+
+1. **API Key Expansion**: The `api_key` value is processed at runtime. If it contains the `${VAR_NAME}` pattern, it's replaced with the value of the environment variable `VAR_NAME`.
+2. **Extra Headers Expansion**: Each value in the `extra_headers` dictionary is also processed for environment variable expansion, allowing you to use dynamic values for headers like `HTTP-Referer` or `X-Title`.
+3. **Error Handling**: If a required environment variable is missing, the application will raise a clear error message indicating which variable needs to be set. This prevents silent failures where unexpanded variables might be sent to the API provider.
+4. **Literal Values**: You can still use literal string values without environment variables for development or testing purposes (though not recommended for production).
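
The expansion rule in steps 1 and 2 amounts to something like the following (a sketch of the behavior, not the application's exact code):

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand(value: str) -> str:
    """Replace each ${VAR} with os.environ['VAR']; fail loudly if unset."""
    def sub(match: "re.Match[str]") -> str:
        name = match.group(1)
        if name not in os.environ:
            raise ValueError(f"Environment variable {name} is not set")
        return os.environ[name]
    return _VAR.sub(sub, value)
```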
+
+### Common Use Cases
+
+**OpenRouter Configuration:**
+```yaml
+models:
+ openrouter-claude:
+ model_name: anthropic/claude-3-opus
+ model_url: https://openrouter.ai/api/v1
+ api_key: "${OPENROUTER_API_KEY}"
+ extra_headers:
+ "HTTP-Referer": "${OPENROUTER_SITE_URL}"
+ "X-Title": "${OPENROUTER_SITE_NAME}"
+```
+
+**Custom LLM Provider with Authentication Headers:**
+```yaml
+models:
+ custom-provider:
+ model_name: custom/model-name
+ model_url: https://custom-llm.example.com/v1
+ api_key: "${CUSTOM_PROVIDER_API_KEY}"
+ extra_headers:
+ "X-Tenant-ID": "${TENANT_IDENTIFIER}"
+ "X-Region": "${DEPLOYMENT_REGION}"
+```
+
+### Security Considerations
+
+- **Recommended**: Use environment variables for all production API keys and sensitive header values
+- **Alternative**: For development/testing, you can use direct string values (not recommended for production)
+- **Never**: Commit API keys to `config/defaults/llmconfig.yml` or any version-controlled files
+
+This environment variable expansion system works identically to the MCP server `auth_token` field, providing consistent behavior across all authentication and configuration mechanisms in the application.
+
+## Configuration Fields Explained
+
+* **`model_name`**: (string) The identifier for the model that will be sent to the LLM provider. For `LiteLLM`, you often need to prefix this with the provider name (e.g., `openai/`, `anthropic/`).
+* **`model_url`**: (string) The API endpoint for the model.
+* **`api_key`**: (string) The API key for authenticating with the model's provider. **Security Best Practice**: Use environment variable substitution with the `${VAR_NAME}` syntax (e.g., `"${OPENAI_API_KEY}"`). The application will automatically expand these variables at runtime and provide clear error messages if a required variable is not set. This works identically to the `auth_token` field in MCP server configurations. You can also use literal API key values for development/testing (not recommended for production).
+* **`description`**: (string) A short description of the model that will be shown to users in the model selection dropdown.
+* **`max_tokens`**: (integer) The maximum number of tokens to generate in a response.
+* **`temperature`**: (float) A value between 0.0 and 1.0 that controls the creativity of the model's responses. Higher values are more creative.
+* **`extra_headers`**: (dictionary) A set of custom HTTP headers to include in the request, which is useful for some proxy services or custom providers. **Environment Variable Support**: Header values can also use the `${VAR_NAME}` syntax for environment variable expansion. This is particularly useful for services like OpenRouter that require headers like `HTTP-Referer` and `X-Title`. If an environment variable is missing, the application will raise a clear error message.
+* **`compliance_level`**: (string) The security compliance level of this model (e.g., "Public", "Internal"). This is used to filter which models can be used in certain compliance contexts.
diff --git a/docs/admin/logging-monitoring.md b/docs/admin/logging-monitoring.md
new file mode 100644
index 0000000..8540da4
--- /dev/null
+++ b/docs/admin/logging-monitoring.md
@@ -0,0 +1,28 @@
+# Logging and Monitoring
+
+The application produces structured logs in JSON Lines format (`.jsonl`), which makes them easy to parse and analyze.
+
+## The `app.jsonl` File
+
+All application events, errors, and important information are written to a single log file named `app.jsonl`. This file is the primary source for debugging issues and monitoring the application's health. You can view the contents of this file directly from the **Admin Panel**.
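
Because each line is a standalone JSON object, the log is easy to filter with a few lines of Python. Field names such as `level` are illustrative here; inspect your own `app.jsonl` for the actual schema.

```python
import json

def read_errors(log_path: str):
    """Scan a JSON Lines log, yielding entries whose level is ERROR."""
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines defensively
            entry = json.loads(line)
            if entry.get("level") == "ERROR":
                yield entry
```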
+
+## Configuring the Log Directory
+
+It is essential to configure the location where the `app.jsonl` file is stored, especially in a production environment.
+
+* **Configuration**: Set the `APP_LOG_DIR` variable in your `.env` file.
+* **Example**:
+ ```
+ APP_LOG_DIR=/var/logs/atlas-ui
+ ```
+* **Default**: If this variable is not set, the application will attempt to create a `logs` directory in the project's root, which may not be desirable or possible in a production deployment. Ensure the specified directory exists and the application has the necessary permissions to write to it.
+
+## Health Monitoring
+
+The application provides a public health check endpoint at `/api/health` specifically designed for monitoring tools, load balancers, and orchestration platforms. This endpoint requires no authentication and returns a JSON response containing the service status, version, and current timestamp in ISO-8601 format.
+
+You can integrate this endpoint into your monitoring infrastructure (such as Kubernetes liveness/readiness probes, AWS ELB health checks, or Prometheus monitoring) to verify that the backend service is running and responding correctly.
+
+The endpoint is lightweight and does not check database connectivity or external dependencies, making it ideal for high-frequency health polling without impacting application performance.
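
For example, a Kubernetes readiness probe against this endpoint might look like the following (the port depends on your deployment):

```yaml
readinessProbe:
  httpGet:
    path: /api/health
    port: 8000
  periodSeconds: 10
  timeoutSeconds: 2
```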
+
+For more detailed system status information that includes configuration and component health, admin users can access the `/admin/system-status` endpoint, which requires authentication and admin group membership.
diff --git a/docs/admin/mcp-servers.md b/docs/admin/mcp-servers.md
new file mode 100644
index 0000000..cc41d68
--- /dev/null
+++ b/docs/admin/mcp-servers.md
@@ -0,0 +1,149 @@
+# MCP Server Configuration
+
+The `mcp.json` file defines the MCP (Model Context Protocol) servers that the application can connect to. These servers provide the tools and capabilities available to the LLM.
+
+* **Location**: The default configuration is at `config/defaults/mcp.json`. You should place your instance-specific configuration in `config/overrides/mcp.json`.
+
+## Comprehensive Example
+
+Here is an example of a server configuration that uses all available options.
+
+```json
+{
+ "MyExampleServer": {
+ "enabled": true,
+ "description": "A full description of what this server does, which appears in the marketplace.",
+ "short_description": "A short description for tooltips.",
+ "author": "Your Team Name",
+ "help_email": "support@example.com",
+ "groups": ["admin", "engineering"],
+ "command": ["python", "mcp/MyExampleServer/main.py"],
+ "cwd": "backend",
+ "env": {
+ "API_KEY": "${MY_API_KEY}",
+ "DEBUG_MODE": "false",
+ "MAX_RETRIES": "3"
+ },
+ "url": null,
+ "transport": "stdio",
+ "compliance_level": "Internal",
+ "require_approval": ["dangerous_tool", "another_risky_tool"],
+ "allow_edit": ["dangerous_tool"]
+ }
+}
+```
+
+## Configuration Fields Explained
+
+* **`enabled`**: (boolean) If `false`, the server is completely disabled and will not be loaded.
+* **`description`**: (string) A detailed description of the server's purpose and capabilities. This is shown to users in the MCP Marketplace.
+* **`short_description`**: (string) A brief, one-line description used for tooltips or other compact UI elements.
+* **`author`**: (string) The name of the team or individual who created the server.
+* **`help_email`**: (string) A contact email for users who need help with the server's tools.
+* **`groups`**: (list of strings) A list of user groups that are allowed to access this server. If a user is not in any of these groups, the server will be hidden from them.
+* **`command`**: (list of strings) For servers using `stdio` transport, this is the command and its arguments used to start the server process.
+* **`cwd`**: (string) The working directory from which to run the `command`.
+* **`env`**: (object) Environment variables to set for `stdio` servers. Keys are variable names, values can be literal strings or use environment variable substitution (e.g., `"${ENV_VAR}"`). This is only applicable to stdio servers and will be ignored for HTTP/SSE servers.
+* **`url`**: (string) For servers using `http` or `sse` transport, this is the URL of the server's endpoint.
+* **`transport`**: (string) The communication protocol to use. Can be `stdio`, `http`, or `sse`. This takes priority over auto-detection.
+* **`auth_token`**: (string) For HTTP/SSE servers, the bearer token used for authentication. Use environment variable substitution (e.g., `"${MCP_SERVER_TOKEN}"`) to avoid storing secrets in config files. Stdio servers ignore this field.
+* **`compliance_level`**: (string) The security compliance level of this server (e.g., "Public", "Internal", "SOC2"). This is used for data segregation and access control.
+* **`require_approval`**: (list of strings) A list of tool names (without the server prefix) that will always require user approval before execution.
+* **`allow_edit`**: (list of strings) A list of tool names for which the user is allowed to edit the arguments before approving. (Note: This is a legacy field and may be deprecated; the UI may allow editing for all approval requests).
+
+## Server Types
+
+The system can connect to different types of MCP servers:
+* **Standard I/O (`stdio`)**: Servers that are started as a subprocess and communicate over `stdin` and `stdout`.
+* **HTTP (`http`)**: Servers that expose a standard HTTP endpoint.
+* **Server-Sent Events (`sse`)**: Servers that stream responses over an HTTP connection.
+
+## MCP Server Authentication
+
+For MCP servers that require authentication, you can configure bearer token authentication using the `auth_token` field.
+
+### Environment Variable Substitution
+
+**Security Best Practice**: Never store API keys or tokens directly in configuration files. Instead, use environment variable substitution:
+
+```json
+{
+ "my-external-api": {
+ "url": "https://api.example.com/mcp",
+ "transport": "http",
+ "auth_token": "${MCP_EXTERNAL_API_TOKEN}",
+ "groups": ["users"]
+ }
+}
+```
+
+Then set the environment variable:
+```bash
+export MCP_EXTERNAL_API_TOKEN="your-secret-api-key"
+```
+
+### How It Works
+
+1. **HTTP/SSE Servers**: The `auth_token` value is passed as a Bearer token in the `Authorization` header when connecting to the MCP server.
+2. **Stdio Servers**: The `auth_token` field is ignored since stdio servers don't use HTTP authentication.
+3. **Environment Variables**: If the token contains `${VAR_NAME}` pattern, it's replaced with the value of the environment variable `VAR_NAME`.
+4. **Error Handling**: If a required environment variable is missing, the server initialization will fail gracefully with a clear error message.
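The substitution and fail-fast behavior described in points 3 and 4 can be sketched as follows. This is a hypothetical helper for illustration, not the actual implementation:

```python
import os
import re


def substitute_env_vars(value: str) -> str:
    """Replace every ${VAR_NAME} occurrence with the value of that
    environment variable, failing with a clear error if one is missing."""

    def replace(match: re.Match) -> str:
        var_name = match.group(1)
        if var_name not in os.environ:
            raise ValueError(f"Environment variable '{var_name}' is not set")
        return os.environ[var_name]

    return re.sub(r"\$\{(\w+)\}", replace, value)
```

A literal token without the `${...}` pattern passes through unchanged, which matches the "direct string values" option described below.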
+
+### Security Considerations
+
+- **Recommended**: Use environment variables for all production tokens
+- **Alternative**: For development/testing, you can use direct string values (not recommended for production)
+- **Never**: Commit tokens to `config/defaults/mcp.json` or any version-controlled files
+
+## Environment Variables for Stdio Servers
+
+For stdio servers, you can pass custom environment variables to the server process using the `env` field. This is useful for:
+- Configuring server behavior without modifying command arguments
+- Passing credentials or API keys securely
+- Setting runtime configuration options
+
+### Example Configuration
+
+```json
+{
+ "my-external-tool": {
+ "command": ["wrapper-cli", "my.external.tool@latest", "--allow-write"],
+ "cwd": "backend",
+ "env": {
+ "CLOUD_PROFILE": "my-profile-9",
+ "CLOUD_REGION": "us-east-7",
+ "API_KEY": "${MY_API_KEY}",
+ "DEBUG_MODE": "false"
+ },
+ "groups": ["users"]
+ }
+}
+```
+
+Then set the environment variable before starting Atlas UI:
+```bash
+export MY_API_KEY="your-secret-api-key"
+```
+
+### Environment Variable Features
+
+- **Literal Values**: Environment variables can contain literal string values (e.g., `"CLOUD_REGION": "us-east-7"`)
+- **Variable Substitution**: Use `${VAR_NAME}` syntax to reference system environment variables (e.g., `"API_KEY": "${MY_API_KEY}"`)
+- **Empty Values**: An empty object `{}` is valid and will set no environment variables
+- **Error Handling**: If a referenced environment variable (e.g., `${MY_API_KEY}`) is not set in the system, the server initialization will fail with a clear error message
+- **Stdio Only**: The `env` field only applies to stdio servers; it is ignored for HTTP/SSE servers
+
+### Security Best Practices
+
+- Use environment variable substitution for all sensitive values (API keys, passwords, tokens)
+- Never store secrets directly in the `env` object values
+- Set environment variables via your deployment system (Docker, Kubernetes, systemd, etc.)
+- Use different values for development, staging, and production environments
+
+## Access Control with Groups
+
+You can restrict access to MCP servers based on user groups. This is a critical feature for controlling which users can access powerful or sensitive tools. If a user is not in the required group, the server will be completely invisible to them in the UI, and any attempt to call its functions will be blocked.
+
+## A Note on the `username` Argument
+
+As a security measure, if a tool is designed to accept a `username` argument, the Atlas UI backend will **always** overwrite this argument with the authenticated user's identity before calling the tool. This ensures that a tool always runs with the correct user context and prevents the LLM from impersonating another user.
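Conceptually, the backend's overwrite step behaves like this sketch (the function and argument names are hypothetical, not the actual code):

```python
def inject_username(tool_args: dict, authenticated_user: str) -> dict:
    """Return a copy of the tool arguments with any LLM-supplied
    username replaced by the authenticated user's identity."""
    args = dict(tool_args)  # avoid mutating the caller's dict
    if "username" in args:
        args["username"] = authenticated_user
    return args
```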
diff --git a/docs/admin/splash-config.md b/docs/admin/splash-config.md
new file mode 100644
index 0000000..f45ab44
--- /dev/null
+++ b/docs/admin/splash-config.md
@@ -0,0 +1,58 @@
+# Configuring the Splash Screen
+
+The splash screen feature allows you to display important policies and information to users when they first access the application. This is commonly used for displaying cookie policies, acceptable use policies, and other legal or organizational information.
+
+* **Location**: Place your custom file at `config/overrides/splash-config.json`.
+* **Feature Flag**: Enable the splash screen by setting `FEATURE_SPLASH_SCREEN_ENABLED=true` in your `.env` file.
+
+The splash screen supports two operational modes:
+
+1. **Accept Mode** (`require_accept: true`): Users must explicitly click "I Accept" to proceed. The close (X) button is hidden.
+2. **Dismiss Mode** (`require_accept: false`): Users can dismiss the screen by clicking "Close" or the X button in the header.
+
+A user's dismissal is recorded in the browser's local storage, and the splash screen will not be shown again until the configured duration expires (default: 30 days).
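The dismissal-expiry check the frontend performs can be sketched as a pure function (the function name and storage key are hypothetical):

```javascript
// Decide whether the splash screen is due to be shown again, given the
// timestamp (in ms) at which the user last dismissed it.
function shouldShowSplash(dismissedAtMs, dismissDurationDays, nowMs) {
  if (dismissedAtMs == null) return true; // never dismissed
  const expiryMs = dismissedAtMs + dismissDurationDays * 24 * 60 * 60 * 1000;
  return nowMs >= expiryMs;
}

// In the browser this would be wired to local storage, e.g.:
// const dismissedAt = Number(localStorage.getItem('splashDismissedAt'));
```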
+
+## Example `splash-config.json`
+
+```json
+{
+ "enabled": true,
+ "title": "Important Policies and Information",
+ "messages": [
+ {
+ "type": "heading",
+ "content": "Cookie Policy"
+ },
+ {
+ "type": "text",
+ "content": "This application uses cookies to enhance your experience and maintain your session. By continuing to use this application, you consent to our use of cookies."
+ },
+ {
+ "type": "heading",
+ "content": "Acceptable Use Policy"
+ },
+ {
+ "type": "text",
+ "content": "This system is for authorized use only. Users must comply with all applicable policies and regulations. Unauthorized access or misuse of this system may result in disciplinary action and/or legal prosecution."
+ }
+ ],
+ "dismissible": true,
+ "require_accept": true,
+ "dismiss_duration_days": 30,
+ "accept_button_text": "I Accept",
+ "dismiss_button_text": "Close",
+ "show_on_every_visit": false
+}
+```
+
+## Configuration Fields
+
+* **`enabled`**: (boolean) Whether the splash screen is shown. Both this field and `FEATURE_SPLASH_SCREEN_ENABLED=true` in `.env` must be set for the splash screen to appear.
+* **`title`**: (string) The title displayed at the top of the splash screen modal.
+* **`messages`**: (array) A list of message objects. Each message has a `type` (`"heading"` or `"text"`) and `content` (string).
+* **`dismissible`**: (boolean) Whether users can dismiss the splash screen.
+* **`require_accept`**: (boolean) If `true`, users must click the accept button to proceed. If `false`, users can dismiss the screen without accepting.
+* **`dismiss_duration_days`**: (number) Number of days before showing the splash screen again after dismissal.
+* **`accept_button_text`**: (string) Text for the accept button (shown when `require_accept` is `true`).
+* **`dismiss_button_text`**: (string) Text for the dismiss button (shown when `require_accept` is `false`).
+* **`show_on_every_visit`**: (boolean) If `true`, the splash screen will show every time, ignoring dismissal tracking.
diff --git a/docs/admin/tool-approval.md b/docs/admin/tool-approval.md
new file mode 100644
index 0000000..d9d48e1
--- /dev/null
+++ b/docs/admin/tool-approval.md
@@ -0,0 +1,34 @@
+# Tool Approval System
+
+The tool approval system provides a safety layer by requiring user confirmation before a tool is executed. This gives administrators and users fine-grained control over tool usage.
+
+## Admin-Forced Approvals
+
+As an administrator, you can mandate that certain high-risk functions always require user approval.
+
+* **Configuration**: In your `config/overrides/mcp.json` file, you can add a `require_approval` list to a server's definition.
+* **Behavior**: Any function listed here will always prompt the user for approval, and the user cannot disable this check.
+
+**Example:**
+```json
+{
+ "filesystem_tool": {
+ "groups": ["admin"],
+ "require_approval": ["delete_file", "overwrite_file"]
+ }
+}
+```
+
+## Global Approval Requirement
+
+You can enforce that **all** tools require user approval by setting the following in your `.env` file:
+
+```
+FORCE_TOOL_APPROVAL_GLOBALLY=true
+```
+
+This setting overrides all other user preferences and is a simple way to enforce maximum safety.
+
+## User-Controlled Auto-Approval
+
+For tools that are not mandated to require approval by an admin, users can choose to "auto-approve" them to streamline their workflow. This option is available in the user settings panel.
diff --git a/docs/developer/README.md b/docs/developer/README.md
new file mode 100644
index 0000000..d594ed1
--- /dev/null
+++ b/docs/developer/README.md
@@ -0,0 +1,17 @@
+# Developer's Guide
+
+This guide provides technical details for developers contributing to the Atlas UI 3 project.
+
+## Topics
+
+### Getting Started
+- [Architecture Overview](architecture.md) - System architecture and design patterns
+- [Development Conventions](conventions.md) - Coding standards and best practices
+
+### Building MCP Servers
+- [Creating MCP Servers](creating-mcp-servers.md) - How to build tool servers
+- [Working with Files](working-with-files.md) - File access patterns for tools
+- [Progress Updates](progress-updates.md) - Sending intermediate results to users
+
+### Frontend Development
+- [Custom Canvas Renderers](canvas-renderers.md) - Adding support for new file types
diff --git a/docs/developer/architecture.md b/docs/developer/architecture.md
new file mode 100644
index 0000000..0f52d99
--- /dev/null
+++ b/docs/developer/architecture.md
@@ -0,0 +1,22 @@
+# Architecture Overview
+
+The application is composed of a React frontend and a FastAPI backend, communicating via WebSockets.
+
+## Backend
+
+The backend follows a clean architecture pattern, separating concerns into distinct layers:
+
+* **`domain`**: Contains the core business logic and data models, with no dependencies on frameworks or external services.
+* **`application`**: Orchestrates the business logic from the domain layer to perform application-specific use cases.
+* **`infrastructure`**: Handles communication with external systems like databases, web APIs, and the file system. It's where adapters for external services are implemented.
+* **`interfaces`**: Defines the contracts (protocols) that the different layers use to communicate, promoting loose coupling.
+* **`routes`**: Defines the HTTP API endpoints.
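A toy illustration of how these layers relate, with the `application` layer depending only on a contract from `interfaces` while `infrastructure` supplies the adapter (all names here are hypothetical; the real codebase differs):

```python
from typing import Protocol


# interfaces: the contract the application layer depends on
class FileStore(Protocol):
    def save(self, name: str, data: bytes) -> str: ...


# infrastructure: an adapter implementing the contract
class InMemoryFileStore:
    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def save(self, name: str, data: bytes) -> str:
        self._files[name] = data
        return f"mem://{name}"


# application: a use case that sees only the protocol, never the adapter
def upload_file(store: FileStore, name: str, data: bytes) -> str:
    return store.save(name, data)
```

Swapping the in-memory adapter for an S3-backed one requires no change to the use case, which is the loose coupling the `interfaces` layer exists to provide.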
+
+## Frontend
+
+The frontend is a modern React 19 application built with Vite.
+
+* **State Management**: Uses React's Context API for managing global state. There is no Redux.
+ * `ChatContext`: Manages the state of the chat, including messages and user selections.
+ * `WSContext`: Manages the WebSocket connection.
+* **Styling**: Uses Tailwind CSS for utility-first styling.
diff --git a/docs/developer/canvas-renderers.md b/docs/developer/canvas-renderers.md
new file mode 100644
index 0000000..842b69e
--- /dev/null
+++ b/docs/developer/canvas-renderers.md
@@ -0,0 +1,56 @@
+# Adding Custom Canvas Renderers
+
+The canvas panel displays tool-generated files (PDFs, images, HTML). To add support for new file types (e.g., `.stl`, `.obj`, `.ipynb`):
+
+## Canvas Architecture Flow
+
+1. Backend tool returns artifacts → stored in S3 → sends `canvas_files` WebSocket message
+2. Frontend receives file metadata (filename, s3_key, type)
+3. Frontend fetches file content from `/api/files/download/{s3_key}`
+4. `CanvasPanel` renders based on file type
+
+## Steps to Add a New Type
+
+**1. Extend type detection** in `frontend/src/hooks/chat/useFiles.js`:
+```javascript
+function getFileType(filename) {
+ const extension = filename.toLowerCase().split('.').pop()
+ if (['stl', 'obj', 'gltf'].includes(extension)) return '3d-model'
+ // ... existing types
+}
+```
+
+**2. Install any required viewer libraries:**
+```bash
+cd frontend
+npm install three @react-three/fiber @react-three/drei
+```
+
+**3. Add rendering case** in `frontend/src/components/CanvasPanel.jsx` (around line 211):
+```javascript
+case '3d-model':
+  return (
+    // Component name and props are illustrative; see step 4, where the
+    // viewer component is created.
+    <STLViewer file={file} />
+  );
+```
+
+**4. Create the viewer component** (e.g., `frontend/src/components/STLViewer.jsx`).
+
+## Backend Considerations
+
+No backend changes needed. Tools just return artifacts with proper filenames:
+
+```python
+return {
+ "results": {"summary": "Generated 3D model"},
+ "artifacts": [{
+ "name": "model.stl",
+ "b64": base64.b64encode(stl_bytes).decode(),
+ "mime": "model/stl"
+ }]
+}
+```
+
+The `ChatService` automatically processes artifacts, uploads to S3, and sends canvas notifications.
diff --git a/docs/developer/conventions.md b/docs/developer/conventions.md
new file mode 100644
index 0000000..ddd9067
--- /dev/null
+++ b/docs/developer/conventions.md
@@ -0,0 +1,11 @@
+# Development Conventions
+
+To ensure code quality and consistency, please adhere to the following conventions.
+
+* **Python Package Manager**: **Always** use `uv`. Do not use `pip` or `conda` directly for package management.
+* **Frontend Development**: **Never** use `npm run dev`. It has known WebSocket connectivity issues. Always use `npm run build` to create a production build that the backend will serve.
+* **Backend Development**: **Never** use `uvicorn --reload`. This can cause unexpected issues. Restart the server manually (`python main.py`) to apply changes.
+* **File Naming**: Avoid generic names like `utils.py` or `helpers.py`. Use descriptive names that clearly indicate the file's purpose (e.g., `mcp_tool_manager.py`).
+* **Linting**: Before committing, run the linters to check for style issues:
+ * **Python**: `ruff check backend/`
+ * **Frontend**: `cd frontend && npm run lint`
diff --git a/docs/developer/creating-mcp-servers.md b/docs/developer/creating-mcp-servers.md
new file mode 100644
index 0000000..a0643b2
--- /dev/null
+++ b/docs/developer/creating-mcp-servers.md
@@ -0,0 +1,99 @@
+# Creating an MCP Server
+
+MCP servers are the backbone of the tool system. They are independent processes that expose functions (tools) that the LLM can call.
+
+## 1. Server Structure
+
+A new MCP server is a simple Python script using the `FastMCP` library.
+
+1. Create a new directory for your server inside `backend/mcp/`.
+2. Inside that directory, create a `main.py` file.
+
+**Example `main.py`:**
+```python
+from fastmcp import FastMCP
+from typing import Dict, Any
+
+mcp = FastMCP(name="MyExampleServer")
+
+@mcp.tool
+def my_tool(parameter: str) -> Dict[str, Any]:
+ """
+ A simple tool that takes a string and returns a dictionary.
+ The docstring is used as the tool's description for the LLM.
+ """
+ # The dictionary you return is the 'results' object.
+ return {
+ "input_received": parameter,
+ "status": "success"
+ }
+
+if __name__ == "__main__":
+ mcp.run()
+```
+
+## 2. The MCP v2 Tool Contract
+
+When a tool is executed, its return value must adhere to a specific JSON structure, known as the "v2 contract." This allows the UI to correctly process and display the results.
+
+The returned JSON object has the following structure:
+
+* **`results`** (required): A JSON object containing the primary, human-readable result of the tool. This should be concise.
+* **`artifacts`** (optional): A list of files generated by the tool. This is the preferred way to return files or large data blobs. Each artifact is an object with:
+ * `name`: The filename (e.g., `report.html`).
+ * `b64`: The base64-encoded content of the file.
+ * `mime`: The MIME type (e.g., `text/html`, `image/png`).
+* **`display`** (optional): A JSON object that provides hints to the UI on how to display the artifacts, such as whether to open the canvas automatically.
+
+**Example tool returning an artifact:**
+```python
+import base64
+
+@mcp.tool
+def create_html_report(title: str) -> Dict[str, Any]:
+ """Generates and returns an HTML report as an artifact."""
+    html_content = f"<h1>{title}</h1>"
+ b64_content = base64.b64encode(html_content.encode()).decode()
+
+ return {
+ "results": {"summary": f"Successfully generated report: {title}"},
+ "artifacts": [{
+ "name": "report.html",
+ "b64": b64_content,
+ "mime": "text/html"
+ }],
+ "display": {
+ "open_canvas": True,
+ "primary_file": "report.html"
+ }
+ }
+```
+
+## 3. Registering the Server
+
+After creating your server, you must register it in `config/overrides/mcp.json`.
+
+```json
+{
+ "MyExampleServer": {
+ "command": ["python", "mcp/MyExampleServer/main.py"],
+ "cwd": "backend",
+ "groups": ["users"],
+ "description": "An example server with a simple tool."
+ }
+}
+```
+
+* **`command`**: The command to start the server.
+* **`cwd`**: The working directory from which to run the command.
+* **`groups`**: A list of user groups that can access this server's tools.
+* **`description`**: A description of the server shown in the UI.
+* **`auth_token`**: (optional) For HTTP/SSE servers, the bearer token for authentication. Use environment variable substitution like `"${MCP_TOKEN}"` for security.
+
+## 4. Argument Injection
+
+To enhance security and simplify tool development, the backend can automatically inject certain arguments into your tool calls.
+
+* **`username` Injection (Security Feature)**: If your tool's function signature includes a `username: str` parameter, the backend will **always overwrite** any value for it with the authenticated user's identity. This is a critical security feature to ensure a tool always runs with the correct user context.
+
+* **File URL Injection**: If your tool accepts a `filename: str` or `file_names: List[str]` parameter, the backend will automatically convert any session files passed by the LLM into secure, temporary URLs. Your tool can then fetch the file content from these URLs. See the [Working with Files](working-with-files.md) guide for more details.
diff --git a/docs/developer/progress-updates.md b/docs/developer/progress-updates.md
new file mode 100644
index 0000000..e7f1789
--- /dev/null
+++ b/docs/developer/progress-updates.md
@@ -0,0 +1,194 @@
+# Progress Updates and Intermediate Results
+
+Long-running MCP tools can send intermediate updates to the frontend during execution, giving users real-time feedback. This includes:
+
+- **Canvas Updates**: Display HTML visualizations, plots, or images in the canvas panel as the tool progresses
+- **System Messages**: Add rich, formatted messages to the chat history to show what's happening at each stage
+- **Progressive Artifacts**: Send file artifacts as they're generated, rather than only at the end
+
+## Basic Progress Reporting
+
+FastMCP provides a `Context` object that tools can use to report progress:
+
+```python
+import asyncio
+
+from fastmcp import FastMCP, Context
+
+mcp = FastMCP("MyServer")
+
+@mcp.tool
+async def long_task(
+ steps: int = 5,
+ ctx: Context | None = None
+) -> dict:
+ """A tool that reports progress."""
+
+ for i in range(steps):
+ # Standard progress reporting
+ if ctx:
+ await ctx.report_progress(
+ progress=i,
+ total=steps,
+ message=f"Processing step {i+1} of {steps}"
+ )
+
+ # Do work...
+ await asyncio.sleep(1)
+
+ return {"results": {"status": "completed", "steps": steps}}
+```
+
+This shows a progress bar in the UI with percentage and message updates.
+
+## Enhanced Progress Updates
+
+To send richer updates (canvas content, system messages, or artifacts), encode structured data in the progress message with the `MCP_UPDATE:` prefix:
+
+### 1. Canvas Updates
+
+Display HTML content in the canvas panel during execution:
+
+```python
+import json
+
+@mcp.tool
+async def task_with_visualization(
+ steps: int = 5,
+ ctx: Context | None = None
+) -> dict:
+ """Shows visual progress in the canvas."""
+
+ for step in range(1, steps + 1):
+ # Create HTML visualization
+        html_content = f"""
+        <div style="font-family: sans-serif; padding: 20px;">
+          <h2>Processing Step {step}/{steps}</h2>
+          <progress value="{step}" max="{steps}"></progress>
+        </div>
+        """
+
+ # Send canvas update
+ if ctx:
+ update_payload = {
+ "type": "canvas_update",
+ "content": html_content,
+ "progress_message": f"Step {step}/{steps}"
+ }
+ await ctx.report_progress(
+ progress=step,
+ total=steps,
+ message=f"MCP_UPDATE:{json.dumps(update_payload)}"
+ )
+
+ return {"results": {"status": "completed"}}
+```
+
+### 2. System Messages
+
+Add informative messages to the chat history:
+
+```python
+@mcp.tool
+async def task_with_status_updates(
+ stages: list[str],
+ ctx: Context | None = None
+) -> dict:
+ """Reports status updates as chat messages."""
+
+ for i, stage in enumerate(stages, 1):
+ # Do work for this stage...
+ await process_stage(stage)
+
+ # Send system message
+ if ctx:
+ update_payload = {
+ "type": "system_message",
+ "message": f"**{stage}** completed successfully",
+ "subtype": "success", # or "info", "warning", "error"
+ "progress_message": f"Completed {stage}"
+ }
+ await ctx.report_progress(
+ progress=i,
+ total=len(stages),
+ message=f"MCP_UPDATE:{json.dumps(update_payload)}"
+ )
+
+ return {"results": {"status": "completed", "stages": len(stages)}}
+```
+
+### 3. Progressive Artifacts
+
+Send file artifacts as they're generated:
+
+```python
+import base64
+
+@mcp.tool
+async def task_with_intermediate_files(
+ files_to_generate: int = 3,
+ ctx: Context | None = None
+) -> dict:
+ """Generates and displays files progressively."""
+
+ for file_num in range(1, files_to_generate + 1):
+ # Generate file content
+        html_content = f"<h1>Result {file_num}</h1>"
+
+ # Send artifact
+ if ctx:
+ artifact_data = {
+ "type": "artifacts",
+ "artifacts": [
+ {
+ "name": f"result_{file_num}.html",
+ "b64": base64.b64encode(html_content.encode()).decode(),
+ "mime": "text/html",
+ "size": len(html_content),
+ "description": f"Intermediate result {file_num}",
+ "viewer": "html"
+ }
+ ],
+ "display": {
+ "open_canvas": True,
+ "primary_file": f"result_{file_num}.html",
+ "mode": "replace"
+ },
+ "progress_message": f"Generated file {file_num}"
+ }
+ await ctx.report_progress(
+ progress=file_num,
+ total=files_to_generate,
+ message=f"MCP_UPDATE:{json.dumps(artifact_data)}"
+ )
+
+ return {"results": {"files_generated": files_to_generate}}
+```
+
+## Update Types Reference
+
+| Type | Fields | Description |
+|------|--------|-------------|
+| `canvas_update` | `content` (HTML string), `progress_message` (optional) | Displays HTML content in the canvas panel |
+| `system_message` | `message` (string), `subtype` (info/success/warning/error), `progress_message` (optional) | Adds a formatted message to chat history |
+| `artifacts` | `artifacts` (list), `display` (object), `progress_message` (optional) | Sends file artifacts with display hints |
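On the receiving side, the backend presumably distinguishes enhanced updates from plain progress messages by the prefix. A minimal parsing sketch (not the actual implementation):

```python
import json

PREFIX = "MCP_UPDATE:"


def parse_progress_message(message: str):
    """Return (update_dict, None) for an enhanced update,
    or (None, message) for a plain progress message."""
    if message.startswith(PREFIX):
        return json.loads(message[len(PREFIX):]), None
    return None, message
```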
+
+## Example: Complete Demo Server
+
+See `/backend/mcp/progress_updates_demo/` for a complete working example with three tools demonstrating all update types. To try it:
+
+1. Add the server to your `config/overrides/mcp.json`:
+```json
+{
+ "progress_updates_demo": {
+ "command": ["python", "mcp/progress_updates_demo/main.py"],
+ "cwd": "backend",
+ "groups": ["users"],
+ "description": "Demo server showing enhanced progress updates"
+ }
+}
+```
+
+2. Restart the backend and ask: "Show me a task with canvas updates"
diff --git a/docs/developer/working-with-files.md b/docs/developer/working-with-files.md
new file mode 100644
index 0000000..6288bf1
--- /dev/null
+++ b/docs/developer/working-with-files.md
@@ -0,0 +1,50 @@
+# Working with Files (S3 Storage)
+
+When a user uploads a file, it is stored in an S3-compatible object store. As a tool developer, you do not need to interact with S3 directly. The backend provides a secure mechanism for your tools to access file content.
+
+## The File Access Workflow
+
+1. **Define Your Tool**: Create a tool that accepts a `filename` (or `file_names`) argument.
+ ```python
+ @mcp.tool
+ def process_file(filename: str) -> Dict[str, Any]:
+ # ...
+ ```
+2. **Receive a Secure URL**: When the LLM calls your tool, the backend intercepts the call. It replaces the simple filename (e.g., `my_document.pdf`) with a full, temporary URL that points back to the Atlas UI API (e.g., `http://localhost:8000/api/files/download/...`). This URL contains a short-lived security token.
+3. **Fetch the File Content**: Your tool should then make a standard HTTP `GET` request to this URL to download the file's content.
+
+## Example Tool for File Processing
+
+```python
+import httpx
+from fastmcp import FastMCP
+from typing import Dict, Any
+
+mcp = FastMCP(name="FileProcessor")
+
+@mcp.tool
+def get_file_size(filename: str) -> Dict[str, Any]:
+ """
+ Accepts a file URL, downloads the file, and returns its size.
+ """
+ try:
+ with httpx.stream("GET", filename, timeout=30) as response:
+ response.raise_for_status()
+ content = response.read()
+ file_size = len(content)
+
+ return {
+ "results": {
+ "file_size_bytes": file_size
+ }
+ }
+ except httpx.HTTPStatusError as e:
+ return {"error": f"HTTP error fetching file: {e.response.status_code}"}
+ except Exception as e:
+ return {"error": f"Failed to process file: {str(e)}"}
+
+if __name__ == "__main__":
+ mcp.run()
+```
+
+This architecture ensures that your tool does not need to handle any S3 credentials, making the system more secure and easier to develop for.
diff --git a/docs/getting-started/README.md b/docs/getting-started/README.md
new file mode 100644
index 0000000..a70b997
--- /dev/null
+++ b/docs/getting-started/README.md
@@ -0,0 +1,14 @@
+# Getting Started with Atlas UI 3
+
+Welcome to Atlas UI 3! This guide will help you get the application up and running.
+
+## Quick Links
+
+- [Installation Guide](installation.md) - How to install and run Atlas UI 3
+
+## Next Steps
+
+Once you have the application running:
+
+- **For Administrators**: See the [Administrator's Guide](../admin/README.md) to learn about configuration, security, and deployment
+- **For Developers**: See the [Developer's Guide](../developer/README.md) to learn about the architecture and how to extend the system
diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md
new file mode 100644
index 0000000..4a678fa
--- /dev/null
+++ b/docs/getting-started/installation.md
@@ -0,0 +1,135 @@
+# Installation
+
+This guide provides everything you need to get Atlas UI 3 running, whether you prefer using Docker for a quick setup or setting up a local development environment.
+
+## Quick Start with Docker (Recommended)
+
+Using Docker is the fastest way to get the application running.
+
+1. **Build the Docker Image:**
+ From the root of the project, run the build command:
+ ```bash
+ docker build -t atlas-ui-3 .
+ ```
+
+2. **Run the Container:**
+ Once the image is built, start the container:
+ ```bash
+ docker run -p 8000:8000 atlas-ui-3
+ ```
+
+3. **Access the Application:**
+ Open your web browser and navigate to [http://localhost:8000](http://localhost:8000).
+
+## Local Development Setup
+
+For those who want to contribute to the code or run the application natively, follow these steps.
+
+### Prerequisites
+
+* **Python 3.12+**
+* **Node.js 18+** and npm
+* **uv**: This project uses `uv` as the Python package manager. It's required.
+
+### 1. Install `uv`
+
+If you don't have `uv` installed, open your terminal and run the command for your platform:
+
+```bash
+# Install uv on macOS, Linux, or WSL
+curl -LsSf https://astral.sh/uv/install.sh | sh
+
+# On Windows (PowerShell):
+powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
+
+# Verify the installation
+uv --version
+```
+
+### 2. Set Up the Environment
+
+From the project's root directory, set up the Python virtual environment and install the required packages.
+
+```bash
+# Create the virtual environment
+uv venv
+
+# Activate the environment
+# On macOS, Linux, or WSL:
+source .venv/bin/activate
+# On Windows:
+.venv\Scripts\activate
+
+# Install Python dependencies
+uv pip install -r requirements.txt
+```
+
+### 3. Configure Your Environment
+
+Copy the example `.env` file to create your local configuration.
+
+```bash
+cp .env.example .env
+```
+
+Now, open the `.env` file and add your API keys for the LLM providers you intend to use (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`).
+
+**Important Configuration Notes:**
+* **`APP_LOG_DIR`**: It is essential to set `APP_LOG_DIR=/workspaces/atlas-ui-3/logs` (or another appropriate path) to ensure application logs are correctly stored.
+* **`USE_MOCK_S3`**: For local development and personal use, setting `USE_MOCK_S3=true` is acceptable. However, **this must never be used in a production environment** due to security and data durability concerns.
+
+### 4. All-in-One Start Script (Recommended)
+
+For convenience, you can use the `agent_start.sh` script, which automates the process of building the frontend and starting the backend. This is the recommended way to run the application for local development.
+
+```bash
+bash agent_start.sh
+```
+
+#### Starting with MCP Mock Server
+
+If you want to test MCP functionality during development, you can start the MCP mock server alongside the main application:
+
+```bash
+# Start both the main application and MCP mock server
+bash agent_start.sh -m
+
+# Other options
+bash agent_start.sh -f # Only rebuild frontend
+bash agent_start.sh -b # Only start backend
+```
+
+The MCP mock server will be available at `http://127.0.0.1:8005/mcp` and provides simulated database tools for testing.
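+
+Once the mock server is up, you can sanity-check that the endpoint is reachable (a sketch; the exact response body depends on the MCP transport, so only the HTTP status is checked here):
+
+```bash
+# Expect a non-error HTTP status code if the mock server is listening
+curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8005/mcp
+```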
+
+After running the script, the application will be available at `http://localhost:8000`.
+
+### Manual Setup
+
+If you prefer to run the frontend and backend processes separately, follow these steps.
+
+#### 5. Build the Frontend
+
+The frontend is a React application that needs to be built before the backend can serve it.
+
+```bash
+cd frontend
+npm install
+npm run build
+```
+
+**Important:** Always use `npm run build`. Do not use `npm run dev`, as it has known issues with WebSocket connections in this project.
+
+#### 6. Start the Backend
+
+Finally, start the FastAPI backend server.
+
+```bash
+cd backend
+python main.py
+```
+
+The backend will be available at `http://localhost:8000`.
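+
+As a quick smoke test, you can confirm the backend is serving before opening a browser (this assumes the server is running on the default port shown above):
+
+```bash
+# Prints the HTTP status code for the root page; 200 indicates the built frontend is being served
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
+```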
+
+## Next Steps
+
+With the application running, you can now explore its features. For more detailed information on configuration and administration, refer to the [Administrator's Guide](../admin/README.md). If you plan to contribute, the [Developer's Guide](../developer/README.md) provides in-depth architectural details.
From 71c0b2c377f8911cbf55e21c3b4c7832dce4d1a4 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Sat, 22 Nov 2025 03:42:00 +0000
Subject: [PATCH 3/3] Archive old documentation files and add archive README
- Moved old doc files to docs/archive/
- Created archive/README.md explaining the reorganization
- All new documentation is properly organized and linked
- Total content preserved and enhanced with navigation
Co-authored-by: garland3 <1162675+garland3@users.noreply.github.com>
---
docs/{ => archive}/01_getting_started.md | 0
docs/{ => archive}/02_admin_guide.md | 0
docs/{ => archive}/03_developer_guide.md | 0
docs/archive/README.md | 21 +++++++++++++++++++++
4 files changed, 21 insertions(+)
rename docs/{ => archive}/01_getting_started.md (100%)
rename docs/{ => archive}/02_admin_guide.md (100%)
rename docs/{ => archive}/03_developer_guide.md (100%)
create mode 100644 docs/archive/README.md
diff --git a/docs/01_getting_started.md b/docs/archive/01_getting_started.md
similarity index 100%
rename from docs/01_getting_started.md
rename to docs/archive/01_getting_started.md
diff --git a/docs/02_admin_guide.md b/docs/archive/02_admin_guide.md
similarity index 100%
rename from docs/02_admin_guide.md
rename to docs/archive/02_admin_guide.md
diff --git a/docs/03_developer_guide.md b/docs/archive/03_developer_guide.md
similarity index 100%
rename from docs/03_developer_guide.md
rename to docs/archive/03_developer_guide.md
diff --git a/docs/archive/README.md b/docs/archive/README.md
new file mode 100644
index 0000000..bf643e3
--- /dev/null
+++ b/docs/archive/README.md
@@ -0,0 +1,21 @@
+# Archived Documentation
+
+This directory contains archived versions of documentation files that have been superseded by newer, more organized versions.
+
+## November 2025 - Documentation Reorganization
+
+The following files were split into topic-based guides for better organization and maintainability:
+
+- **01_getting_started.md** → See [../getting-started/](../getting-started/)
+- **02_admin_guide.md** → See [../admin/](../admin/)
+- **03_developer_guide.md** → See [../developer/](../developer/)
+
+The new structure organizes documentation by topic, making it easier to find specific information and maintain the docs as the project evolves.
+
+## Current Documentation
+
+For current documentation, please refer to:
+
+- [Getting Started Guide](../getting-started/README.md)
+- [Administrator's Guide](../admin/README.md)
+- [Developer's Guide](../developer/README.md)