1 change: 1 addition & 0 deletions config/gni/devtools_grd_files.gni
@@ -655,6 +655,7 @@ grd_files_bundled_sources = [
"front_end/panels/ai_chat/LLM/LLMClient.js",
"front_end/panels/ai_chat/LLM/MessageSanitizer.js",
"front_end/panels/ai_chat/tools/Tools.js",
"front_end/panels/ai_chat/tools/LLMTracingWrapper.js",
"front_end/panels/ai_chat/tools/SequentialThinkingTool.js",
"front_end/panels/ai_chat/tools/CombinedExtractionTool.js",
"front_end/panels/ai_chat/tools/CritiqueTool.js",
15 changes: 12 additions & 3 deletions docker/Dockerfile
@@ -42,12 +42,21 @@ RUN /workspace/depot_tools/ensure_bootstrap
# Build standard DevTools first
RUN npm run build

# Add Browser Operator fork and switch to it
RUN git remote add upstream https://github.com/BrowserOperator/browser-operator-core.git
RUN git fetch upstream
RUN git checkout upstream/main

# Build Browser Operator version
# Allow configurable automated mode
ARG AUTOMATED_MODE=false

# Set build-time flags based on Docker arg
RUN if [ "$AUTOMATED_MODE" = "true" ]; then \
sed -i 's/AUTOMATED_MODE: false/AUTOMATED_MODE: true/' \
front_end/panels/ai_chat/core/BuildConfig.ts; \
fi
Comment on lines +51 to +57
🛠️ Refactor suggestion

Doc mismatch: the AUTOMATED_MODE ARG defaults to false, but the README claims automated mode is the default.

Either set the ARG default to true here or update README commands to pass --build-arg AUTOMATED_MODE=true.

-ARG AUTOMATED_MODE=false
+ARG AUTOMATED_MODE=true


# Build Browser Operator version with current changes
RUN npm run build

# Production stage
@@ -59,4 +68,4 @@ COPY --from=builder /workspace/devtools/devtools-frontend/out/Default/gen/front_
# Copy nginx config
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 8000
10 changes: 10 additions & 0 deletions docker/Dockerfile.local
@@ -0,0 +1,10 @@
# Simple Dockerfile that uses pre-built local files
FROM nginx:alpine

# Copy the pre-built DevTools frontend from host
COPY out/Default/gen/front_end /usr/share/nginx/html

# Copy nginx config
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 8000
37 changes: 33 additions & 4 deletions docker/README.md
@@ -22,8 +22,11 @@ The Docker setup uses a multi-stage build process:
From the repository root directory:

```bash
# Build the Docker image
docker build -f docker/Dockerfile -t devtools-frontend .
# Build with automated mode (default - bypasses OAuth, auto-enables evaluation)
docker build -f docker/Dockerfile -t browser-operator-automated .

# Build with normal mode (requires manual authentication)
docker build -f docker/Dockerfile --build-arg AUTOMATED_MODE=false -t browser-operator-manual .

# Or use docker-compose (recommended)
Comment on lines +25 to 31
⚠️ Potential issue

The README's claim that automated mode is the default conflicts with the Dockerfile's ARG default.

Either pass --build-arg AUTOMATED_MODE=true here or change Dockerfile ARG default to true.

-# Build with automated mode (default - bypasses OAuth, auto-enables evaluation)
-docker build -f docker/Dockerfile -t browser-operator-automated .
+# Build with automated mode
+docker build -f docker/Dockerfile --build-arg AUTOMATED_MODE=true -t browser-operator-automated .

docker-compose -f docker/docker-compose.yml build
@@ -32,8 +35,11 @@ docker-compose -f docker/docker-compose.yml build
### Running the Container

```bash
# Using docker run
docker run -d -p 8000:8000 --name devtools-frontend devtools-frontend
# Automated mode (no authentication required, evaluation auto-enabled)
docker run -d -p 8000:8000 --name browser-operator-automated browser-operator-automated

# Manual mode (requires OAuth/API key setup)
docker run -d -p 8000:8000 --name browser-operator-manual browser-operator-manual

# Or using docker-compose (recommended)
docker-compose -f docker/docker-compose.yml up -d
@@ -67,6 +73,29 @@ docker/
└── README.md # This file
```

## Automated Mode vs Manual Mode

### Automated Mode (Default)
- **Purpose**: Optimized for Docker/CI environments and automated workflows
- **Authentication**: Bypasses OAuth panel - no manual setup required
- **Evaluation**: Automatically enables evaluation mode for API wrapper connectivity
- **Use cases**: Production deployments, CI/CD, headless automation, API integration

### Manual Mode
- **Purpose**: Standard interactive usage
- **Authentication**: Requires OAuth setup or API key configuration
- **Evaluation**: Manual enable/disable in settings
- **Use cases**: Development, interactive testing, manual usage

```bash
# Example automated mode workflow
docker build -f docker/Dockerfile -t browser-operator-automated .
docker run -d -p 8000:8000 --name browser-operator browser-operator-automated

# Ready to use immediately - no authentication required!
# Evaluation server can connect automatically via WebSocket (ws://localhost:8080)
```
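The same build arg can be wired through docker-compose. A minimal sketch of the relevant fragment (service name, context path, and port mapping are illustrative assumptions; adapt them to the existing `docker/docker-compose.yml`):

```yaml
# Hypothetical docker-compose.yml fragment; names are illustrative.
services:
  devtools:
    build:
      context: ..                  # repo root, assuming compose file lives in docker/
      dockerfile: docker/Dockerfile
      args:
        AUTOMATED_MODE: "true"    # set "false" for manual mode
    ports:
      - "8000:8000"
```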

## Advanced Usage

### Development Mode
56 changes: 56 additions & 0 deletions eval-server/README.md
@@ -15,6 +15,7 @@ Both implementations provide:
- 📚 **Programmatic API** - Create and manage evaluations in code
- ⚡ **Concurrent Support** - Handle multiple agents simultaneously
- 📊 **Structured Logging** - Comprehensive evaluation tracking
- 🌐 **OpenAI-Compatible API** - Standard REST endpoints for seamless integration

## Quick Start

@@ -56,6 +57,60 @@ python examples/basic_server.py

See [`python/README.md`](python/README.md) for detailed usage.

## OpenAI-Compatible API

Both implementations now include OpenAI-compatible HTTP endpoints that provide seamless integration with existing OpenAI clients and tools.

### Architecture

```
┌─────────────────┐ HTTP ┌──────────────────┐ WebSocket ┌─────────────────┐
│ OpenAI Client │ ──────────→ │ OpenAI HTTP │ ──────────────→ │ WebSocket │
│ (External) │ │ Wrapper │ │ Eval Server │
└─────────────────┘ └──────────────────┘ └─────────────────┘
│ RPC
┌─────────────────────────────────────┐
│ Connected DevTools Tabs │
│ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Tab 1│ │Tab 2│ │Tab N│ ... │
│ └─────┘ └─────┘ └─────┘ │
└─────────────────────────────────────┘
```

### Supported Endpoints

- **`GET /v1/models`** - List available models from connected DevTools tabs
- **`POST /v1/chat/completions`** - OpenAI-compatible chat completions
- **`GET /health`** - Health check and status

Comment on lines +84 to +87
🛠️ Refactor suggestion

Document all supported endpoints (missing /v1/responses).

Code implements POST /v1/responses but README omits it. Add it to the list.

 - **`GET /v1/models`** - List available models from connected DevTools tabs
 - **`POST /v1/chat/completions`** - OpenAI-compatible chat completions
+- **`POST /v1/responses`** - OpenAI Responses API compatibility
 - **`GET /health`** - Health check and status

### Usage Example

```bash
# List available models
curl http://localhost:8081/v1/models

# Send a chat completion request
curl -X POST http://localhost:8081/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'
```
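A successful completion follows the standard OpenAI chat-completion shape. A sketch of a typical response (the exact `id`, `created`, and `usage` fields depend on the wrapper implementation):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4.1",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! I'm doing well." },
      "finish_reason": "stop"
    }
  ]
}
```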

### Request Flow

1. **External client** sends OpenAI-compatible HTTP request
2. **HTTP wrapper** converts request to evaluation format
3. **WebSocket server** selects connected DevTools tab
4. **RPC call** sent to tab via existing JSON-RPC protocol
5. **Tab processes** request using Browser Operator's LLM system
6. **Response flows back** through WebSocket → HTTP → OpenAI format

## Architecture Comparison

| Feature | NodeJS | Python |
@@ -68,6 +123,7 @@ See [`python/README.md`](python/README.md) for detailed usage.
| **Structured Logging** | ✅ (Winston) | ✅ (Loguru) |
| **YAML Evaluations** | ✅ | ❌ |
| **HTTP API Wrapper** | ✅ | ❌ |
| **OpenAI-Compatible API** | ✅ | ✅ |
| **CLI Interface** | ✅ | ❌ |
| **LLM Judge System** | ✅ | ❌ |
| **Type System** | TypeScript | Type Hints |
143 changes: 143 additions & 0 deletions eval-server/nodejs/examples/openai-server-example.js
@@ -0,0 +1,143 @@
#!/usr/bin/env node

// Copyright 2025 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

/**
* Example demonstrating how to use the OpenAI-compatible API server.
*
* This example shows how to start both the WebSocket evaluation server
* and the OpenAI-compatible HTTP wrapper that multiplexes requests to
* connected DevTools tabs.
*/

import { EvalServer } from '../src/lib/EvalServer.js';
import { OpenAICompatibleWrapper } from '../src/lib/OpenAICompatibleWrapper.js';

console.log(`
🌟 Browser Operator Evaluation Server with OpenAI-Compatible API

This server provides:
- WebSocket server for DevTools tab connections (port 8080)
- OpenAI-compatible HTTP API (port 8081)

To use:
1. Start this server
2. Connect DevTools tabs via WebSocket to ws://127.0.0.1:8080
3. Send OpenAI-compatible requests to http://127.0.0.1:8081

Available endpoints:
- GET /v1/models - List available models
- POST /v1/chat/completions - Chat completions
- GET /health - Health check

Example usage:
curl http://127.0.0.1:8081/v1/models

curl -X POST http://127.0.0.1:8081/v1/chat/completions \\
-H "Content-Type: application/json" \\
-d '{
"model": "gpt-4.1",
"messages": [{"role": "user", "content": "Hello!"}]
}'

Press Ctrl+C to stop the server.
`);

async function main() {
console.log('🚀 Starting Browser Operator evaluation server with OpenAI-compatible API...');

// Create WebSocket evaluation server
const evalServer = new EvalServer({
authKey: 'hello',
host: '127.0.0.1',
port: 8080
});

// Set up client connection handlers
evalServer.onConnect((client) => {
console.log(`✅ DevTools tab connected: ${client.id}`);
console.log(` Tab ID: ${client.tabId}`);
console.log(` Base Client ID: ${client.baseClientId}`);
console.log(` Connected at: ${new Date().toISOString()}`);

// The client is now ready to receive evaluations via OpenAI API
});

evalServer.onDisconnect((clientInfo) => {
console.log(`❌ DevTools tab disconnected: ${clientInfo.clientId}`);
});

// Create OpenAI-compatible HTTP wrapper
const openaiWrapper = new OpenAICompatibleWrapper(evalServer, {
host: '127.0.0.1',
port: 8081,
modelCacheTTL: 300000 // 5 minutes
});

// Graceful shutdown handler
const shutdown = async () => {
console.log('\n🛑 Shutting down servers...');
try {
await openaiWrapper.stop();
await evalServer.stop();
console.log('✅ Servers stopped successfully');
process.exit(0);
} catch (error) {
console.error('❌ Error during shutdown:', error);
process.exit(1);
}
};

// Handle shutdown signals
process.on('SIGINT', shutdown);
process.on('SIGTERM', shutdown);

try {
// Start WebSocket server first
console.log('🔧 Starting WebSocket evaluation server on ws://127.0.0.1:8080');
await evalServer.start();

// Start OpenAI-compatible HTTP wrapper
console.log('🔧 Starting OpenAI-compatible API server on http://127.0.0.1:8081');
await openaiWrapper.start();

console.log('🎉 Both servers started successfully!');
console.log('');
console.log('📡 WebSocket Server: ws://127.0.0.1:8080 (for DevTools connections)');
console.log('🌐 OpenAI API Server: http://127.0.0.1:8081 (for HTTP requests)');
console.log('');
console.log('⏳ Waiting for DevTools tabs to connect...');

// Monitor server status periodically
const statusInterval = setInterval(() => {
const evalStatus = evalServer.getStatus();
const openaiStatus = openaiWrapper.getStatus();

console.log(`📊 Status - Connected clients: ${evalStatus.connectedClients}, Ready: ${evalStatus.readyClients}`);
console.log(`📊 OpenAI API: ${openaiStatus.isRunning ? 'running' : 'stopped'} on ${openaiStatus.url}`);
}, 30000); // Every 30 seconds

// Keep the process running
process.on('beforeExit', () => {
clearInterval(statusInterval);
});

} catch (error) {
console.error('❌ Failed to start servers:', error);
await shutdown();
}
}

// Handle unhandled promise rejections
process.on('unhandledRejection', (reason, promise) => {
console.error('Unhandled Rejection at:', promise, 'reason:', reason);
});

process.on('uncaughtException', (error) => {
console.error('Uncaught Exception:', error);
process.exit(1);
});

main().catch(console.error);