diff --git a/.changeset/nervous-windows-raise.md b/.changeset/nervous-windows-raise.md new file mode 100644 index 00000000..bfe4131b --- /dev/null +++ b/.changeset/nervous-windows-raise.md @@ -0,0 +1,7 @@ +--- +"@openai/agents-core": patch +"@openai/agents-realtime": patch +"@openai/agents": patch +--- + +Make docs and comments more consistent using Codex diff --git a/AGENTS.md b/AGENTS.md index 64d8c308..98e1c0b2 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -15,6 +15,7 @@ This guide helps new contributors get started with the OpenAI Agents JS monorepo 7. [Pull Request & Commit Guidelines](#pull-request--commit-guidelines) 8. [Review Process & What Reviewers Look For](#review-process--what-reviewers-look-for) 9. [Tips for Navigating the Repo](#tips-for-navigating-the-repo) +10. [Prerequisites](#prerequisites) ## Overview @@ -27,15 +28,15 @@ The OpenAI Agents JS repository is a pnpm-managed monorepo that provides: - `packages/agents-extensions`: Extensions for agent workflows. - `docs`: Documentation site powered by Astro. - `examples`: Sample projects demonstrating usage patterns. -- `scripts`: Automation scripts (`dev.ts`, `embedMeta.ts`). +- `scripts`: Automation scripts (`dev.mts`, `embedMeta.ts`). - `helpers`: Shared utilities for testing and other internal use. ## Repo Structure & Important Files - `packages/agents-core/`, `packages/agents-openai/`, `packages/agents-realtime/`, `packages/agents-extensions/`: Each has its own `package.json`, `src/`, `test/`, and build scripts. -- `docs/`: Documentation source; run with `pnpm docs:dev` or build with `pnpm -F docs build`. +- `docs/`: Documentation source; develop with `pnpm docs:dev` or build with `pnpm docs:build`. - `examples/`: Subdirectories (e.g. `basic`, `agent-patterns`) with their own `package.json` and start scripts. -- `scripts/dev.ts`: Runs concurrent build-watchers and the docs dev server (`pnpm dev`). +- `scripts/dev.mts`: Runs concurrent build-watchers and the docs dev server (`pnpm dev`). - `scripts/embedMeta.ts`: Generates `src/metadata.ts` for each package before build. - `helpers/tests/`: Shared test utilities. - `README.md`: High-level overview and installation instructions. @@ -50,7 +51,7 @@ The OpenAI Agents JS repository is a pnpm-managed monorepo that provides: Before submitting changes, ensure all checks pass: -### Unit Tests and Type Checking Examples +### Unit Tests and Type Checking - Check the compilation across all packages and examples: ```bash @@ -61,11 +62,19 @@ Before submitting changes, ensure all checks pass: CI=1 pnpm test ``` - Tests are located under each package in `packages//test/`. -- Using `CI=1` makes sure that the tests don't automatically run in watch mode +- The test script already sets `CI=1` to avoid watch mode. ### Integration Tests -- Do NOT try to run them. Integration tests currently require a valid OpenAI Account. +- Not required for typical contributions. These tests rely on a local npm registry (Verdaccio) and other environment setup. +- To run them locally if needed: + ```bash + pnpm local-npm:start # starts Verdaccio on :4873 + pnpm local-npm:publish # publishes packages to the local registry + pnpm test:integration # runs integration tests + ``` + +See [this README](integration-tests/README.md) for details.
### Code Coverage @@ -106,7 +115,7 @@ Before submitting changes, ensure all checks pass: - Documentation site: ```bash pnpm docs:dev - pnpm -F docs build + pnpm docs:build ``` - Examples: ```bash @@ -132,6 +141,11 @@ Before submitting changes, ensure all checks pass: - Run `pnpm lint` and fix all errors locally. - Use `pnpm build` to catch type errors. +## Prerequisites + +- Node.js 22+ recommended. +- pnpm 10+ (`corepack enable` is recommended to manage versions). + ## Development Workflow 1. Sync with `main` (or default branch). @@ -160,7 +174,8 @@ Before submitting changes, ensure all checks pass: - `build`: changes that affect the build system - `ci`: CI configuration - `style`: code style (formatting, missing semicolons, etc.) - - `TYP`: type-related changes + - `types`: type-related changes + - `revert`: reverts a previous commit - Commit message format: ``` @@ -191,4 +206,4 @@ Before submitting changes, ensure all checks pass: - Study `vitest.config.ts` for test patterns (e.g., setup files, aliasing). - Explore `scripts/embedMeta.ts` for metadata generation logic. - Examples in `examples/` are fully functional apps—run them to understand usage. -- Docs in `docs/src/` use Astro and Starlight; pages mirror package APIs under `docs/src/openai/agents`. +- Docs in `docs/src/` use Astro and Starlight; authored content lives under `docs/src/content/docs/` and mirrors package APIs. diff --git a/README.md b/README.md index 6ccd167f..b22c58c4 100644 --- a/README.md +++ b/README.md @@ -30,7 +30,7 @@ Explore the [`examples/`](examples/) directory to see the SDK in action. - [x] **Guardrails**: Input and output validation for safety and reliability. - [x] **Parallelization**: Run agents or tool calls in parallel and aggregate results. - [x] **Human-in-the-Loop**: Integrate human approval or intervention into workflows. -- [x] **Realtime Voice Agents**: Build realtime voice agents using WebRTC or Websockets +- [x] **Realtime Voice Agents**: Build realtime voice agents using WebRTC or WebSockets - [x] **Local MCP Server Support**: Give an Agent access to a locally running MCP server to provide tools - [x] **Separate optimized browser package**: Dedicated package meant to run in the browser for Realtime agents. 
- [x] **Broader model support**: Use non-OpenAI models through the Vercel AI SDK adapter @@ -166,11 +166,11 @@ const agent = new RealtimeAgent({ tools: [getWeatherTool], }); -// Intended to be run the browser -const { apiKey } = await fetch('/path/to/ephemerial/key/generation').then( +// Intended to run in the browser +const { apiKey } = await fetch('/path/to/ephemeral/key/generation').then( (resp) => resp.json(), ); -// automatically configures audio input/output so start talking +// Automatically configures audio input/output — start talking const session = new RealtimeSession(agent); await session.connect({ apiKey }); ``` @@ -181,8 +181,8 @@ The [`examples/`](examples/) directory contains a series of examples to get star - `pnpm examples:basic` - Basic example with handoffs and tool calling - `pnpm examples:agents-as-tools` - Using agents as tools for translation -- `pnpm examples:web-search` - Using the web search tool -- `pnpm examples:file-search` - Using the file search tool +- `pnpm examples:tools-web-search` - Using the web search tool +- `pnpm examples:tools-file-search` - Using the file search tool - `pnpm examples:deterministic` - Deterministic multi-agent workflow - `pnpm examples:parallelization` - Running agents in parallel and picking the best result - `pnpm examples:human-in-the-loop` - Human approval for certain tool calls @@ -244,10 +244,16 @@ If you want to contribute or edit the SDK/examples: 2. Build the project ```bash - pnpm build + pnpm build && pnpm -r build-check ``` -3. Run tests, linter, etc. (add commands as appropriate for your project) +3. Run tests and linter + + ```bash + pnpm test && pnpm lint + ``` + +See `AGENTS.md` and `CONTRIBUTING.md` for the full contributor guide. ## Acknowledgements diff --git a/docs/README.md b/docs/README.md index 56ed600b..815f48ad 100644 --- a/docs/README.md +++ b/docs/README.md @@ -23,5 +23,5 @@ pnpm docs:translate The docs are automatically built and deployed using GitHub Actions. To build them locally run: ```bash -pnpm -F docs build +pnpm docs:build ``` diff --git a/docs/src/content/docs/guides/agents.mdx b/docs/src/content/docs/guides/agents.mdx index 470d1e87..2cc73916 100644 --- a/docs/src/content/docs/guides/agents.mdx +++ b/docs/src/content/docs/guides/agents.mdx @@ -147,7 +147,7 @@ Supplying tools doesn’t guarantee the LLM will call one. You can **force** too After a tool call the SDK automatically resets `tool_choice` back to `'auto'`. This prevents the model from entering an infinite loop where it repeatedly tries to call the tool. You can -override this behaviour via the `resetToolChoice` flag or by configuring +override this behavior via the `resetToolChoice` flag or by configuring `toolUseBehavior`: - `'run_llm_again'` (default) – run the LLM again with the tool result. 
diff --git a/docs/src/content/docs/guides/config.mdx b/docs/src/content/docs/guides/config.mdx index ee85629b..a0e95c72 100644 --- a/docs/src/content/docs/guides/config.mdx +++ b/docs/src/content/docs/guides/config.mdx @@ -1,6 +1,6 @@ --- title: Configuring the SDK -description: Customize API keys, tracing and logging behaviour +description: Customize API keys, tracing and logging behavior --- import { Code } from '@astrojs/starlight/components'; diff --git a/docs/src/content/docs/guides/tools.mdx b/docs/src/content/docs/guides/tools.mdx index 2fd0d5d8..1a68b429 100644 --- a/docs/src/content/docs/guides/tools.mdx +++ b/docs/src/content/docs/guides/tools.mdx @@ -99,7 +99,7 @@ See [`filesystem-example.ts`](https://github.com/openai/openai-agents-js/tree/ma --- -## Tool use behaviour +## Tool use behavior Refer to the [Agents guide](/openai-agents-js/guides/agents#forcing-tool-use) for controlling when and how a model must use tools (`tool_choice`, `toolUseBehavior`, etc.). diff --git a/docs/src/content/docs/guides/voice-agents/quickstart.mdx b/docs/src/content/docs/guides/voice-agents/quickstart.mdx index 6a14f10a..ec5ec991 100644 --- a/docs/src/content/docs/guides/voice-agents/quickstart.mdx +++ b/docs/src/content/docs/guides/voice-agents/quickstart.mdx @@ -44,7 +44,7 @@ import thinClientExample from '../../../../../../examples/docs/voice-agents/thin 2. **Generate a client ephemeral token** - As this application will run in the users browser, we need a secure way to connect to the model through the Realtime API. For this we can use a [ephemeral client key](https://platform.openai.com/docs/guides/realtime#creating-an-ephemeral-token) that should get generated on your backend server. For testing purposes you can also generate a key using `curl` and your regular OpenAI API key. + As this application will run in the user's browser, we need a secure way to connect to the model through the Realtime API. For this we can use an [ephemeral client key](https://platform.openai.com/docs/guides/realtime#creating-an-ephemeral-token) that should be generated on your backend server. For testing purposes you can also generate a key using `curl` and your regular OpenAI API key. 
```bash curl -X POST https://api.openai.com/v1/realtime/sessions \ diff --git a/docs/src/content/docs/ja/guides/config.mdx b/docs/src/content/docs/ja/guides/config.mdx index b12dea9d..07e4a3a8 100644 --- a/docs/src/content/docs/ja/guides/config.mdx +++ b/docs/src/content/docs/ja/guides/config.mdx @@ -1,6 +1,6 @@ --- title: SDK の設定 -description: Customize API keys, tracing and logging behaviour +description: Customize API keys, tracing and logging behavior --- import { Code } from '@astrojs/starlight/components'; diff --git a/examples/docs/readme/readme-voice-agent.ts b/examples/docs/readme/readme-voice-agent.ts index da8d6e7e..07a8a6f6 100644 --- a/examples/docs/readme/readme-voice-agent.ts +++ b/examples/docs/readme/readme-voice-agent.ts @@ -17,11 +17,11 @@ const agent = new RealtimeAgent({ }); async function main() { - // Intended to be run the browser - const { apiKey } = await fetch('/path/to/ephemerial/key/generation').then( + // Intended to run in the browser + const { apiKey } = await fetch('/path/to/ephemeral/key/generation').then( (resp) => resp.json(), ); - // automatically configures audio input/output so start talking + // Automatically configures audio input/output — start talking const session = new RealtimeSession(agent); await session.connect({ apiKey }); } diff --git a/packages/agents-core/src/runContext.ts b/packages/agents-core/src/runContext.ts index 8eb03202..250d2cd2 100644 --- a/packages/agents-core/src/runContext.ts +++ b/packages/agents-core/src/runContext.ts @@ -13,7 +13,7 @@ type ApprovalRecord = { */ export class RunContext { /** - * The context object passed by you to the `Runner.run()` + * The context object you passed to the `Runner.run()` method. */ context: TContext; diff --git a/packages/agents-core/test/tracing.test.ts b/packages/agents-core/test/tracing.test.ts index 32ef1fdb..a1fc26a2 100644 --- a/packages/agents-core/test/tracing.test.ts +++ b/packages/agents-core/test/tracing.test.ts @@ -73,7 +73,7 @@ class TestProcessor implements TracingProcessor { } // ----------------------------------------------------------------------------------------- -// Tests for utils.ts +// Tests for utils.ts. // ----------------------------------------------------------------------------------------- describe('tracing/utils', () => { @@ -107,7 +107,7 @@ describe('tracing/utils', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for Span / Trace core behaviour +// Tests for Span / Trace core behavior. // ----------------------------------------------------------------------------------------- describe('Trace & Span lifecycle', () => { @@ -161,7 +161,7 @@ describe('Trace & Span lifecycle', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for BatchTraceProcessor (happy‑path) +// Tests for BatchTraceProcessor (happy‑path). // ----------------------------------------------------------------------------------------- describe('BatchTraceProcessor', () => { @@ -194,7 +194,7 @@ describe('BatchTraceProcessor', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for high‑level context helpers +// Tests for high‑level context helpers. 
// ----------------------------------------------------------------------------------------- describe('withTrace & span helpers (integration)', () => { @@ -262,7 +262,7 @@ describe('withTrace & span helpers (integration)', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for MultiTracingProcessor +// Tests for MultiTracingProcessor. // ----------------------------------------------------------------------------------------- describe('MultiTracingProcessor', () => { @@ -286,10 +286,10 @@ describe('MultiTracingProcessor', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for TraceProvider disabled flag +// Tests for TraceProvider disabled flag. // ----------------------------------------------------------------------------------------- -describe('TraceProvider disabled behaviour', () => { +describe('TraceProvider disabled behavior', () => { it('returns NoopTrace/NoopSpan when disabled', () => { const provider = new TraceProvider(); provider.setDisabled(true); @@ -308,7 +308,7 @@ describe('TraceProvider disabled behaviour', () => { }); // ----------------------------------------------------------------------------------------- -// Tests for ResponseSpanData serialization +// Tests for ResponseSpanData serialization. // ----------------------------------------------------------------------------------------- describe('ResponseSpanData serialization', () => { diff --git a/packages/agents-realtime/src/openaiRealtimeWebRtc.ts b/packages/agents-realtime/src/openaiRealtimeWebRtc.ts index a844d0c2..39efc4cc 100644 --- a/packages/agents-realtime/src/openaiRealtimeWebRtc.ts +++ b/packages/agents-realtime/src/openaiRealtimeWebRtc.ts @@ -152,7 +152,7 @@ export class OpenAIRealtimeWebRTC const isClientKey = typeof apiKey === 'string' && apiKey.startsWith('ek_'); if (isBrowserEnvironment() && !this.#useInsecureApiKey && !isClientKey) { throw new UserError( - 'Using the WebRTC connection in a browser environment requires an insecure API key. Please use a WebSocket connection instead or set the useInsecureApiKey option to true.', + 'Using the WebRTC connection in a browser environment requires an ephemeral client key. If you need to use a regular API key, use the WebSocket transport or set the `useInsecureApiKey` option to true.', ); } diff --git a/packages/agents-realtime/src/realtimeSession.ts b/packages/agents-realtime/src/realtimeSession.ts index 1c7da253..bdfde0a3 100644 --- a/packages/agents-realtime/src/realtimeSession.ts +++ b/packages/agents-realtime/src/realtimeSession.ts @@ -135,12 +135,12 @@ export type RealtimeSessionConnectOptions = { }; /** - * A `RealtimeSession` is the corner piece of building Voice Agents. It's the equivalent of a + * A `RealtimeSession` is the cornerstone of building Voice Agents. It's the equivalent of a * Runner in text-based agents except that it automatically handles multiple turns by maintaining a * connection with the underlying transport layer. * * The session handles managing the local history copy, executes tools, runs output guardrails, and - * facilities handoffs. + * facilitates handoffs. * * The actual audio handling and generation of model responses is handled by the underlying * transport layer. 
By default if you are using a browser with WebRTC support, the session will diff --git a/packages/agents/README.md b/packages/agents/README.md index c4d7730d..cb3c9e7c 100644 --- a/packages/agents/README.md +++ b/packages/agents/README.md @@ -24,7 +24,7 @@ Explore the [`examples/`](examples/) directory to see the SDK in action. - [x] **Guardrails**: Input and output validation for safety and reliability. - [x] **Parallelization**: Run agents or tool calls in parallel and aggregate results. - [x] **Human-in-the-Loop**: Integrate human approval or intervention into workflows. -- [x] **Realtime Voice Agents**: Build realtime voice agents using WebRTC or Websockets +- [x] **Realtime Voice Agents**: Build realtime voice agents using WebRTC or WebSockets - [x] **Local MCP Server Support**: Give an Agent access to a locally running MCP server to provide tools - [x] **Separate optimized browser package**: Dedicated package meant to run in the browser for Realtime agents. - [x] **Broader model support**: Use non-OpenAI models through the Vercel AI SDK adapter @@ -160,11 +160,11 @@ const agent = new RealtimeAgent({ tools: [getWeatherTool], }); -// Intended to be run the browser -const { apiKey } = await fetch('/path/to/ephemerial/key/generation').then( +// Intended to run in the browser +const { apiKey } = await fetch('/path/to/ephemeral/key/generation').then( (resp) => resp.json(), ); -// automatically configures audio input/output so start talking +// Automatically configures audio input/output — start talking const session = new RealtimeSession(agent); await session.connect({ apiKey }); ``` @@ -198,34 +198,6 @@ The final output is the last thing the agent produces in the loop. - If the maximum number of turns is exceeded, a `MaxTurnsExceededError` is thrown. - If a guardrail is triggered, a `GuardrailTripwireTriggered` exception is raised. -## Documentation - -To view the documentation locally: - -```bash -pnpm docs:dev -``` - -Then visit [http://localhost:4321](http://localhost:4321) in your browser. - -## Development - -If you want to contribute or edit the SDK/examples: - -1. Install dependencies - - ```bash - pnpm install - ``` - -2. Build the project - - ```bash - pnpm build - ``` - -3. Run tests, linter, etc. (add commands as appropriate for your project) - ## Acknowledgements We'd like to acknowledge the excellent work of the open-source community, especially: