From 8140ac49e4571f26bd89edd0b80e1715845fe1cf Mon Sep 17 00:00:00 2001 From: Dominik Kundel Date: Tue, 3 Jun 2025 10:48:38 -0700 Subject: [PATCH] docs: fixed links --- docs/src/content/docs/extensions/twilio.mdx | 4 +-- docs/src/content/docs/guides/agents.mdx | 22 ++++++------ docs/src/content/docs/guides/guardrails.mdx | 10 +++--- .../content/docs/guides/human-in-the-loop.mdx | 2 +- docs/src/content/docs/guides/models.mdx | 32 ++++++++--------- docs/src/content/docs/guides/quickstart.mdx | 6 ++-- docs/src/content/docs/guides/results.mdx | 26 +++++++------- .../content/docs/guides/running-agents.mdx | 36 +++++++++---------- docs/src/content/docs/guides/tools.mdx | 8 ++--- docs/src/content/docs/guides/tracing.mdx | 20 +++++------ docs/src/content/docs/guides/voice-agents.mdx | 2 +- .../docs/guides/voice-agents/build.mdx | 4 +-- .../docs/guides/voice-agents/quickstart.mdx | 22 ++++++------ .../src/content/docs/ja/extensions/twilio.mdx | 2 +- docs/src/content/docs/ja/guides/agents.mdx | 22 ++++++------ .../src/content/docs/ja/guides/guardrails.mdx | 10 +++--- .../docs/ja/guides/human-in-the-loop.mdx | 2 +- docs/src/content/docs/ja/guides/models.mdx | 32 ++++++++--------- .../src/content/docs/ja/guides/quickstart.mdx | 6 ++-- docs/src/content/docs/ja/guides/results.mdx | 26 +++++++------- .../content/docs/ja/guides/running-agents.md | 8 ++--- .../content/docs/ja/guides/running-agents.mdx | 36 +++++++++---------- docs/src/content/docs/ja/guides/tools.mdx | 8 ++--- docs/src/content/docs/ja/guides/tracing.mdx | 8 ++--- .../content/docs/ja/guides/voice-agents.mdx | 2 +- .../docs/ja/guides/voice-agents/build.mdx | 4 +-- .../ja/guides/voice-agents/quickstart.mdx | 22 ++++++------ 27 files changed, 189 insertions(+), 193 deletions(-) diff --git a/docs/src/content/docs/extensions/twilio.mdx b/docs/src/content/docs/extensions/twilio.mdx index 5f7f638c..029182f4 100644 --- a/docs/src/content/docs/extensions/twilio.mdx +++ b/docs/src/content/docs/extensions/twilio.mdx @@ -9,7 
+9,7 @@ import twilioServerExample from '../../../../../examples/realtime-twilio/index.t Twilio offers a [Media Streams API](https://www.twilio.com/docs/voice/media-streams) that sends the raw audio from a phone call to a WebSocket server. This setup can be used to connect your -[voice agents](../guides/voice-agents) to Twilio. You can use the default Realtime Session transport +[voice agents](/openai-agents-js/guides/voice-agents) to Twilio. You can use the default Realtime Session transport in `websocket` mode to connect the events coming from Twilio to your Realtime Session. However, this requires you to set the right audio format and adjust your own interruption timing as phone calls will naturally introduce more latency than a web-based conversation. @@ -63,7 +63,7 @@ connection to Twilio for you, including handling interruptions and audio forward Any event and behavior that you would expect from a `RealtimeSession` will work as expected -including tool calls, guardrails, and more. Read the [voice agents guide](/guides/voice-agents) +including tool calls, guardrails, and more. Read the [voice agents guide](/openai-agents-js/guides/voice-agents) for more information on how to use the `RealtimeSession` with voice agents. ## Tips and Considerations diff --git a/docs/src/content/docs/guides/agents.mdx b/docs/src/content/docs/guides/agents.mdx index 164d462a..470d1e87 100644 --- a/docs/src/content/docs/guides/agents.mdx +++ b/docs/src/content/docs/guides/agents.mdx @@ -33,13 +33,13 @@ The rest of this page walks through every Agent feature in more detail. The `Agent` constructor takes a single configuration object. The most commonly‑used properties are shown below. -| Property | Required | Description | -| --------------- | -------- | ------------------------------------------------------------------------------------------- | -| `name` | yes | A short human‑readable identifier. 
| -| `instructions` | yes | System prompt (string **or** function – see [Dynamic instructions](#dynamic-instructions)). | -| `model` | no | Model name **or** a custom [`Model`](/openai/agents/interfaces/model/) implementation. | -| `modelSettings` | no | Tuning parameters (temperature, top_p, etc.). | -| `tools` | no | Array of [`Tool`](/openai/agents/type-aliases/tool/) instances the model can call. | +| Property | Required | Description | +| --------------- | -------- | ------------------------------------------------------------------------------------------------------- | +| `name` | yes | A short human‑readable identifier. | +| `instructions` | yes | System prompt (string **or** function – see [Dynamic instructions](#dynamic-instructions)). | +| `model` | no | Model name **or** a custom [`Model`](/openai-agents-js/openai/agents/interfaces/model/) implementation. | +| `modelSettings` | no | Tuning parameters (temperature, top_p, etc.). | +| `tools` | no | Array of [`Tool`](/openai-agents-js/openai/agents/type-aliases/tool/) instances the model can call. | @@ -83,7 +83,7 @@ use a _triage agent_ that routes the conversation to a more specialised sub‑ag -You can read more about this pattern in the [handoffs guide](/guides/handoffs). +You can read more about this pattern in the [handoffs guide](/openai-agents-js/guides/handoffs). --- @@ -118,7 +118,7 @@ For advanced use‑cases you can observe the Agent lifecycle by listening on eve Guardrails allow you to validate or transform user input and agent output. They are configured via the `inputGuardrails` and `outputGuardrails` arrays. See the -[guardrails guide](/guides/guardrails) for details. +[guardrails guide](/openai-agents-js/guides/guardrails) for details. --- @@ -166,6 +166,6 @@ const agent = new Agent({ ## Next steps -- Learn how to [run agents](/guides/running-agents). -- Dive into [tools](/guides/tools), [guardrails](/guides/guardrails), and [models](/guides/models). 
+- Learn how to [run agents](/openai-agents-js/guides/running-agents). +- Dive into [tools](/openai-agents-js/guides/tools), [guardrails](/openai-agents-js/guides/guardrails), and [models](/openai-agents-js/guides/models). - Explore the full TypeDoc reference under **@openai/agents** in the sidebar. diff --git a/docs/src/content/docs/guides/guardrails.mdx b/docs/src/content/docs/guides/guardrails.mdx index 94cfb71d..cf179607 100644 --- a/docs/src/content/docs/guides/guardrails.mdx +++ b/docs/src/content/docs/guides/guardrails.mdx @@ -19,8 +19,8 @@ There are two kinds of guardrails: Input guardrails run in three steps: 1. The guardrail receives the same input passed to the agent. -2. The guardrail function executes and returns a [`GuardrailFunctionOutput`](/openai/agents/interfaces/guardrailfunctionoutput) wrapped inside an [`InputGuardrailResult`](/openai/agents/interfaces/inputguardrailresult). -3. If `tripwireTriggered` is `true`, an [`InputGuardrailTripwireTriggered`](/openai/agents/classes/inputguardrailtripwiretriggered) error is thrown. +2. The guardrail function executes and returns a [`GuardrailFunctionOutput`](/openai-agents-js/openai/agents/interfaces/guardrailfunctionoutput) wrapped inside an [`InputGuardrailResult`](/openai-agents-js/openai/agents/interfaces/inputguardrailresult). +3. If `tripwireTriggered` is `true`, an [`InputGuardrailTripwireTriggered`](/openai-agents-js/openai/agents/classes/inputguardrailtripwiretriggered) error is thrown. > **Note** > Input guardrails are intended for user input, so they only run if the agent is the _first_ agent in the workflow. Guardrails are configured on the agent itself because different agents often require different guardrails. @@ -30,11 +30,11 @@ Input guardrails run in three steps: Output guardrails follow the same pattern: 1. The guardrail receives the same input passed to the agent. -2. 
The guardrail function executes and returns a [`GuardrailFunctionOutput`](/openai/agents/interfaces/guardrailfunctionoutput) wrapped inside an [`OutputGuardrailResult`](/openai/agents/interfaces/outputguardrailresult). -3. If `tripwireTriggered` is `true`, an [`OutputGuardrailTripwireTriggered`](/openai/agents/classes/outputguardrailtripwiretriggered) error is thrown. +2. The guardrail function executes and returns a [`GuardrailFunctionOutput`](/openai-agents-js/openai/agents/interfaces/guardrailfunctionoutput) wrapped inside an [`OutputGuardrailResult`](/openai-agents-js/openai/agents/interfaces/outputguardrailresult). +3. If `tripwireTriggered` is `true`, an [`OutputGuardrailTripwireTriggered`](/openai-agents-js/openai/agents/classes/outputguardrailtripwiretriggered) error is thrown. > **Note** -> Output guardrails only run if the agent is the _last_ agent in the workflow. For realtime voice interactions see [the voice agents guide](./voice-agents#guardrails). +> Output guardrails only run if the agent is the _last_ agent in the workflow. For realtime voice interactions see [the voice agents guide](/openai-agents-js/guides/voice-agents/build#guardrails). ## Tripwires diff --git a/docs/src/content/docs/guides/human-in-the-loop.mdx b/docs/src/content/docs/guides/human-in-the-loop.mdx index 1b0a1752..66cc9999 100644 --- a/docs/src/content/docs/guides/human-in-the-loop.mdx +++ b/docs/src/content/docs/guides/human-in-the-loop.mdx @@ -29,7 +29,7 @@ You can define a tool that requires approval by setting the `needsApproval` opti - If approval has not been granted or rejected, the tool will return a static message to the agent that the tool call cannot be executed. - If approval / rejection is missing it will trigger a tool approval request. 3. The agent will gather all tool approval requests and interrupt the execution. -4. If there are any interruptions, the [result](/guides/result) will contain an `interruptions` array describing pending steps. 
A `ToolApprovalItem` with `type: "tool_approval_item"` appears when a tool call requires confirmation. +4. If there are any interruptions, the [result](/openai-agents-js/guides/results) will contain an `interruptions` array describing pending steps. A `ToolApprovalItem` with `type: "tool_approval_item"` appears when a tool call requires confirmation. 5. You can call `result.state.approve(interruption)` or `result.state.reject(interruption)` to approve or reject the tool call. 6. After handling all interruptions, you can resume execution by passing the `result.state` back into `runner.run(agent, state)` where `agent` is the original agent that triggered the overall run. 7. The flow starts again from step 1. diff --git a/docs/src/content/docs/guides/models.mdx b/docs/src/content/docs/guides/models.mdx index 2f97a222..0ecd1a3f 100644 --- a/docs/src/content/docs/guides/models.mdx +++ b/docs/src/content/docs/guides/models.mdx @@ -14,9 +14,9 @@ import setTracingExportApiKeyExample from '../../../../../examples/docs/config/s Every Agent ultimately calls an LLM. The SDK abstracts models behind two lightweight interfaces: -- [`Model`](/openai/agents/interfaces/model) – knows how to make _one_ request against a +- [`Model`](/openai-agents-js/openai/agents/interfaces/model) – knows how to make _one_ request against a specific API. -- [`ModelProvider`](/openai/agents/interfaces/modelprovider) – resolves human‑readable +- [`ModelProvider`](/openai-agents-js/openai/agents/interfaces/modelprovider) – resolves human‑readable model **names** (e.g. `'gpt‑4o'`) to `Model` instances. In day‑to‑day work you normally only interact with model **names** and occasionally @@ -67,17 +67,17 @@ The OpenAI provider defaults to `gpt‑4o`. Override per agent or globally: `ModelSettings` mirrors the OpenAI parameters but is provider‑agnostic. 
-| Field | Type | Notes | -| ------------------- | ------------------------------------------ | ----------------------------------------------------------- | -| `temperature` | `number` | Creativity vs. determinism. | -| `topP` | `number` | Nucleus sampling. | -| `frequencyPenalty` | `number` | Penalise repeated tokens. | -| `presencePenalty` | `number` | Encourage new tokens. | -| `toolChoice` | `'auto' \| 'required' \| 'none' \| string` | See [forcing tool use](/guides/agents.md#forcing-tool-use). | -| `parallelToolCalls` | `boolean` | Allow parallel function calls where supported. | -| `truncation` | `'auto' \| 'disabled'` | Token truncation strategy. | -| `maxTokens` | `number` | Maximum tokens in the response. | -| `store` | `boolean` | Persist the response for retrieval / RAG workflows. | +| Field | Type | Notes | +| ------------------- | ------------------------------------------ | ------------------------------------------------------------------------- | +| `temperature` | `number` | Creativity vs. determinism. | +| `topP` | `number` | Nucleus sampling. | +| `frequencyPenalty` | `number` | Penalise repeated tokens. | +| `presencePenalty` | `number` | Encourage new tokens. | +| `toolChoice` | `'auto' \| 'required' \| 'none' \| string` | See [forcing tool use](/openai-agents-js/guides/agents#forcing-tool-use). | +| `parallelToolCalls` | `boolean` | Allow parallel function calls where supported. | +| `truncation` | `'auto' \| 'disabled'` | Token truncation strategy. | +| `maxTokens` | `number` | Maximum tokens in the response. | +| `store` | `boolean` | Persist the response for retrieval / RAG workflows. | Attach settings at either level: @@ -118,6 +118,6 @@ inspect the complete execution graph of your workflow. ## Next steps -- Explore [running agents](/guides/running-agents). -- Give your models super‑powers with [tools](/guides/tools). -- Add [guardrails](/guides/guardrails) or [tracing](/guides/tracing) as needed. 
+- Explore [running agents](/openai-agents-js/guides/running-agents). +- Give your models super‑powers with [tools](/openai-agents-js/guides/tools). +- Add [guardrails](/openai-agents-js/guides/guardrails) or [tracing](/openai-agents-js/guides/tracing) as needed. diff --git a/docs/src/content/docs/guides/quickstart.mdx b/docs/src/content/docs/guides/quickstart.mdx index e020dd8b..f4b8aa83 100644 --- a/docs/src/content/docs/guides/quickstart.mdx +++ b/docs/src/content/docs/guides/quickstart.mdx @@ -169,6 +169,6 @@ To review what happened during your agent run, navigate to the Learn how to build more complex agentic flows: -- Learn about configuring [Agents](/guides/agents). -- Learn about [running agents](/guides/running-agents). -- Learn about [tools](/guides/tools), [guardrails](/guides/guardrails), and [models](/guides/models). +- Learn about configuring [Agents](/openai-agents-js/guides/agents). +- Learn about [running agents](/openai-agents-js/guides/running-agents). +- Learn about [tools](/openai-agents-js/guides/tools), [guardrails](/openai-agents-js/guides/guardrails), and [models](/openai-agents-js/guides/models). diff --git a/docs/src/content/docs/guides/results.mdx b/docs/src/content/docs/guides/results.mdx index f4df3b2c..46d370bb 100644 --- a/docs/src/content/docs/guides/results.mdx +++ b/docs/src/content/docs/guides/results.mdx @@ -7,10 +7,10 @@ import { Code } from '@astrojs/starlight/components'; import handoffFinalOutputTypes from '../../../../../examples/docs/results/handoffFinalOutputTypes.ts?raw'; import historyLoop from '../../../../../examples/docs/results/historyLoop.ts?raw'; -When you [run your agent](/guides/running-agents), you will either receive a: +When you [run your agent](/openai-agents-js/guides/running-agents), you will either receive a: -- [`RunResult`](/openai/agents/classes/runresult) if you call `run` without `stream: true` -- [`StreamedRunResult`](/openai/agents/classes/streamedrunresult) if you call `run` with `stream: true`. 
For details on streaming, also check the [streaming guide](/guides/streaming). +- [`RunResult`](/openai-agents-js/openai/agents/classes/runresult) if you call `run` without `stream: true` +- [`StreamedRunResult`](/openai-agents-js/openai/agents/classes/streamedrunresult) if you call `run` with `stream: true`. For details on streaming, also check the [streaming guide](/openai-agents-js/guides/streaming). ## Final output @@ -52,23 +52,23 @@ In streaming mode it can also be useful to access the `currentAgent` property th ## New items -The `newItems` property contains the new items generated during the run. The items are [`RunItem`](/openai/agents/type-aliases/runitem)s. A run item wraps the raw item generated by the LLM. These can be used to access additionally to the output of the LLM which agent these events were associated with. +The `newItems` property contains the new items generated during the run. The items are [`RunItem`](/openai-agents-js/openai/agents/type-aliases/runitem)s. A run item wraps the raw item generated by the LLM. In addition to the output of the LLM, these can be used to determine which agent the events were associated with. -- [`RunMessageOutputItem`](/openai/agents/classes/runmessageoutputitem) indicates a message from the LLM. The raw item is the message generated. -- [`RunHandoffCallItem`](/openai/agents/classes/runhandoffcallitem) indicates that the LLM called the handoff tool. The raw item is the tool call item from the LLM. -- [`RunHandoffOutputItem`](/openai/agents/classes/runhandoffoutputitem) indicates that a handoff occurred. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item. -- [`RunToolCallItem`](/openai/agents/classes/runtoolcallitem) indicates that the LLM invoked a tool. -- [`RunToolCallOutputItem`](/openai/agents/classes/runtoolcalloutputitem) indicates that a tool was called. The raw item is the tool response. You can also access the tool output from the item. 
-- [`RunReasoningItem`](/openai/agents/classes/runreasoningitem) indicates a reasoning item from the LLM. The raw item is the reasoning generated. -- [`RunToolApprovalItem`](/openai/agents/classes/runtoolapprovalitem) indicates that the LLM requested approval for a tool call. The raw item is the tool call item from the LLM. +- [`RunMessageOutputItem`](/openai-agents-js/openai/agents/classes/runmessageoutputitem) indicates a message from the LLM. The raw item is the message generated. +- [`RunHandoffCallItem`](/openai-agents-js/openai/agents/classes/runhandoffcallitem) indicates that the LLM called the handoff tool. The raw item is the tool call item from the LLM. +- [`RunHandoffOutputItem`](/openai-agents-js/openai/agents/classes/runhandoffoutputitem) indicates that a handoff occurred. The raw item is the tool response to the handoff tool call. You can also access the source/target agents from the item. +- [`RunToolCallItem`](/openai-agents-js/openai/agents/classes/runtoolcallitem) indicates that the LLM invoked a tool. +- [`RunToolCallOutputItem`](/openai-agents-js/openai/agents/classes/runtoolcalloutputitem) indicates that a tool was called. The raw item is the tool response. You can also access the tool output from the item. +- [`RunReasoningItem`](/openai-agents-js/openai/agents/classes/runreasoningitem) indicates a reasoning item from the LLM. The raw item is the reasoning generated. +- [`RunToolApprovalItem`](/openai-agents-js/openai/agents/classes/runtoolapprovalitem) indicates that the LLM requested approval for a tool call. The raw item is the tool call item from the LLM. ## State -The `state` property contains the state of the run. Most of what is attached to the `result` is derived from the `state` but the `state` is serializable/deserializable and can also be used as input for a subsequent call to `run` in case you need to [recover from an error](/guides/running-agents#exceptions) or deal with an [`interruption`](#interruptions). 
+The `state` property contains the state of the run. Most of what is attached to the `result` is derived from the `state` but the `state` is serializable/deserializable and can also be used as input for a subsequent call to `run` in case you need to [recover from an error](/openai-agents-js/guides/running-agents#exceptions) or deal with an [`interruption`](#interruptions). ## Interruptions -If you are using `needsApproval` in your agent, your `run` might trigger some `interruptions` that you need to handle before continuing. In that case `interruptions` will be an array of `ToolApprovalItem`s that caused the interruption. Check out the [human-in-the-loop guide](/guides/human-in-the-loop) for more information on how to work with interruptions. +If you are using `needsApproval` in your agent, your `run` might trigger some `interruptions` that you need to handle before continuing. In that case `interruptions` will be an array of `ToolApprovalItem`s that caused the interruption. Check out the [human-in-the-loop guide](/openai-agents-js/guides/human-in-the-loop) for more information on how to work with interruptions. ## Other information diff --git a/docs/src/content/docs/guides/running-agents.mdx b/docs/src/content/docs/guides/running-agents.mdx index 60bdd3b3..cf098905 100644 --- a/docs/src/content/docs/guides/running-agents.mdx +++ b/docs/src/content/docs/guides/running-agents.mdx @@ -19,7 +19,7 @@ Alternatively, you can create your own runner instance: -After running your agent, you will receive a [result](/guides/results) object that contains the final output and the full history of the run. +After running your agent, you will receive a [result](/openai-agents-js/guides/results) object that contains the final output and the full history of the run. ## The agent loop @@ -32,7 +32,7 @@ The runner then runs a loop: - **Final output** → return. - **Handoff** → switch to the new agent, keep the accumulated conversation history, go to 1. 
- **Tool calls** → execute tools, append their results to the conversation, go to 1. -3. Throw [`MaxTurnsExceededError`](/openai/agents-core/classes/maxturnsexceedederror) once `maxTurns` is reached. +3. Throw [`MaxTurnsExceededError`](/openai-agents-js/openai/agents-core/classes/maxturnsexceedederror) once `maxTurns` is reached.