Switch to OpenAI Responses API #1981
Conversation
🦋 Changeset detected. Latest commit: 4ffac21. The changes in this PR will be included in the next version bump. This PR includes changesets to release 3 packages.
PR Review
Force-pushed from 45eaa25 to 250f07b.
Diff excerpt under review:

    return openai.chat(config.AI_MODEL_NAME);
    switch (additionalOptions.api_type) {
No one has used this `openai` SDK yet, since we only added it yesterday in your other PR (not released yet).
Why not just switch everything to `openai.responses` instead of adding configuration? It's the default in the AI SDK.
Since AI SDK 5, the OpenAI Responses API is called by default (unless you specify e.g. `openai.chat`).
https://ai-sdk.dev/providers/ai-sdk-providers/openai
Thoughts?
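To make the suggestion concrete, here is a minimal sketch (not the real SDK surface; `resolveApi` is an illustrative name) of the selection behavior under discussion: with AI SDK 5, omitting an explicit API choice means the Responses API, while `.chat(...)` opts into Chat Completions.

```typescript
// Hypothetical sketch of the api_type dispatch this PR would remove.
// Assumption: api_type comes from AI_ADDITIONAL_OPTIONS and may be unset.
type ApiType = "responses" | "chat";

// Which underlying OpenAI API a given configuration would hit.
// Mirrors AI SDK 5's behavior: no explicit choice means Responses.
function resolveApi(apiType?: ApiType): ApiType {
  return apiType ?? "responses"; // Responses API is the default
}
```

Under this default, the configuration switch becomes unnecessary: callers that never set `api_type` already get the Responses API.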
Sure, I was actually doing that before; I changed it after the automated review comments 😄. Let me make it the default and remove `AI_ADDITIONAL_OPTIONS`.
@vinzee looks like there are some linting issues.
According to OpenAI docs, the Responses API is recommended for all new projects. https://developers.openai.com/api/docs/guides/migrate-to-responses
## Summary

#1960 added support for OpenAI's chat completions API. This change switches to using [OpenAI's new Responses API](https://developers.openai.com/api/docs/guides/migrate-to-responses) instead.

### How to test locally

1. Set env vars: `AI_PROVIDER=openai AI_API_KEY= AI_BASE_URL=<> AI_MODEL_NAME=<> AI_REQUEST_HEADERS={"X-Client-Id":"","X-Username":""} AI_ADDITIONAL_OPTIONS={"API_TYPE":"responses"}`
2. Open HyperDX's chart explorer and use the AI assistant chart builder, e.g. "show me error count by service in the last hour".
3. Confirm the assistant returns a valid chart config.

### References

- Linear Issue:
- Related PRs:

Co-authored-by: peter-leonov-ch <209667683+peter-leonov-ch@users.noreply.github.com>
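Some of the env vars above carry JSON values (`AI_REQUEST_HEADERS`, `AI_ADDITIONAL_OPTIONS`). A minimal sketch of how such values could be parsed and validated; the helper name `parseJsonEnv` is illustrative, not from the codebase:

```typescript
// Hypothetical helper: parse a JSON-valued env var, failing loudly
// on malformed JSON and falling back to {} when the var is unset.
function parseJsonEnv(
  name: string,
  raw: string | undefined,
): Record<string, string> {
  if (!raw) return {}; // unset vars fall back to an empty object
  try {
    return JSON.parse(raw);
  } catch {
    throw new Error(`${name} must be valid JSON, got: ${raw}`);
  }
}

// e.g. AI_REQUEST_HEADERS={"X-Client-Id":"","X-Username":""}
const headers = parseJsonEnv(
  "AI_REQUEST_HEADERS",
  '{"X-Client-Id":"","X-Username":""}',
);
```

Failing fast on malformed JSON here surfaces configuration typos at startup instead of as opaque request errors later.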