OpenAI Agents financial research and reasoning examples #226
Conversation
. (moved to easily replyable comment, can't delete this unfortunately)
Currently, the top level of openai_agents has all of the samples mashed together as one project in the top-level dir. I think that makes adding this as a subdir a bit confusing. I wonder if we should move the current sample set into a subdir (e.g. "simple" or something) or make each sample its own dir, as we are doing with these newer ones.
EDIT: I now see #221 will fix this, thanks! Can disregard this comment.
I am a little concerned we are adding lots of OpenAI sample code with no tests to confirm it continues to work. Granted, I know this is unfortunately OpenAI's approach to samples, but ideally we can have some test somewhere. We have other samples where we have done this, and it comes back to bite us hard when people report they stop working and we have to scramble (this is the case with the Sentry sample, which has now remained broken for a long time).
(Just adding this general comment; deferring code review to Dan/Tim.)
I agree.
Before, we said testing the samples would be redundant because the samples were basically the same as the tests already in the SDK. Now we are adding more samples, and they will become unmaintainable without tests.
The best way to do this is with end-to-end tests, because we want to validate the samples as written. We'll need to write a little tool to do this, because some samples are interactive and all of them call LLMs, making it difficult to validate that they do the right thing. We are probably just asserting that they run to completion without errors.
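A minimal sketch of what such a runner could look like, assuming the samples are standalone scripts under an `openai_agents/` directory (the directory layout, `run_*.py` naming convention, and timeout value here are all hypothetical, not taken from this repo):

```python
# Hypothetical end-to-end sample runner: executes each sample script as a
# subprocess and only asserts that it exits cleanly, since validating LLM
# output itself is out of scope.
import subprocess
import sys
from pathlib import Path

SAMPLES_DIR = Path("openai_agents")  # assumed location of the samples


def run_sample(script: Path, timeout: int = 300) -> None:
    """Run one sample as a subprocess and assert it exits with code 0."""
    result = subprocess.run(
        [sys.executable, str(script)],
        capture_output=True,
        text=True,
        timeout=timeout,  # guard against samples that hang on input
    )
    assert result.returncode == 0, f"{script} failed:\n{result.stderr}"


def main() -> None:
    # Assumed convention: each runnable sample is named run_*.py.
    for script in sorted(SAMPLES_DIR.glob("**/run_*.py")):
        run_sample(script)


if __name__ == "__main__":
    main()
```

Interactive samples would additionally need their stdin scripted (e.g. via `subprocess.run(..., input=...)`), which is the part that likely needs a purpose-built tool.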
👍 Should we open an issue to track this?
…nancial-research-reasoning
What was changed
Ported the financial research and reasoning examples from the OpenAI Agents SDK.
Why?
Checklist
Closes
How was this tested: