[Documentation] Using non-OpenAI models #2076
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage diff:

| | main | #2076 | +/- |
|---|---|---|---|
| Coverage | 36.88% | 36.90% | +0.01% |
| Files | 68 | 68 | |
| Lines | 7062 | 7061 | -1 |
| Branches | 1540 | 1541 | +1 |
| Hits | 2605 | 2606 | +1 |
| Misses | 4228 | 4225 | -3 |
| Partials | 229 | 230 | +1 |

Full report available in Codecov by Sentry.
Thanks for the PR! My comments are mostly about organizing the content into smaller parts for easier navigation and better indexability.
Review comments (now outdated/resolved) were left on:
- website/docs/topics/non-openai-models/cloud-based-proxy-servers.md
- website/docs/topics/non-openai-models/about-using-nonopenai-models.md
Thanks! To run the pre-commit formatting checks locally, you can install pre-commit and run it over the repository, typically with `pre-commit run --all-files`.
Squashed commits:
- Addition of Non-OpenAI LLM section and main doc page
- Continued writing...
- Continued writing - cloud-based proxy servers
- Folder renamed
- Further writing
- together.ai example added
- Local proxy server added, diagram added, tidy up
- Added vLLM to local proxy servers documentation
- As per @ekzhu's feedback, individual pages and tidy up
- Added reference to LM Studio and renamed file
- Fixed incorrect huggingface.co link
- Run pre-commit checks, added LM Studio redirect

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
As there's a lot of interest in running AutoGen with non-OpenAI models and a drive to add this to the documentation (#1994), here's a start on the documentation for using non-OpenAI models with cloud-based and locally run proxy servers.
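To make the scope concrete, the core pattern these pages document is pointing AutoGen's OpenAI-compatible client at an alternative endpoint via `base_url`. A minimal sketch, assuming the AutoGen 0.2 config-list format and an OpenAI-compatible cloud endpoint; the model name, URL, and environment variable below are placeholders rather than values taken from the new pages:

```python
import os

import autogen

# One entry per non-OpenAI endpoint; AutoGen's client talks to it as an
# OpenAI-compatible server because of the base_url override.
config_list = [
    {
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # placeholder model name
        "base_url": "https://api.together.xyz/v1",        # placeholder cloud endpoint
        "api_key": os.environ.get("TOGETHER_API_KEY", ""),
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
)

# The chat itself is unchanged; only the endpoint the assistant calls differs.
user_proxy.initiate_chat(assistant, message="Write a haiku about proxy servers.")
```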
There's a new menu item under Docs > Topics called Using Non-OpenAI Models. Under it is the start of the documentation for using non-OpenAI models, with three pages: an introduction, cloud-based proxy servers, and locally run proxy servers.
I'll add vLLM to the locally run proxy servers page and am also looking to add huggingface.co to the cloud-based page (if I can get a working example).
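For the locally run case the same config pattern applies, just with the endpoint swapped for a local OpenAI-compatible server such as the one vLLM serves. A sketch, assuming vLLM's default address of http://localhost:8000/v1; the model name is a placeholder, and the api_key is a dummy value since a local server generally doesn't check it but AutoGen expects one to be set:

```python
import autogen

# Point the client at a locally running OpenAI-compatible server
# (for example, vLLM's OpenAI-compatible API server on its default port).
local_config_list = [
    {
        "model": "mistralai/Mistral-7B-Instruct-v0.2",  # placeholder: whatever model the local server loaded
        "base_url": "http://localhost:8000/v1",
        "api_key": "not-needed",  # dummy value; local servers typically ignore it
    }
]

agent = autogen.ConversableAgent(
    name="local_agent",
    llm_config={"config_list": local_config_list},
)
```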
It would be great for others to add examples for cloud-based and locally run proxy servers.
Adding custom model classes would also be a good additional page and examples.
Incorporating #2044 (the LM Studio example) by @ekzhu is necessary too, and it has a matching folder structure.
Related issue number
#1994 - [Documentation] Topic Category for AutoGen + Non-OpenAI Models is the driver for this PR.
#2044 - Add LM Studio Example in Topics, to be incorporated / merged.
Checks
UPDATE
The current documentation structure is now:
Docs > Topics > Using Non-OpenAI Models
Under this is an introduction page and then individual pages for cloud-based and local proxy servers.
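For reference, the file paths touched by the review comments suggest roughly the following layout under website/docs/topics/non-openai-models/; the first two filenames appear in this PR's review threads, while the third is a hypothetical name for the locally run proxy servers page:
- about-using-nonopenai-models.md
- cloud-based-proxy-servers.md
- local-proxy-servers.md (hypothetical filename)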