⚡️ perf: Improve the performance of refreshModelProviderList #6672
Conversation
LGTM. Thanks for your contribution! This part will be removed in V2. I think the database mode and the PGlite mode have already completely solved this issue.
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main    #6672      +/-   ##
==========================================
+ Coverage   91.56%   91.61%   +0.05%
==========================================
  Files         716      716
  Lines       67236    67243       +7
  Branches     3239     3248       +9
==========================================
+ Hits        61567    61608      +41
+ Misses       5669     5635      -34
❤️ Great PR @Last-Order ❤️ The growth of the project is inseparable from user feedback and contributions. Thanks for your contribution! If you are interested in the lobehub developer community, please join our Discord and DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.
🎉 This PR is included in version 1.69.5 🎉
Your semantic-release bot 📦🚀
💻 Change Type
🔀 Description of Change
When setting model availability on the LLM settings page, getEnableModelsById is called thousands of times, causing a performance issue. This PR partially addresses the problem, but further work is needed to fully resolve it.
📝 Additional Information
On my device, the average time for this operation decreased from ~1000ms to ~200ms.
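The general fix for this kind of hot path is to stop re-scanning the provider list on every call and instead build a lookup once, then answer each query in constant time. A minimal sketch of the idea (the `Provider` shape and both helper names are hypothetical illustrations, not lobe-chat's actual types or API):

```typescript
// Hypothetical provider shape for illustration; lobe-chat's real types differ.
interface Provider {
  id: string;
  enabledModels: string[];
}

// Naive version: scans the provider list on every call, so N calls
// over P providers cost O(N * P) in total.
function getEnabledModelsNaive(providers: Provider[], id: string): string[] {
  return providers.find((p) => p.id === id)?.enabledModels ?? [];
}

// Precomputed version: build an id -> models map once, then each
// lookup is O(1), so N calls cost O(P + N) in total.
function makeEnabledModelsLookup(providers: Provider[]): (id: string) => string[] {
  const byId = new Map(
    providers.map((p) => [p.id, p.enabledModels] as [string, string[]]),
  );
  return (id) => byId.get(id) ?? [];
}
```

With thousands of calls per settings change, moving the list scan out of the per-call path is what turns the ~1000ms operation into a ~200ms one; the same shape of fix applies whether the cache lives in a selector, a memo, or a plain closure as above.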