Do we integrate local models? #81
IntelligenzaArtificiale started this conversation in Polls
Replies: 1 comment 1 reply
-
Integrating open-source models is a step towards democratizing AI, but it is important to recognize that their current performance does not yet match that of models like GPT-3 or Bing Chat. We should therefore work on improving the performance of open-source models while considering their integration in parallel. Once a reasonable level of performance parity is reached, we can shift our focus away from proprietary models. Ultimately, the decision to integrate local models depends on how much priority is given to truly democratizing AI and on the balance between accessibility and performance.
1 reply
-
Models such as LaMini have recently come out that let you run LLMs even without a high-performance computer.
Naturally, anyone with a PC that has very little RAM or an old CPU will not be able to run them. We would like to add about 3 or 4 local models, differing according to the hardware requirements they need to work.
Open-source models DO NOT have GPT-3.5 performance. They are far inferior.
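A minimal sketch of how tiered selection among local models might look. The model names are real LaMini checkpoints on Hugging Face, but the RAM thresholds and the helper function are illustrative assumptions, not part of any existing implementation:

```python
from typing import Optional

# Hypothetical tiers: minimum available RAM (GB) mapped to a local model.
# Thresholds are illustrative guesses, not measured requirements.
MODEL_TIERS = [
    (16, "MBZUAI/LaMini-Flan-T5-783M"),
    (8, "MBZUAI/LaMini-Flan-T5-248M"),
    (4, "MBZUAI/LaMini-Flan-T5-77M"),
]

def pick_local_model(available_ram_gb: float) -> Optional[str]:
    """Return the largest model whose RAM threshold fits, or None if none fit."""
    for min_ram, model in MODEL_TIERS:
        if available_ram_gb >= min_ram:
            return model
    return None
```

On a 32 GB machine this would pick the 783M checkpoint, while a 2 GB machine would get `None`, matching the point above that very constrained hardware cannot run even the small models.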
9 votes