
More local inference support #23

Closed
1 of 3 tasks
RobinQu opened this issue Jun 16, 2024 · 2 comments
RobinQu commented Jun 16, 2024

Todos

  • BGE-M3 embedding support
  • Possible llama.cpp support for chat models in GGUF format
  • Initial support for parallelism: multi-instance, batching
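The "multi-instance, batching" item could be approached as a dispatcher that groups incoming texts into fixed-size batches and spreads them round-robin across several independent model instances. A minimal sketch follows; note that `MockInstance`, `BatchingDispatcher`, and the `embed_batch` signature are hypothetical stand-ins for illustration only, not this project's actual API.

```python
from dataclasses import dataclass
from itertools import cycle


@dataclass
class MockInstance:
    """Stand-in for one loaded model instance (e.g. a BGE-M3 embedder)."""
    name: str

    def embed_batch(self, texts):
        # Placeholder embedding: one-dimensional vector per text.
        return [[float(len(t))] for t in texts]


@dataclass
class BatchingDispatcher:
    """Groups requests into batches and rotates across instances."""
    instances: list
    batch_size: int = 4

    def run(self, texts):
        rr = cycle(self.instances)  # round-robin over instances
        out = []
        for i in range(0, len(texts), self.batch_size):
            batch = texts[i:i + self.batch_size]
            out.extend(next(rr).embed_batch(batch))
        return out
```

With two instances and a batch size of 4, six texts would be split into one batch of four (first instance) and one of two (second instance), keeping output order aligned with input order.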
@RobinQu RobinQu added this to the 0.1.5 milestone Jun 16, 2024

RobinQu commented Jun 30, 2024

BGE models are still slow on CPU. Will investigate a GPU version.


RobinQu commented Jul 1, 2024

Link to #27

@RobinQu RobinQu closed this as completed Jul 1, 2024
Labels: None yet
Development: No branches or pull requests
1 participant