
LLM inference speed is slow; want to improve it with Groq #7

Open
arre-ankit opened this issue May 3, 2024 · 0 comments

Comments

@arre-ankit

  • I noticed that after asking a question it usually takes roughly 2 minutes to get an answer, so I was thinking of trying other LLM inference providers like Groq to make the output closer to real-time (rough sketch of what I mean below).

  • Is this project still active, and how can I contribute to it with other open-source models?

  • I have set up the project on Google Colab (screenshot attached).

It would be great if you could guide me through the process, @gcapuzzi.
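
Here is a minimal sketch of the kind of change I have in mind, assuming the `groq` Python SDK, a `GROQ_API_KEY` environment variable, and an example model name (`llama3-8b-8192`); the exact place where the project's current LLM call would be swapped out is an assumption on my part:

```python
# Rough sketch: replace the current LLM call with Groq's hosted inference.
# Assumes `pip install groq` and a GROQ_API_KEY environment variable.
import os

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def ask(question: str) -> str:
    # Model name is just an example; any model hosted on Groq should work.
    completion = client.chat.completions.create(
        model="llama3-8b-8192",
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content

print(ask("Hello, can you answer in (near) real time?"))
```

Since Groq's chat completions endpoint follows the same shape as the OpenAI one, I think the change would mostly be a matter of swapping the client and the model name.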
