Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and minimal learning curve.
Updated May 17, 2024 - Python
Auto Complete anything using a gguf model
This repository contains code for a Telegram bot that uses GPT-based language models to generate responses to user queries. It includes one Python script that builds a vector index from a corpus of text files, and another that runs the Telegram bot. Users can add their own text files to the data folder and use the bot to generate responses.
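The indexing step described above can be sketched in plain Python. This is a toy illustration under stated assumptions: a real bot of this kind would embed documents with a language model, whereas here a simple bag-of-words vector and cosine similarity stand in for the embedding; the file names and corpus contents are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": lowercase word counts. A stand-in for the
    # model-based sentence embeddings a real vector index would use.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(docs):
    # One vector per text file from the data folder.
    return [(name, embed(body)) for name, body in docs.items()]

def query(index, question, top_k=1):
    # Rank indexed files by similarity to the user's question.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical corpus standing in for the user's data folder.
docs = {
    "llamas.txt": "llamas are domesticated camelids from South America",
    "gpus.txt": "consumer GPUs can run quantized language models",
}
index = build_index(docs)
print(query(index, "which file talks about llamas?"))  # → ['llamas.txt']
```

In the actual bot, the retrieved files would then be passed to the GPT-based model as context for generating the reply, rather than returned directly.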