HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Get hugging for NLP now! 😊 HugNLP will be released to @HugAILab
Inference Llama 2 in one file of pure C
Specify what you want it to build, the AI asks for clarification, and then builds it.
Automating the deployment of the Takeoff Server on AWS for LLMs
Creating a workflow to train T5 language models
Experimental autonomous AI LLM & RAG IETF reviewer
Swarm Agents: an open-source agent orchestration framework built on top of the latest OpenAI Assistants API.
Code and analysis for optimizing dynamic neural networks. This project investigates and implements various optimization techniques to enhance dynamic neural networks.
Explore innovative Large Language Model (LLM) applications with Streamlit-based Proof of Concepts (POCs) 🚀. These demos showcase open-source models using Groq for cloud-based inference and LangChain for efficient orchestration 🌐. From writing assistants to blog post generators, experience AI-driven tools enhancing productivity and creativity 📚💡.
This is the repository holding code and data for "FrugalML: How to Use ML Prediction APIs More Accurately and Cheaply".
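The core idea behind FrugalML is to route each query through a cascade: call a cheap prediction API first, and pay for an expensive one only when the cheap model is not confident enough. A minimal, hypothetical sketch of that pattern (the function names and toy APIs below are illustrative stand-ins, not the paper's actual code):

```python
# FrugalML-style cascade sketch: cheap API first, expensive fallback.
# `cheap_api` and `expensive_api` are assumed to return (label, confidence).

def cascade_predict(x, cheap_api, expensive_api, threshold=0.8):
    """Return (label, which_api_answered) for input x."""
    label, confidence = cheap_api(x)
    if confidence >= threshold:
        return label, "cheap"          # confident enough: stop here, save cost
    return expensive_api(x)[0], "expensive"  # otherwise pay for the better API

# Toy stand-in APIs for demonstration.
cheap = lambda x: ("cat", 0.9) if x == "easy" else ("cat", 0.4)
expensive = lambda x: ("dog", 0.99)

print(cascade_predict("easy", cheap, expensive))  # cheap model is confident
print(cascade_predict("hard", cheap, expensive))  # falls back to expensive
```

In the paper, the confidence threshold (and which base API to try first) is learned per query class under a budget constraint; the fixed `threshold=0.8` here is just a placeholder.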
LLM Kit - Python Large Language Model Kit for generating data of your choice
AccIo - Enterprise LLM: unifying intelligence at your command!
Python-based WebSocket for CLI LLaVA inference.
EmbeddedLLM: API server for embedded device deployment. Currently supports IpexLLM/DirectML/CPU.
Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models, which elevates model reasoning by at least 70%
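Tree of Thoughts generalizes chain-of-thought prompting into a search: at each step the model proposes several candidate "thoughts", an evaluator scores them, and only the best few are expanded further. A toy sketch of that beam-style loop, assuming hypothetical `expand_fn`/`score_fn` callables in place of real LLM calls (this is not the repository's API):

```python
# Minimal Tree-of-Thoughts-style beam search over "thoughts".
# expand_fn(thought) -> list of successor thoughts (an LLM proposal step in practice)
# score_fn(thought)  -> numeric value (an LLM evaluation step in practice)

def tree_of_thoughts(root, expand_fn, score_fn, beam=2, depth=3):
    frontier = [root]
    for _ in range(depth):
        # Expand every thought in the frontier into candidate successors.
        candidates = [c for t in frontier for c in expand_fn(t)]
        if not candidates:
            break
        # Keep only the `beam` highest-scoring candidates for the next round.
        candidates.sort(key=score_fn, reverse=True)
        frontier = candidates[:beam]
    return max(frontier, key=score_fn)

# Toy example: thoughts are integers, expansion adds 1 or 2,
# and the score rewards being close to the target value 10.
best = tree_of_thoughts(
    0,
    expand_fn=lambda t: [t + 1, t + 2],
    score_fn=lambda t: -abs(10 - t),
    beam=2,
    depth=5,
)
print(best)  # reaches 10 by greedily keeping the closest candidates
```

With an actual LLM, `expand_fn` would sample continuations of a partial solution and `score_fn` would prompt the model to rate each one; the search skeleton stays the same.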
A framework for multiple LLM models to operate in a non-adversarial fashion based on the structure of a bee colony working together to maintain a hive.
A guide on how to run LLMs on intel CPUs