An intelligent LLM Router that automatically directs user prompts to the most cost-effective LLM based on task type and user preferences
- A routing function that classifies each prompt's type and sends it to the most cost-effective LLM
- A web app that lets users test prompts and view the selected model, latency, and cost per request
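The routing function described above could be sketched roughly as follows. This is a minimal illustration only: the `classifyPrompt` heuristics, the model names, and the per-token costs are assumptions for the sketch, not the project's actual implementation.

```javascript
// Hypothetical cost table (USD per 1K tokens) — illustrative values only.
const MODELS = {
  "gpt-4o": { costPer1k: 0.005 },
  "gpt-4o-mini": { costPer1k: 0.00015 },
};

// Very rough keyword-based classifier — a stand-in for a real one.
function classifyPrompt(prompt) {
  const p = prompt.toLowerCase();
  if (/\b(prove|derive|debug|refactor|analyze)\b/.test(p)) return "complex";
  return "simple";
}

// Route simple tasks to the cheaper model, complex ones to the stronger model.
function routePrompt(prompt) {
  const type = classifyPrompt(prompt);
  const model = type === "complex" ? "gpt-4o" : "gpt-4o-mini";
  return { model, type, costPer1k: MODELS[model].costPer1k };
}
```

A real classifier might itself call a small, cheap LLM rather than match keywords; the shape of the return value (chosen model plus cost metadata) is what the web app would surface per request.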
- React
- Next.js
- JavaScript
- REST APIs
- LLM (Large Language Models)
Create a .env file in the LLM-Router directory with the following values:
OPENAI_API_KEY=your_openai_api_key_here

Follow these steps to install and run the project locally:
- Clone the repository:
git clone https://github.com/noor188/LLM-Router.git
cd LLM-Router
- Install dependencies:
npm install
# or
yarn install
- Add environment variables:
Create a .env file in the root directory and copy the environment variable keys from above.
- Run the development server:
npm run dev
# or
yarn dev
# or
pnpm dev
- Visit your app:
Open your browser and go to http://localhost:3000
- Add customizable routing rules so users can prioritize cost, latency, or quality depending on their needs
- Run the routing function through an evaluation set and optimize its performance
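The evaluation-set idea above could start as simply as scoring the router against a handful of labeled prompts. Everything in this sketch is hypothetical — the stub router, the labeled examples, and accuracy as the metric are placeholders for whatever the project adopts:

```javascript
// Stub router — stands in for the real routing function.
function routePrompt(prompt) {
  return /\b(debug|prove|refactor)\b/.test(prompt.toLowerCase())
    ? "gpt-4o"
    : "gpt-4o-mini";
}

// Tiny labeled evaluation set: each prompt paired with the model a human would pick.
const evalSet = [
  { prompt: "Translate 'hello' to Spanish", expected: "gpt-4o-mini" },
  { prompt: "Debug this null pointer exception", expected: "gpt-4o" },
  { prompt: "Refactor this function for readability", expected: "gpt-4o" },
];

// Accuracy: fraction of prompts routed to the expected model.
function evaluate(set) {
  const correct = set.filter((ex) => routePrompt(ex.prompt) === ex.expected).length;
  return correct / set.length;
}
```

Beyond routing accuracy, the same harness could log latency and cost per request, which maps directly onto the customizable cost/latency/quality priorities mentioned above.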
