
Setting up Backend for SLM/LLM Infrastructure #4

Description

@vinod0m

Support AI infrastructure:
- Local inference: LM Studio, Ollama
- Cloud-based infrastructure: Groq, OpenRouter
- API-based inference using LLM providers (OpenAI, Grok, Gemini, Qwen, etc.)
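One way to unify the three categories above is a small backend registry. The sketch below assumes each provider exposes an OpenAI-compatible chat-completions endpoint (LM Studio, Ollama, Groq, and OpenRouter all advertise such compatibility); the base URLs are the commonly documented defaults and should be verified per deployment, and the `Backend`/`resolve` names are illustrative, not part of any existing codebase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Backend:
    name: str
    base_url: str        # OpenAI-compatible API root (assumed default)
    needs_api_key: bool  # local servers usually accept a dummy key

# Registry covering the local, cloud, and API-provider cases from the issue.
BACKENDS = {
    "lmstudio":   Backend("lmstudio",   "http://localhost:1234/v1", False),
    "ollama":     Backend("ollama",     "http://localhost:11434/v1", False),
    "groq":       Backend("groq",       "https://api.groq.com/openai/v1", True),
    "openrouter": Backend("openrouter", "https://openrouter.ai/api/v1", True),
    "openai":     Backend("openai",     "https://api.openai.com/v1", True),
}

def resolve(name: str) -> Backend:
    """Look up a backend by name, with a clear error for unknown ones."""
    try:
        return BACKENDS[name.lower()]
    except KeyError:
        raise ValueError(
            f"unknown backend {name!r}; choose from {sorted(BACKENDS)}"
        )
```

Because the endpoints are OpenAI-compatible, a single HTTP client can then be pointed at `resolve(name).base_url`, keeping local and cloud inference behind one interface.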

Metadata

Labels

enhancement (New feature or request)
