intel-analytics
Pinned repositories
- continue (Public, forked from continuedev/continue)
⏩ Open-source VS Code and JetBrains extensions that enable you to easily create your own modular AI software development system
- ipex-llm (Public)
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max); integrates seamlessly with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
- langchain (Public, forked from langchain-ai/langchain)
🦜🔗 Build context-aware reasoning applications
- text-generation-webui (Public, forked from oobabooga/text-generation-webui)
A Gradio web UI for running local LLMs on Intel GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) using IPEX-LLM.
- Langchain-Chatchat (Public, forked from chatchat-space/Langchain-Chatchat)
Knowledge-base QA using a RAG pipeline on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) with IPEX-LLM.
- private-gpt (Public, forked from zylon-ai/private-gpt)
Interact with your documents using the power of GPT, 100% privately, with no data leaks.
- llama_index (Public, forked from run-llama/llama_index)
LlamaIndex is a data framework for your LLM applications.
- ipex-llm-tutorial (Public)
Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm.