Commit 96d5cd9

Authored by kevinintel, XinyaoWa, and pre-commit-ci[bot]
Update supported_examples (#825)
Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: Xinyao Wang <xinyao.wang@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 0bb0abb commit 96d5cd9

File tree: 1 file changed (+89 −8 lines)


supported_examples.md

Lines changed: 89 additions & 8 deletions
@@ -6,13 +6,58 @@ This document introduces the supported examples of GenAIExamples. The supported
 
 [ChatQnA](./ChatQnA/README.md) is an example of chatbot for question and answering through retrieval augmented generation (RAG).
 
-| Framework | LLM | Embedding | Vector Database | Serving | HW | Description |
-| --------- | --- | --------- | --------------- | ------- | -- | ----------- |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2/GPU | Chatbot |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Chroma](https://www.trychroma.com/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Qdrant](https://qdrant.tech/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
+<table>
+<tr>
+<th>Framework</th>
+<th>LLM</th>
+<th>Embedding</th>
+<th>Vector Database</th>
+<th>Serving</th>
+<th>HW</th>
+<th>Description</th>
+</tr>
+<tr>
+<td rowspan="6"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai/">LlamaIndex</a></td>
+<td> <a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
+<td> <a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
+<td> <a href="https://redis.io/">Redis</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
+<td> Xeon/Gaudi2/GPU</td>
+<td> Chatbot</td>
+</tr>
+<tr>
+<td> <a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
+<td> <a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
+<td> <a href="https://www.trychroma.com/">Chroma</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
+<td> Xeon/Gaudi2</td>
+<td> Chatbot</td>
+</tr>
+<tr>
+<td> <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
+<td> <a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
+<td> <a href="https://redis.io/">Redis</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
+<td> Xeon/Gaudi2</td>
+<td> Chatbot</td>
+</tr>
+<tr>
+<td> <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
+<td> <a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
+<td> <a href="https://qdrant.tech/">Qdrant</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
+<td> Xeon/Gaudi2</td>
+<td> Chatbot</td>
+</tr>
+<tr>
+<td> <a href="https://huggingface.co/Qwen/Qwen2-7B">Qwen2-7B</a></td>
+<td> <a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
+<td> <a href="https://redis.io/">Redis</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
+<td> Xeon/Gaudi2</td>
+<td> Chatbot</td>
+</tr>
+</table>
 
 ### CodeGen
 
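For orientation, the embed → retrieve → generate flow that the ChatQnA table above describes can be sketched with toy stand-ins. The character-frequency embedding and helper names below are illustrative only; a real deployment would call TEI for embeddings, a vector database such as Redis for retrieval, and TGI for generation.

```python
def embed(text: str) -> list[float]:
    """Toy embedding: normalized character-frequency vector over a-z
    (a stand-in for a TEI-served model such as BGE-Base)."""
    counts = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1.0
    norm = sum(c * c for c in counts) ** 0.5 or 1.0
    return [c / norm for c in counts]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query embedding
    (a stand-in for a vector-database lookup)."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: -sum(a * b for a, b in zip(q, embed(d))))
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stub LLM call: a real deployment would send this prompt to TGI."""
    return f"Answer to {query!r} using context: {'; '.join(context)}"

docs = ["Gaudi2 is an AI accelerator.", "Redis is an in-memory database."]
print(generate("What is Gaudi2?", retrieve("What is Gaudi2?", docs)))
```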
@@ -101,7 +146,7 @@ The DocRetriever example demonstrates how to match user queries with free-text r
 
 | Framework | Embedding | Vector Database | Serving | HW | Description |
 | --------- | --------- | --------------- | ------- | -- | ----------- |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Document Retrieval Service |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Document Retrieval service |
 
 ### AgentQnA
 
@@ -110,3 +155,39 @@ The AgentQnA example demonstrates a hierarchical, multi-agent system designed fo
 The worker agent uses an open-source web-search tool (DuckDuckGo), and the agents use OpenAI GPT-4o-mini as the LLM backend.
 
 > **_NOTE:_** This example is in active development. The code structure of these use cases is subject to change.
+
+### AudioQnA
+
+The AudioQnA example demonstrates the integration of Generative AI (GenAI) models for performing question-answering (QnA) on audio files, with the added functionality of Text-to-Speech (TTS) for generating spoken responses. The example showcases how to convert audio input to text using Automatic Speech Recognition (ASR), generate answers to user queries using a language model, and then convert those answers back to speech using TTS.
+
+<table>
+<tr>
+<th>ASR</th>
+<th>TTS</th>
+<th>LLM</th>
+<th>HW</th>
+<th>Description</th>
+</tr>
+<tr>
+<td> <a href="https://huggingface.co/openai/whisper-small">openai/whisper-small</a></td>
+<td> <a href="https://huggingface.co/microsoft/speecht5_tts">microsoft/SpeechT5</a></td>
+<td> <a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
+<td> Xeon/Gaudi2</td>
+<td> Talkingbot service</td>
+</tr>
+</table>
+
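The ASR → LLM → TTS chain that AudioQnA describes can be sketched end to end with stub stages. All three functions below are toy stand-ins, not the example's code: in the real pipeline, ASR is whisper-small, answers come from a TGI-served model, and SpeechT5 synthesizes the audio.

```python
def asr(audio: bytes) -> str:
    """Stub ASR: treat the payload as UTF-8 text (a real ASR decodes speech)."""
    return audio.decode("utf-8")

def llm_answer(question: str) -> str:
    """Stub LLM: return a canned answer (a real deployment queries TGI)."""
    return f"Answer to: {question}"

def tts(text: str) -> bytes:
    """Stub TTS: encode the answer back to bytes (a real TTS synthesizes audio)."""
    return text.encode("utf-8")

def audio_qna(audio_in: bytes) -> bytes:
    """Chain the three stages in the order the AudioQnA pipeline uses."""
    return tts(llm_answer(asr(audio_in)))
```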
+### FaqGen
+
+The FAQ Generation Application uses large language models (LLMs) to automatically generate comprehensive, natural-sounding frequently asked questions (FAQs) from documents, legal texts, customer queries, and other sources. In this example use case, LangChain implements FAQ generation, and LLM inference is served by Text Generation Inference on Intel Xeon and Gaudi2 processors.
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |
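A minimal sketch of the prompt-then-parse pattern FAQ generation relies on. The template wording and the `Q:`/`A:` response format below are assumptions for illustration, not the FaqGen example's actual prompt or parser.

```python
def build_faq_prompt(document: str, num_questions: int = 3) -> str:
    """Compose an instruction prompt asking an LLM to draft FAQs from a document
    (hypothetical template; a real setup would send this to TGI)."""
    return (
        f"Create {num_questions} frequently asked questions, each with a concise "
        f"answer, based strictly on the text below.\n\n---\n{document}\n---"
    )

def parse_faqs(raw: str) -> list[tuple[str, str]]:
    """Parse 'Q: ... / A: ...' pairs from a model response (format assumed)."""
    pairs, question = [], None
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs
```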
+
+### MultimodalQnA
+
+[MultimodalQnA](./MultimodalQnA/README.md) addresses your questions by dynamically fetching the most pertinent multimodal information (frames, transcripts, and/or captions) from your collection of videos.
+
+### ProductivitySuite
+
+[Productivity Suite](./ProductivitySuite/README.md) streamlines your workflow to boost productivity. It leverages the OPEA microservices to provide a comprehensive suite of features to cater to the diverse needs of modern enterprises.

0 commit comments