FaqGen/README.md
Our FAQ Generation Application leverages the power of large language models (LLMs) to revolutionize the way you interact with and comprehend complex textual data. By harnessing cutting-edge natural language processing techniques, our application can automatically generate comprehensive and natural-sounding frequently asked questions (FAQs) from your documents, legal texts, customer queries, and other sources. In this example use case, we utilize LangChain to implement FAQ Generation and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
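As a rough sketch of how such a pipeline can be wired together (not the actual FaqGen microservice code), the snippet below points a LangChain prompt at a TGI endpoint to draft FAQs from a passage. The endpoint URL, generation parameters, and prompt wording are illustrative assumptions.

```python
# Illustrative sketch only -- not the FaqGen microservice implementation.
# Assumes a TGI server is already running locally and that the
# langchain-huggingface integration package is installed.
from langchain_core.prompts import PromptTemplate
from langchain_huggingface import HuggingFaceEndpoint

# Assumed local TGI endpoint; replace with your deployment's URL.
llm = HuggingFaceEndpoint(
    endpoint_url="http://localhost:8080",
    max_new_tokens=512,
    temperature=0.1,
)

# Hypothetical prompt; the real service defines its own prompt template.
prompt = PromptTemplate.from_template(
    "Create a list of frequently asked questions with concise answers, "
    "based only on the following text:\n\n{document}\n\nFAQs:"
)

chain = prompt | llm
print(chain.invoke({"document": "Paste the source document text here."}))
```

In the deployed example these pieces run as separate containers, so the TGI server and the LangChain-based LLM microservice can be scaled and swapped independently.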
The FaqGen example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The flow chart below shows the information flow between different microservices for this example.
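Once the composed service is running, clients talk to a single HTTP endpoint rather than to the individual microservices. The call below is a minimal sketch of that interaction; the host, port, route, and payload shape are assumptions based on a typical Docker Compose deployment of this example, so check your deployment files for the actual values.

```python
# Hypothetical client call to the composed FaqGen service. The endpoint
# (localhost:8888, /v1/faqgen) and payload shape are assumptions -- verify
# them against the ports and routes defined in your deployment.
import requests

response = requests.post(
    "http://localhost:8888/v1/faqgen",
    json={"messages": "Paste the document text you want FAQs generated from."},
    timeout=120,
)
response.raise_for_status()
print(response.text)  # generated FAQ list returned by the service
```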
```mermaid
---
config:
  flowchart:
    nodeSpacing: 400
    rankSpacing: 100
    curve: linear
  themeVariables:
    fontSize: 50px
---
flowchart LR
    %% Colors %%
    classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5