Commit eb91d1f

Docsum (#1095)
Signed-off-by: Mustafa <mustafa.cetin@intel.com>
Signed-off-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: Harsha Ramayanam <harsha.ramayanam@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: XinyaoWa <xinyao.wang@intel.com>
Co-authored-by: Abolfazl Shahbazi <12436063+ashahba@users.noreply.github.com>
Co-authored-by: chen, suyue <suyue.chen@intel.com>
1 parent 2587179 commit eb91d1f

22 files changed: +1389 −272 lines changed

DocSum/Dockerfile

Lines changed: 1 addition & 2 deletions
@@ -1,5 +1,3 @@
-
-
 # Copyright (C) 2024 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
@@ -30,3 +28,4 @@ USER user
 WORKDIR /home/user
 
 ENTRYPOINT ["python", "docsum.py"]
+
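
The surviving `ENTRYPOINT` context lines mean the container launches the megaservice directly on start. A minimal run sketch — the `opea/docsum:latest` tag comes from the old README text later in this commit, and the 8888 port mapping is an assumption based on the gateway examples in the README diff:

```bash
# Start the DocSum megaservice container; the ENTRYPOINT runs docsum.py.
# Image tag is from the README; the 8888 port mapping is an assumption.
docker run -d --name docsum -p 8888:8888 opea/docsum:latest
```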

DocSum/README.md

Lines changed: 78 additions & 32 deletions
@@ -1,37 +1,23 @@
 # Document Summarization Application
 
-Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, legal documents and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference.
-
-The architecture for document summarization will be illustrated/described below:
+Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, legal documents, multimedia documents, and other types of documents. Suppose you have a set of documents (PDFs, Notion pages, customer questions, multimedia files, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference.
 
 ![Architecture](./assets/img/docsum_architecture.png)
 
-![Workflow](./assets/img/docsum_workflow.png)
-
 ## Deploy Document Summarization Service
 
 The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
-Based on whether you want to use Docker or Kubernetes, follow the instructions below.
-
-Currently we support two ways of deploying Document Summarization services with docker compose:
-
-1. Start services using the docker image on `docker hub`:
-
-   ```bash
-   docker pull opea/docsum:latest
-   ```
-
-2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose)
+Based on whether you want to use Docker or Kubernetes, follow the instructions below. Currently we support deploying Document Summarization services with docker compose.
 
 ### Required Models
 
-We set default model as "Intel/neural-chat-7b-v3-3", change "LLM_MODEL_ID" in "docker_compose/set_env.sh" if you want to use other models.
+The default model is "Intel/neural-chat-7b-v3-3". Change the "LLM_MODEL_ID" environment variable in the commands below if you want to use another model.
 
-```
+```bash
 export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
 ```
 
-If use gated models, you also need to provide [huggingface token](https://huggingface.co/docs/hub/security-tokens) to "HUGGINGFACEHUB_API_TOKEN" environment variable.
+When using gated models, you also need to provide a [HuggingFace token](https://huggingface.co/docs/hub/security-tokens) in the "HUGGINGFACEHUB_API_TOKEN" environment variable.
 
 ### Setup Environment Variable
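
The hunk above inlines model selection and points gated models at "HUGGINGFACEHUB_API_TOKEN". A minimal sketch of the resulting shell setup before running compose — the token value is a placeholder:

```bash
# Select the summarization model (the README default).
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"

# Only required for gated models; replace the placeholder with a real
# HuggingFace access token.
export HUGGINGFACEHUB_API_TOKEN="<your-hf-token>"
```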

@@ -57,32 +43,34 @@ To set up environment variables for deploying Document Summarization services, f
 3. Set up other environment variables:
 
    ```bash
-   source ./docker_compose/set_env.sh
+   source GenAIExamples/DocSum/docker_compose/set_env.sh
    ```
 
 ### Deploy using Docker
 
 #### Deploy on Gaudi
 
-Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml).
+Follow the instructions provided in the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build Docker images from source. Once the images are built, run the following command to start the services:
 
 ```bash
 cd GenAIExamples/DocSum/docker_compose/intel/hpu/gaudi/
 docker compose -f compose.yaml up -d
 ```
 
-Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
+Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml).
+
+> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
 
 #### Deploy on Xeon
 
-Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).
+Follow the instructions provided in the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) to build Docker images from source. Once the images are built, run the following command to start the services:
 
 ```bash
 cd GenAIExamples/DocSum/docker_compose/intel/cpu/xeon/
-docker compose up -d
+docker compose -f compose.yaml up -d
 ```
 
-Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.
+Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).
 
 ### Deploy using Kubernetes with GMC
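
After either `docker compose -f compose.yaml up -d` invocation above, a quick way to confirm the services came up — generic Docker Compose commands, not part of this diff:

```bash
# Show the status of every service in this compose project.
docker compose -f compose.yaml ps

# Tail the logs of a struggling service; <service-name> is a placeholder
# for one of the names printed by the ps command above.
docker compose -f compose.yaml logs -f <service-name>
```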

@@ -120,9 +108,12 @@ flowchart LR
     classDef invisible fill:transparent,stroke:transparent;
     style DocSum-MegaService stroke:#000000
+
+
     %% Subgraphs %%
     subgraph DocSum-MegaService["DocSum MegaService "]
         direction LR
+        M2T([Multimedia2text MicroService]):::blue
         LLM([LLM MicroService]):::blue
     end
     subgraph UserInterface[" User Interface "]
@@ -132,20 +123,24 @@ flowchart LR
     end
 
 
-    LLM_gen{{LLM Service <br>}}
+    A2T_SRV{{Audio2Text service<br>}}
+    V2A_SRV{{Video2Audio service<br>}}
+    WSP_SRV{{whisper service<br>}}
     GW([DocSum GateWay<br>]):::orange
 
 
     %% Questions interaction
     direction LR
-    a[User Input Query] --> UI
+    a[User Document for Summarization] --> UI
     UI --> GW
     GW <==> DocSum-MegaService
-
+    M2T ==> LLM
 
     %% Embedding service flow
     direction LR
-    LLM <-.-> LLM_gen
+    M2T .-> V2A_SRV
+    M2T <-.-> A2T_SRV <-.-> WSP_SRV
+    V2A_SRV .-> A2T_SRV
 
 ```
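
The reworked flowchart routes video through a Video2Audio hop and then a whisper-backed Audio2Text hop before the LLM summarizes the transcript. Conceptually, the video-to-audio step does something like the following — an illustrative ffmpeg command, not the service's actual implementation:

```bash
# Extract a 16 kHz mono WAV track from a video, roughly the shape of
# input a whisper-style transcriber expects. File names are placeholders.
ffmpeg -i input.mp4 -vn -ac 1 -ar 16000 -acodec pcm_s16le output.wav
```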

@@ -155,22 +150,74 @@ Two ways of consuming Document Summarization Service:
 
 1. Use cURL command on terminal
 
+   Text:
+
    ```bash
-   #Use English mode (default).
+   curl -X POST http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: application/json" \
+     -d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'
+
+   # Use English mode (default).
    curl http://${host_ip}:8888/v1/docsum \
      -H "Content-Type: multipart/form-data" \
+     -F "type=text" \
      -F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
      -F "max_tokens=32" \
      -F "language=en" \
      -F "stream=true"
 
-   #Use Chinese mode.
+   # Use Chinese mode.
    curl http://${host_ip}:8888/v1/docsum \
      -H "Content-Type: multipart/form-data" \
+     -F "type=text" \
      -F "messages=2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。" \
      -F "max_tokens=32" \
      -F "language=zh" \
      -F "stream=true"
+
+   # Upload file
+   curl http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: multipart/form-data" \
+     -F "type=text" \
+     -F "messages=" \
+     -F "files=@/path to your file (.txt, .docx, .pdf)" \
+     -F "max_tokens=32" \
+     -F "language=en" \
+     -F "stream=true"
+   ```
+
+   > Audio and video file uploads are not supported in DocSum via cURL request; please use the Gradio UI.
+
+   Audio:
+
+   ```bash
+   curl -X POST http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: application/json" \
+     -d '{"type": "audio", "messages": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA"}'
+
+   curl http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: multipart/form-data" \
+     -F "type=audio" \
+     -F "messages=UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA" \
+     -F "max_tokens=32" \
+     -F "language=en" \
+     -F "stream=true"
+   ```
+
+   Video:
+
+   ```bash
+   curl -X POST http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: application/json" \
+     -d '{"type": "video", "messages": "convert your video to base64 data type"}'
+
+   curl http://${host_ip}:8888/v1/docsum \
+     -H "Content-Type: multipart/form-data" \
+     -F "type=video" \
+     -F "messages=convert your video to base64 data type" \
+     -F "max_tokens=32" \
+     -F "language=en" \
+     -F "stream=true"
    ```
 
 2. Access via frontend
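
The audio and video variants in the hunk above expect the `messages` field to carry base64-encoded media, as the tiny WAV sample shows. One way to produce such a payload — the standard `base64` CLI, with placeholder file names:

```bash
# Produce single-line base64 strings suitable for the "messages" field.
# -w 0 disables line wrapping (GNU coreutils).
base64 -w 0 sample.wav > audio_payload.b64
base64 -w 0 sample.mp4 > video_payload.b64
```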
@@ -184,7 +231,6 @@ Two ways of consuming Document Summarization Service:
 1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:
 
    ```bash
-   http_proxy=""
    curl http://${host_ip}:8008/generate \
      -X POST \
      -d '{"inputs":"What is Deep Learning?","parameters":{"max_tokens":17, "do_sample": true}}' \
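
If the `/generate` probe above fails outright, it can help to first check that the LLM serving container is reachable at all. A minimal sketch, assuming a TGI-style server mapped to port 8008 as in the hunk above; the `/health` route is a TGI convention and an assumption here:

```bash
# Liveness check before debugging request payloads.
curl -sf http://${host_ip}:8008/health && echo "LLM service is reachable"
```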