# Document Summarization Application
Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, legal documents, multimedia documents, and other types of documents. Suppose you have a set of documents (PDFs, Notion pages, customer questions, multimedia files, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference.

The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

Based on whether you want to use Docker or Kubernetes, follow the instructions below. Currently we support deploying Document Summarization services with Docker Compose.

### Required Models

The default model is `Intel/neural-chat-7b-v3-3`. Change the `LLM_MODEL_ID` environment variable in the commands below if you want to use another model.
```bash
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
```
When using gated models, you also need to provide a [HuggingFace token](https://huggingface.co/docs/hub/security-tokens) in the `HUGGINGFACEHUB_API_TOKEN` environment variable.
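
For example, a minimal sketch (the token value is a placeholder; substitute your own):

```bash
# Placeholder value: replace with your own HuggingFace access token.
export HUGGINGFACEHUB_API_TOKEN="hf_..."
```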

### Setup Environment Variable

To set up environment variables for deploying Document Summarization services, follow these steps:
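
The cURL examples later in this README reference `${host_ip}`; a minimal sketch for setting it, assuming a single-host Linux deployment:

```bash
# Assumption: all services run on this machine; use its primary IP address.
export host_ip=$(hostname -I | awk '{print $1}')
```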

#### Deploy on Gaudi

Follow the instructions provided in the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build Docker images from source. Once the images are built, run the following command to start the services:
```bash
cd GenAIExamples/DocSum/docker_compose/intel/hpu/gaudi/
docker compose -f compose.yaml up -d
```
Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml).
> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.
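
Once the containers are started, a quick way to confirm they came up (a sketch using standard Docker Compose commands, run from the same directory):

```bash
# All DocSum services should show a running state.
docker compose -f compose.yaml ps

# Tail recent logs across services to spot startup errors.
docker compose -f compose.yaml logs --tail=20
```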
#### Deploy on Xeon
Follow the instructions provided in the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) to build Docker images from source. Once the images are built, run the following command to start the services:
```bash
cd GenAIExamples/DocSum/docker_compose/intel/cpu/xeon/
docker compose -f compose.yaml up -d
```
Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).
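
When you are finished with a deployment (on either platform), the stack can be stopped from the same directory; for example:

```bash
# Stop and remove the DocSum containers and their network.
docker compose -f compose.yaml down
```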

There are two ways of consuming the Document Summarization Service:
1. Use cURL command on terminal

   Text:

   ```bash
   curl -X POST http://${host_ip}:8888/v1/docsum \
      -H "Content-Type: application/json" \
      -d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'

   # Use English mode (default).
   curl http://${host_ip}:8888/v1/docsum \
      -H "Content-Type: multipart/form-data" \
      -F "type=text" \
      -F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
      -F "max_tokens=32" \
      -F "language=en" \
      -F "stream=true"
   ```
-d '{"type": "video", "messages": "convert your video to base64 data type"}'
213
+
214
+
curl http://${host_ip}:8888/v1/docsum \
215
+
-H "Content-Type: multipart/form-data" \
216
+
-F "type=video" \
217
+
-F "messages=convert your video to base64 data type" \
218
+
-F "max_tokens=32" \
219
+
-F "language=en" \
220
+
-F "stream=true"
174
221
```
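
   The literal message "convert your video to base64 data type" is a placeholder. One way to supply real data, assuming a local `input.mp4` and GNU coreutils `base64` (a sketch, not part of the official instructions):

   ```bash
   # Encode the video as a single base64 line, then pass it as the message.
   base64 -w 0 input.mp4 > video_b64.txt
   curl http://${host_ip}:8888/v1/docsum \
      -H "Content-Type: multipart/form-data" \
      -F "type=video" \
      -F "messages=<video_b64.txt" \
      -F "max_tokens=32" \
      -F "language=en" \
      -F "stream=true"
   ```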
2. Access via frontend

## Troubleshooting

1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:
   ```bash
   curl http://${host_ip}:8008/generate \
     -X POST \
     -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```
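
   If the request hangs or is misrouted, a local HTTP proxy may be intercepting traffic to `${host_ip}`; one thing to try is bypassing the proxy for the single call:

   ```bash
   # Bypass any configured HTTP proxy for this one request.
   http_proxy="" curl http://${host_ip}:8008/generate \
     -X POST \
     -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```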