Commit e1b8ce0

Fix Xeon reference per its trademark (#803)
Signed-off-by: Malini Bhandaru <malini.bhandaru@intel.com>
1 parent 558ea3b commit e1b8ce0

File tree

8 files changed (+11 −11 lines changed)


AudioQnA/README.md (1 addition, 1 deletion)

@@ -4,7 +4,7 @@ AudioQnA is an example that demonstrates the integration of Generative AI (GenAI
 
 ## Deploy AudioQnA Service
 
-The AudioQnA service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processor.
+The AudioQnA service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.
 
 ### Deploy AudioQnA on Gaudi
 

ChatQnA/README.md (2 additions, 2 deletions)

@@ -95,7 +95,7 @@ flowchart LR
 
 ```
 
-This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
 
 In the below, we provide a table that describes for each microservice component in the ChatQnA architecture, the default configuration of the open source project, hardware, port, and endpoint.
 
@@ -114,7 +114,7 @@ In the below, we provide a table that describes for each microservice component
 
 ## Deploy ChatQnA Service
 
-The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
 Two types of ChatQnA pipeline are supported now: `ChatQnA with/without Rerank`. And the `ChatQnA without Rerank` pipeline (including Embedding, Retrieval, and LLM) is offered for Xeon customers who can not run rerank service on HPU yet require high performance and accuracy.
 
DocSum/README.md (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ The architecture for document summarization will be illustrated/described below:
 
 ## Deploy Document Summarization Service
 
-The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 Based on whether you want to use Docker or Kubernetes, follow the instructions below.
 
 Currently we support two ways of deploying Document Summarization services with docker compose:

FaqGen/README.md (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ Our FAQ Generation Application leverages the power of large language models (LLM
 
 ## Deploy FAQ Generation Service
 
-The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
 ### Deploy FAQ Generation on Gaudi
 

SearchQnA/README.md (1 addition, 1 deletion)

@@ -22,7 +22,7 @@ The workflow falls into the following architecture:
 
 ## Deploy SearchQnA Service
 
-The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
 Currently we support two ways of deploying SearchQnA services with docker compose:
 

Translation/README.md (2 additions, 2 deletions)

@@ -6,11 +6,11 @@ Translation architecture shows below:
 
 ![architecture](./assets/img/translation_architecture.png)
 
-This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
 
 ## Deploy Translation Service
 
-The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
 ### Deploy Translation on Gaudi
 

VideoQnA/README.md (2 additions, 2 deletions)

@@ -85,14 +85,14 @@ flowchart LR
     DP <-.->|d|VDB
 ```
 
-- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel XEON Scalable Processors.
+- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel Xeon Scalable Processors.
 - Video Processing: Videos are converted into feature vectors using mean aggregation and stored in the VDMS vector store.
 - Query Handling: When a user submits a query, the system performs a similarity search in the vector store to retrieve the best-matching videos.
 - Contextual Inference: The retrieved videos are then sent to the Large Vision Model (LVM) for inference, providing supplemental context for the query.
 
 ## Deploy VideoQnA Service
 
-The VideoQnA service can be effortlessly deployed on Intel XEON Scalable Processors.
+The VideoQnA service can be effortlessly deployed on Intel Xeon Scalable Processors.
 
 ### Required Models
 

VisualQnA/README.md (1 addition, 1 deletion)

@@ -30,7 +30,7 @@ You can choose other llava-next models, such as `llava-hf/llava-v1.6-vicuna-13b-
 
 ## Deploy VisualQnA Service
 
-The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
 
 Currently we support deploying VisualQnA services with docker compose.
 
