50 changes: 0 additions & 50 deletions .github/workflows/Translation.yml

This file was deleted.

4 changes: 2 additions & 2 deletions .github/workflows/scripts/build_push.sh
@@ -46,14 +46,14 @@ function docker_build() {
# $1 is like "apple orange pear"
for MEGA_SVC in $1; do
case $MEGA_SVC in
"ChatQnA"|"CodeGen"|"CodeTrans"|"DocSum")
"ChatQnA"|"CodeGen"|"CodeTrans"|"DocSum"|"Translation")
cd $MEGA_SVC/docker
IMAGE_NAME="$(getImagenameFromMega $MEGA_SVC)"
docker_build ${IMAGE_NAME}
cd ui
docker_build ${IMAGE_NAME}-ui docker/Dockerfile
;;
"AudioQnA"|"SearchQnA"|"Translation"|"VisualQnA")
"AudioQnA"|"SearchQnA"|"VisualQnA")
echo "Not supported yet"
;;
*)
48 changes: 9 additions & 39 deletions Translation/README.md
@@ -1,51 +1,21 @@
# Language Translation
# Translation Application

Language Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text.

The workflow falls into the following architecture:
The Translation architecture is shown below:

![architecture](./assets/img/translation_architecture.png)

# Start Backend Service
This Translation use case performs Language Translation inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.

1. Start the TGI Service to deploy your LLM
# Deploy Translation Service

```sh
cd serving/tgi_gaudi
bash build_docker.sh
bash launch_tgi_service.sh
```
The Translation service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

The `launch_tgi_service.sh` script uses `8080` as the TGI service's port by default. Please replace it if any port conflict is detected.
## Deploy Translation on Gaudi

2. Start the Language Translation Service
Refer to the [Gaudi Guide](./docker/gaudi/README.md) for instructions on deploying Translation on Gaudi.

```sh
cd langchain/docker
bash build_docker.sh
docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
```
## Deploy Translation on Xeon

**Note**: Set the following parameters before running the above command:

- `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `<ip of your machine>:<port of your TGI service>`.
- `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, which can be generated [here](https://huggingface.co/settings/tokens).
- `SERVER_PORT`: The port of the Translation service on the host.

3. Quick Test

```sh
curl http://localhost:8000/v1/translation \
-X POST \
-d '{"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}' \
-H 'Content-Type: application/json'
```

Language shortcodes are also supported:

```sh
curl http://localhost:8000/v1/translation \
-X POST \
-d '{"language_from": "de","language_to": "en","source_language": "Maschinelles Lernen"}' \
-H 'Content-Type: application/json'
```
Refer to the [Xeon Guide](./docker/xeon/README.md) for instructions on deploying Translation on Xeon.
51 changes: 51 additions & 0 deletions Translation/deprecated/README.md
@@ -0,0 +1,51 @@
# Language Translation

Language Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text.

The workflow falls into the following architecture:

![architecture](../assets/img/translation_architecture.png)

# Start Backend Service

1. Start the TGI Service to deploy your LLM

```sh
cd serving/tgi_gaudi
bash build_docker.sh
bash launch_tgi_service.sh
```

The `launch_tgi_service.sh` script uses `8080` as the TGI service's port by default. Please replace it if any port conflict is detected.
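
To check whether port `8080` is already in use before launching, you can run a quick check with standard tools (`ss` is used here; `netstat -ltn` works similarly on older systems):

```sh
# No output from grep means the port is free
ss -ltn | grep ':8080' || echo "port 8080 is free"
```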

2. Start the Language Translation Service

```sh
cd langchain/docker
bash build_docker.sh
docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
```

**Note**: Set the following parameters before running the above command (see the example after this list):

- `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `<ip of your machine>:<port of your TGI service>`.
- `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, which can be generated [here](https://huggingface.co/settings/tokens).
- `SERVER_PORT`: The port of the Translation service on the host.
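
For example, a typical setup might look like the following (the values are placeholders, not working credentials — substitute your own machine's IP, TGI port, and token):

```sh
export TGI_ENDPOINT="192.168.1.2:8080"    # <ip of your machine>:<port of your TGI service>
export HUGGINGFACEHUB_API_TOKEN="hf_xxx"  # your HuggingFace hub API token
```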

3. Quick Test

```sh
curl http://localhost:8000/v1/translation \
-X POST \
-d '{"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}' \
-H 'Content-Type: application/json'
```

Language shortcodes are also supported:

```sh
curl http://localhost:8000/v1/translation \
-X POST \
-d '{"language_from": "de","language_to": "en","source_language": "Maschinelles Lernen"}' \
-H 'Content-Type: application/json'
```
42 changes: 42 additions & 0 deletions Translation/docker/Dockerfile
@@ -0,0 +1,42 @@
# Copyright (c) 2024 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


FROM langchain/langchain:latest

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev \
vim \
git

RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/

RUN cd /home/user/ && \
git clone https://github.com/opea-project/GenAIComps.git

RUN cd /home/user/GenAIComps && pip install --no-cache-dir --upgrade pip && \
pip install -r /home/user/GenAIComps/requirements.txt

COPY ./translation.py /home/user/translation.py

ENV PYTHONPATH=$PYTHONPATH:/home/user/GenAIComps

USER user

WORKDIR /home/user

ENTRYPOINT ["python", "translation.py"]
104 changes: 104 additions & 0 deletions Translation/docker/gaudi/README.md
@@ -0,0 +1,104 @@
# Build MegaService of Translation on Gaudi

This document outlines the deployment process for a Translation application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate the microservices. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.

## 🚀 Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```

### 1. Build LLM Image

```bash
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```

### 2. Build MegaService Docker Image

To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `translation.py` Python script. Build the MegaService Docker image using the command below:

```bash
git clone https://github.com/opea-project/GenAIExamples
cd GenAIExamples/Translation/docker
docker build -t opea/translation:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
```

### 3. Build UI Docker Image

Construct the frontend Docker image using the command below:

```bash
cd GenAIExamples/Translation/docker/ui/
docker build -t opea/translation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
```

Then run the command `docker images`; you should see the following four Docker images:

1. `ghcr.io/huggingface/tgi-gaudi:1.2.1`
2. `opea/llm-tgi:latest`
3. `opea/translation:latest`
4. `opea/translation-ui:latest`

## 🚀 Start Microservices

### Setup Environment Variables

Since the `docker_compose.yaml` will consume some environment variables, you need to set them up in advance as shown below.

```bash
export http_proxy=${your_http_proxy}
export https_proxy=${your_https_proxy}
export LLM_MODEL_ID="haoranxu/ALMA-13B"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/translation"
```

Note: Please replace `host_ip` with your external IP address; do not use localhost.
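
One common way to set `host_ip` automatically is shown below (this assumes the first address reported by `hostname -I` is the machine's external IP; verify this on multi-homed hosts):

```bash
export host_ip=$(hostname -I | awk '{print $1}')
echo ${host_ip}
```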

### Start Microservice Docker Containers

```bash
docker compose -f docker_compose.yaml up -d
```
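
To confirm the containers started successfully, you can check their status and recent logs (standard Docker Compose commands; the service names are defined in `docker_compose.yaml`):

```bash
docker compose -f docker_compose.yaml ps
docker compose -f docker_compose.yaml logs --tail=50
```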

### Validate Microservices

1. TGI Service

```bash
curl http://${host_ip}:8008/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":64, "do_sample": true}}' \
-H 'Content-Type: application/json'
```

2. LLM Microservice

```bash
curl http://${host_ip}:9000/v1/chat/completions \
-X POST \
-d '{"query":"Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"}' \
-H 'Content-Type: application/json'
```

3. MegaService

```bash
curl http://${host_ip}:8888/v1/translation -H "Content-Type: application/json" -d '{
"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}'
```

Once all of the aforementioned microservices have been validated, the Translation mega-service is ready to use.

## 🚀 Launch the UI

Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
![project-screenshot](../../assets/img/trans_ui_init.png)
![project-screenshot](../../assets/img/trans_ui_select.png)