Closed
Labels
bug (Something isn't working)
Priority
P1-Stopper
OS type
Ubuntu
Hardware type
Xeon-SPR
Installation method
- Pull docker images from hub.docker.com
- Build docker images from source
- Other
Deploy method
- Docker
- Docker Compose
- Kubernetes Helm Charts
- Kubernetes GMC
- Other
Running nodes
Single Node
What's the version?
Description
ProductivitySuite CI failed on Xeon: the llm-faqgen microservice returned HTTP 422 instead of 200.
https://github.com/opea-project/GenAIExamples/actions/runs/12870524855/job/35881788903?pr=1422#step:5:5043
Reproduce steps
Run CI test scripts.
Raw log
```
++ curl -s -o /dev/null -w '%{http_code}' -X POST -F 'messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.' -F max_tokens=32 -F stream=False -H 'Content-Type: multipart/form-data' 100.80.243.253:9002/v1/faqgen
+ local HTTP_STATUS=422
+ '[' 422 -eq 200 ']'
+ echo '[ llm_faqgen ] HTTP status is not 200. Received status was 422'
[ llm_faqgen ] HTTP status is not 200. Received status was 422
+ docker logs llm-faqgen-server
[2025-01-20 15:03:02,927] [ INFO] - Base service - CORS is enabled.
[2025-01-20 15:03:02,928] [ INFO] - Base service - Setting up HTTP server
[2025-01-20 15:03:02,929] [ INFO] - Base service - Uvicorn server setup on port 9000
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:9000/ (Press CTRL+C to quit)
[2025-01-20 15:03:02,942] [ INFO] - Base service - HTTP server setup successful
[2025-01-20 15:03:02,945] [ INFO] - llm_faqgen - OPEA FAQGen Microservice is starting...
+ exit 1
```
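Not part of the original report, but a possible next step for triage: the docker logs show a Uvicorn/FastAPI-style service, and FastAPI answers a 422 with a JSON body whose `detail` array names exactly which request field failed validation. The CI check above discards that body with `-o /dev/null`. A minimal sketch of re-running the same request while keeping the body (endpoint and form fields taken from the log above; the `messages` text is shortened for readability):

```
# Same request as the CI check, but print the response body; on a 422 the
# FastAPI "detail" array names the field(s) that failed validation.
# The explicit -H 'Content-Type: multipart/form-data' is dropped on purpose:
# curl sets that header itself for -F requests, including the required
# boundary parameter, which a manual -H override removes and which can by
# itself trigger a 422 from multipart parsers.
curl -s -X POST \
  -F 'messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models.' \
  -F max_tokens=32 \
  -F stream=False \
  http://100.80.243.253:9002/v1/faqgen
```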
Attachments
No response