ChatQnA v0.6 failed to work because ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 failed to start #262
Comments
On your Linux system, type the following and replace
You can also add that to the end of your
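For reference, the usual Linux setup looks roughly like the following; this is a minimal sketch assuming the HUGGINGFACEHUB_API_TOKEN variable name used by the ChatQnA compose files, with a placeholder token value:

export HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxx   # placeholder; paste your own HF access token
# Optionally persist it by appending the same export to ~/.bashrc:
echo 'export HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxx' >> ~/.bashrc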
Thanks @wsfowler for the quick reply! I made sure I have set the token (X below means masked). I noticed a similar issue in HF meilisearch/meilisearch#4271. Here is my log: Caused by:
Seems like a proxy issue; please try a different proxy and see if that helps.
@huiyan2021 Thanks Huiyan! It was my machine's network issue; after setting a proper proxy, it works! So the tip is: before you start playing with OPEA GenAIExamples, make sure your machine is able to access HF. Please close this issue.
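Since the fix was a proxy setting, a rough sketch of the host-side setup (the proxy URL is a placeholder, and the variables only reach the containers if the compose file forwards them):

export http_proxy=http://proxy.example.com:3128    # placeholder proxy URL
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1
# Re-create the services so the containers pick up the new values:
docker compose down && docker compose up -d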
Dear experts:
I tried to follow https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/xeon/README.md to run ChatQnA on Xeon.
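The bring-up from that README is roughly as follows (a sketch only; the exact compose file name and environment variables may differ between releases):

cd GenAIExamples/ChatQnA/docker/xeon
export host_ip=$(hostname -I | awk '{print $1}')      # variable name assumed from the README
export HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxx   # placeholder HF token
docker compose up -d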
✔ Network xeon_default Created 0.1s
✔ Container tei-embedding-server Started 0.4s
✔ Container tgi-service Started 0.4s
✔ Container tei-reranking-server Started 0.4s
✔ Container redis-vector-db Started 0.4s
✔ Container embedding-tei-server Started 1.0s
✔ Container dataprep-redis-server Started 1.0s
✔ Container retriever-redis-server Started 1.0s
✔ Container reranking-tei-xeon-server Started 1.0s
✔ Container llm-tgi-server Started 1.0s
✔ Container chatqna-xeon-backend-server Started 1.3s
✔ Container chatqna-xeon-ui-server Started 1.6s
[root@localhost xeon]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
396862d1b421 opea/chatqna-ui:latest "docker-entrypoint.s…" 5 seconds ago Up 2 seconds 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp chatqna-xeon-ui-server
b9c5d115785b opea/chatqna:latest "python chatqna.py" 5 seconds ago Up 3 seconds 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp chatqna-xeon-backend-server
5833f6a7a3ad opea/llm-tgi:latest "python llm.py" 5 seconds ago Up 3 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp llm-tgi-server
3fa23c7c29d1 opea/reranking-tei:latest "python reranking_te…" 5 seconds ago Up 3 seconds 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp reranking-tei-xeon-server
528e4776d952 opea/retriever-redis:latest "/home/user/comps/re…" 5 seconds ago Up 3 seconds 0.0.0.0:7000->7000/tcp, :::7000->7000/tcp retriever-redis-server
8f802c803754 opea/dataprep-redis:latest "python prepare_doc_…" 5 seconds ago Up 3 seconds 0.0.0.0:6007->6007/tcp, :::6007->6007/tcp dataprep-redis-server
7318f543b581 opea/embedding-tei:latest "python embedding_te…" 5 seconds ago Up 3 seconds 0.0.0.0:6000->6000/tcp, :::6000->6000/tcp embedding-tei-server
57593b53e762 ghcr.io/huggingface/text-generation-inference:1.4 "text-generation-lau…" 5 seconds ago Up 4 seconds 0.0.0.0:9009->80/tcp, :::9009->80/tcp tgi-service
96d681918923 ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 "text-embeddings-rou…" 5 seconds ago Up 4 seconds 0.0.0.0:8808->80/tcp, :::8808->80/tcp tei-reranking-server
a3d3a8419a56 redis/redis-stack:7.2.0-v9 "/entrypoint.sh" 5 seconds ago Up 4 seconds 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp, 0.0.0.0:8001->8001/tcp, :::8001->8001/tcp redis-vector-db
5f843c7f3753 ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 "text-embeddings-rou…" 5 seconds ago Up 4 seconds 0.0.0.0:6006->80/tcp, :::6006->80/tcp tei-embedding-server
But the Hugging Face containers exit soon after starting.
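To confirm which containers have exited and grab their IDs, something like this helps before pulling logs (then inspect the logs, as below):

docker ps -a --filter "status=exited"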
docker logs 57593b53e762
2024-06-05T09:16:09.130452Z INFO text_generation_launcher: Args { model_id: "Intel/neural-chat-7b-v3-3", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, speculate: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_top_n_tokens: 5, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20, max_batch_size: None, enable_cuda_graphs: false, hostname: "57593b53e762", port: 80, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, cuda_memory_fraction: 1.0, rope_scaling: None, rope_factor: None, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_edge: None, tokenizer_config_path: None, disable_grammar_support: false, env: false }
2024-06-05T09:16:09.130585Z INFO download: text_generation_launcher: Starting download process.
Error: DownloadError
2024-06-05T09:16:21.747568Z ERROR download: text_generation_launcher: Download encountered an error:
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1007)
Is it related to the HF token setting?
Any tips on setting the HF token? I got the HF token from my Windows machine by accessing the HF web page; how do I connect the token to my identity when working on a Linux machine? Thanks!
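A quick way to check whether the certificate error above comes from a TLS-intercepting proxy rather than the token itself is to probe outbound HTTPS from the host (or from inside the container via docker exec), for example:

curl -vI https://huggingface.co 2>&1 | grep -iE 'certificate|SSL|HTTP/'
# If a corporate proxy intercepts TLS, either route the containers through that proxy
# or make its CA certificate available to them.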