Description
LocalAI version:
latest-aio-gpu-nvidia-cuda-12
Environment, GPU architecture, OS, and Version:
Docker, GPU, Windows 11
Describe the bug
Unable to load model: cross-encoder
To Reproduce
Install the latest-aio-gpu-nvidia-cuda-12 image, then send a request to /v1/rerank using the cross-encoder model.
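A minimal request that triggers the failure, assuming LocalAI is exposed on the default port 8080 and the rerank endpoint follows the payload shape shown in the logs (the model name cross-encoder matches the log output; the query and documents values are placeholders):

```shell
# Hypothetical repro request; port and payload values are assumptions,
# only the endpoint (/v1/rerank) and model name come from the logs above.
curl http://localhost:8080/v1/rerank \
  -H "Content-Type: application/json" \
  -d '{"model": "cross-encoder", "query": "example query", "documents": ["doc one", "doc two"]}'
```

With no route to huggingface.co from inside the container, this returns the 500 shown in the logs instead of a rerank result.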
Expected behavior
The model should load and return a rerank response.
Logs
2:45AM INF Success ip=172.17.0.1 latency="187.603µs" method=GET status=200 url=/static/assets/UcCO3FwrK3iLTeHuS_fvQtMwCp50KnMw2boKoduKmMEVuGKYMZg.ttf
2:45AM INF Success ip=172.17.0.1 latency="29.522µs" method=GET status=200 url=/static/assets/tw-elements.js
2:45AM INF Success ip=172.17.0.1 latency=95.525846ms method=POST status=200 url=/browse/search/models
2:45AM INF Success ip=127.0.0.1 latency="58.697µs" method=GET status=200 url=/readyz
2:46AM INF BackendLoader starting backend=rerankers modelID=cross-encoder o.model=cross-encoder
2:46AM INF Success ip=127.0.0.1 latency="30.471µs" method=GET status=200 url=/readyz
2:46AM ERR Server error error="failed to load model with internal loader: could not load model (no success): Unexpected err=OSError("We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mixedbread-ai/mxbai-rerank-base-v1 is not the path to a directory containing a file named config.json.\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.\"), type(err)=<class 'OSError'>" ip=172.17.0.1 latency=22.118975703s method=POST status=500 url=/v1/rerank
2:47AM INF Success ip=127.0.0.1 latency="32.108µs" method=GET status=200 url=/readyz