I just tried the Hugging Face script and it seems to be working for me. We are using the `AsyncInferenceClient` from the `huggingface_hub` library. According to the comments in the code:
> The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `bigcode/starcoder`, or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task.
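Per that docstring, the client accepts either a Hub model id or an endpoint URL. A minimal sketch (the endpoint URL below is hypothetical, and an actual inference call needs network access and usually an HF token):

```python
import asyncio

from huggingface_hub import AsyncInferenceClient

# Option 1: a model id hosted on the Hugging Face Hub.
client = AsyncInferenceClient(model="bigcode/starcoder")

# Option 2: a URL to a deployed Inference Endpoint (hypothetical URL).
# client = AsyncInferenceClient(model="https://my-endpoint.example.com")

async def main():
    # Sends the prompt to the selected model/endpoint; requires network access.
    return await client.text_generation("def fibonacci(n):", max_new_tokens=32)

# completion = asyncio.run(main())
```

If `model` is left as None, the library picks a recommended model for the task, which may not be the one you deployed.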
I tried the Hugging Face bot script and I am encountering a 404 error. It looks like Modal is expecting the model to be run locally. Is that correct?