[BUG]: Invocation of model ID deepseek.r1-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model. #3441
Labels
possible bug
Bug was reported but is not confirmed or is unable to be replicated.
How are you running AnythingLLM?
Docker (local)
What happened?
I got the output below when using Bedrock with DeepSeek. It seems that Bedrock does not support on-demand throughput for this model and instead requires an inference profile, as shown below.
Error msg: Invocation of model ID deepseek.r1-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model.
AWS sample API query:
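The original sample query attachment did not come through. Below is a minimal sketch of what a Bedrock `converse` call routed through a cross-region inference profile might look like; the profile ID `us.deepseek.r1-v1:0`, region, and parameter values are assumptions for illustration, not taken from the report.

```python
# Sketch (assumptions noted in comments): call DeepSeek R1 on Bedrock via an
# inference profile ID instead of the base model ID, which avoids the
# "on-demand throughput isn't supported" error.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    # Base model ID "deepseek.r1-v1:0" triggers the error; an inference
    # profile ID such as "us.deepseek.r1-v1:0" (assumed) is required.
    modelId="us.deepseek.r1-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.6},
)

print(response["output"]["message"]["content"][0]["text"])
```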
Are there known steps to reproduce?
Set the model ID to `deepseek.r1-v1:0` in the LLM provider page (a possible workaround is sketched below).
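As a workaround, the value entered as the model ID likely needs to be the inference profile ID or ARN rather than the base model ID. The sketch below lists the inference profiles available in an account so the right ID can be copied into AnythingLLM; it assumes the installed boto3 version exposes `list_inference_profiles`.

```python
# Sketch (assumption): list inference profiles to find the ID/ARN that wraps
# deepseek.r1-v1:0, then use that value as the model ID in AnythingLLM.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    print(profile["inferenceProfileId"], profile["inferenceProfileArn"])
```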