For those of us on Windows machines without a decent GPU, running better local LLMs via Ollama is not really an option. A good alternative would be using e.g. Hugging Face's Inference API. Since CrewAI uses LangChain, this should not be an issue, as LangChain already has an HF integration:
https://python.langchain.com/docs/integrations/platforms/huggingface
So please add this option (we would only need to supply our HF API key), and then testing your code would not break our OpenAI budget :)
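For reference, a minimal sketch of what this wiring might look like, assuming the `langchain-community` HuggingFaceHub wrapper and CrewAI's `llm` agent parameter; the model repo id, the environment variable name, and the generation kwargs here are all illustrative choices, not part of the project:

```python
import os

# Guard the imports so the sketch stays readable even without the packages
# installed; in a real setup you would just import them directly.
try:
    from langchain_community.llms import HuggingFaceHub
    from crewai import Agent
    HAVE_DEPS = True
except ImportError:
    HAVE_DEPS = False

# Hypothetical model choice: any text-generation repo on the HF Hub should work.
HF_REPO_ID = "mistralai/Mistral-7B-Instruct-v0.2"

def build_hf_llm(token: str):
    """Build a LangChain LLM backed by the Hugging Face Inference API."""
    return HuggingFaceHub(
        repo_id=HF_REPO_ID,
        huggingfacehub_api_token=token,
        model_kwargs={"temperature": 0.5, "max_new_tokens": 512},
    )

# Only construct the agent when the deps and the API key are actually present.
if HAVE_DEPS and os.environ.get("HUGGINGFACEHUB_API_TOKEN"):
    llm = build_hf_llm(os.environ["HUGGINGFACEHUB_API_TOKEN"])
    researcher = Agent(
        role="Researcher",
        goal="Summarize a topic",
        backstory="A careful analyst",
        llm=llm,  # HF-backed LLM instead of the OpenAI default
    )
```

The only user-facing change would be reading `HUGGINGFACEHUB_API_TOKEN` (name assumed here) instead of `OPENAI_API_KEY`.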