I saw that Ollama added Anthropic API compatibility, making it possible to use Claude Code with Ollama models.
Would it be possible to add the same to FastFlowLM?
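For reference, this is roughly how Claude Code is pointed at Ollama's local Anthropic-compatible endpoint today (a sketch based on Claude Code's environment variables; the port assumes Ollama's default, and the token is a placeholder that local servers typically ignore):

```shell
# Point Claude Code at a local Anthropic-compatible server (Ollama default port)
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="ollama"      # placeholder; not validated locally
export ANTHROPIC_MODEL="llama3.1:8b"      # local model tag to use
claude
```

If FastFlowLM exposed the same `/v1/messages`-style endpoint, presumably only the base URL would need to change.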
If I ran, let's say, the llama3.1:8b model on my laptop with these specs:
- 32 GB RAM
- AMD Ryzen AI 7 350 with Radeon 860M, 2000 MHz, 8 cores, 16 logical processors
I assume it would run faster in FastFlowLM than in Ollama, correct?