💡 Is your feature request related to a problem?
Sentient currently only supports local models via Ollama. This can be a problem if you don't have the computational power required to run them locally.
✨ Describe the Solution
An option to switch to cloud models should be offered both at the onboarding stage and later on the chat page.
🔄 Alternatives Considered
Not really; if we truly want Sentient to reach the level of performance we need, LLMs are the only way.
📝 Additional Context
Local-first should still be encouraged at every step.