.Net: Ollama AI Connector #3603
Conversation
@microsoft-github-policy-service agree
Thanks @BLaZeKiLL, we'll take a look at this soon and provide feedback if anything needs to be fixed before we merge.
Looking forward to this! Maybe you could simplify the code:

```csharp
var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);
await ollama.StreamCompletion("How are you today?", "llama2", stream => Console.Write(stream.Response));
```
I'll take a look. If plugins are to be hosted separately, it won't be an issue to add OllamaSharp as a dependency.
I have released a NuGet package with the above Ollama connector, as well as support for chat completion and embedding generation. You can check it out here: https://www.nuget.org/packages/Codeblaze.SemanticKernel.Connectors.AI.Ollama
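For readers of this thread, a hypothetical sketch of how such a connector package might be registered with a kernel. The extension-method name and parameters here are assumptions for illustration, not the package's confirmed API:

```csharp
// Hypothetical sketch only: AddOllamaChatCompletion and its parameters are
// assumed for illustration and may not match the actual API of
// Codeblaze.SemanticKernel.Connectors.AI.Ollama.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Assumed extension method registering Ollama-backed chat completion.
builder.AddOllamaChatCompletion(
    modelId: "llama2",
    baseUrl: "http://localhost:11434");

var kernel = builder.Build();
```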
@BLaZeKiLL Thanks for your contribution. @RogerBarreto is working on our strategy to expand our AI Connectors. We will have an ADR available soon describing the approach we want to use, and we will work with you to progress your PR.
Looking forward to it |
Ollama has announced OpenAI compatibility (see https://ollama.ai/blog/openai-compatibility), so we may be able to use our existing OpenAI connector. @RogerBarreto, can you investigate whether this is the case?
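As context for that suggestion, Ollama's OpenAI-compatible endpoint can be exercised with a plain HTTP call. A minimal sketch, assuming a local Ollama instance on the default port with the llama2 model already pulled:

```csharp
// Minimal sketch: calling Ollama's OpenAI-compatible chat endpoint directly.
// Assumes a local Ollama instance on the default port (11434) with the
// "llama2" model already pulled.
using System.Net.Http;
using System.Text;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434/") };

var payload = """
{
  "model": "llama2",
  "messages": [{ "role": "user", "content": "How are you today?" }]
}
""";

var response = await http.PostAsync(
    "v1/chat/completions",
    new StringContent(payload, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());
```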
Closing this PR and assigning the original author, @BLaZeKiLL, to the new one: #4789. Please ping me if you have any questions.
Motivation and Context
I am currently participating in the Microsoft AI Classroom Hackathon, for which I am building an app using Semantic Kernel. Since I have a student account with Azure, I don't have access to Azure OpenAI services, but Ollama allows us to self-host LLM models based on Llama 2.
With this change, Semantic Kernel can use Ollama as an AI connector.
This provides more options for AI services and allows integration with self-hosted Ollama instances.
Plugins and memory plugins created by the community: N/A
Description
I followed the implementation of the similar feature in Python (#3055) and the HuggingFace connector, as it also operates over HTTP.
I have also added a PingOllamaAsync method, which can be used to check whether the specified model is available on the Ollama instance pointed to by the Ollama base URL.
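For illustration only (not the PR's actual code), a sketch of what such a check could look like against Ollama's /api/tags endpoint, which lists locally available models. The method shape and DTOs here are assumptions based on the description above:

```csharp
// Illustrative sketch (not the PR's actual implementation): check whether a
// model is available on an Ollama instance by querying its /api/tags endpoint,
// which returns the models pulled locally. DTO names are assumed.
using System.Net.Http;
using System.Net.Http.Json;

public sealed record OllamaModel(string Name);
public sealed record OllamaTagsResponse(List<OllamaModel> Models);

public static class OllamaHealthCheck
{
    public static async Task<bool> PingOllamaAsync(
        HttpClient http, Uri baseUrl, string model)
    {
        var tags = await http.GetFromJsonAsync<OllamaTagsResponse>(
            new Uri(baseUrl, "api/tags"));

        // Ollama tag names look like "llama2:latest", so match on the prefix.
        return tags?.Models.Any(m => m.Name.StartsWith(model)) ?? false;
    }
}
```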
I was unable to execute all tests, as I don't have access to OpenAI or Azure OpenAI.
I also wasn't able to test the streaming implementation, so I'm not sure it works. I would appreciate any feedback on the implementation, as I am still new to Semantic Kernel.
Contribution Checklist