Simple examples
TalkBack is meant to be very simple to use.
You inject ILLM and IProviderActivator into the class that uses them:
public class MyClass
{
    private readonly ILLM _llm;
    private readonly IConversationContext _conversationContext;

    public MyClass(ILLM llm, IProviderActivator providerActivator)
    {
        _llm = llm;
        var provider = providerActivator.CreateProvider<OpenAIProvider>();
        provider.InitProvider(new OpenAIOptions()
        {
            ApiKey = "<your key here>",
            Model = "gpt-4-turbo"
        });
        _conversationContext = provider.CreateNewContext();
    }
}

At this point, the object is ready to use. The interface for ILLM is:
public interface ILLM
{
    void SetProvider(ILLMProvider provider);
    IConversationContext? CreateNewContext();
    Task StreamCompletionAsync(ICompletionReceiver receiver, string prompt, IConversationContext? context = null);
    Task<IModelResponse> CompleteAsync(string prompt, IConversationContext? context = null);
}

From here you can chat either blocking or streaming.
Context is optional. If you don't include a context object, a new one will be automatically created and returned as part of the IModelResponse. If you don't reuse the context, each message will exist in isolation with no memory between exchanges. You can also explicitly create one with ILLMProvider.CreateNewContext().
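If you rely on the auto-created context, you need to capture it from the first response and pass it back in on later calls. A minimal sketch of that round trip, assuming the response surfaces the context through a property named Context (that name is a guess, not confirmed by the interface above):

// First call: no context is supplied, so a new one is created for us.
var first = await _llm.CompleteAsync("My name is Ada.");

// Assumption: the auto-created context is exposed on the response.
// The property name "Context" is hypothetical.
var context = first.Context;

// Second call: pass the context back in so the model remembers the first exchange.
var second = await _llm.CompleteAsync("What is my name?", context);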
One reason you'd explicitly create one is to set the system message for the conversation via the string IConversationContext.SystemPrompt property.
The non-streaming version is as simple as this:
_conversationContext.SystemPrompt = "You are an expert C# programmer!";
var result = await _llm.CompleteAsync("Please write a command-line C# program to retrieve the current weather for Paris, France, from OpenWeather.", _conversationContext);
string responseText = result.Response;

Streaming requires a class that implements the ICompletionReceiver interface:
public class MyClass : ICompletionReceiver
{
    private readonly ILLM _llm;
    private readonly IConversationContext _conversationContext;
    private string _llmResponse = string.Empty;

    public MyClass(ILLM llm, IProviderActivator providerActivator)
    {
        _llm = llm;
        var provider = providerActivator.CreateProvider<OpenAIProvider>();
        provider.InitProvider(new OpenAIOptions()
        {
            ApiKey = "<your key here>",
            Model = "gpt-4-turbo"
        });
        _conversationContext = provider.CreateNewContext();
    }

    ...

    await _llm.StreamCompletionAsync(this, prompt, _conversationContext);

    ...

    public Task ReceiveCompletionPartAsync(IModelResponse response, bool final)
    {
        if (final)
        {
            // The final callback carries a copy of the entire streamed response.
            _llmResponse = response.Response;
        }
        else
        {
            // Intermediate callbacks carry the next chunk; write it as it arrives.
            Console.Write(response.Response);
        }

        return Task.CompletedTask;
    }
}
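Putting it together, the streaming call can be wrapped in a small helper. This is a minimal sketch: AskAsync is a hypothetical method (not part of TalkBack), and it assumes the Task returned by StreamCompletionAsync completes only after the final part has been delivered.

// Hypothetical wrapper, shown only to illustrate the flow.
public async Task<string> AskAsync(string prompt)
{
    // ReceiveCompletionPartAsync above is invoked for each chunk while this awaits.
    await _llm.StreamCompletionAsync(this, prompt, _conversationContext);

    // Assumption: the Task completes after the final callback has run,
    // so _llmResponse now holds the complete response text.
    return _llmResponse;
}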