Text completion 2. (Legacy)
Zoltan Juhasz edited this page Dec 10, 2023
WARNING! - This is a legacy feature of OpenAI; it was shut down on January 4th, 2024. For more information, see: https://platform.openai.com/docs/api-reference/completions
The next example demonstrates how you can receive an answer in streamed mode. Streamed mode means you get the generated answer in pieces rather than in one package, as in the previous example. Because generating an answer takes time, it can be useful to see the result as it arrives. The process can also be cancelled.
This version works with a callback, which is invoked each time a piece of the answer arrives.
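Before the full example, here is a minimal, library-independent sketch of the callback pattern itself (plain C#, no Forge.OpenAI types): a producer invokes an Action<string> once per generated piece, which is the same shape the streaming API below uses with its response handler.

```csharp
using System;
using System.Collections.Generic;

public static class CallbackDemo
{
    // Simulated streamer: invokes the callback once for each piece as it "arrives".
    public static void Stream(IEnumerable<string> pieces, Action<string> onPiece)
    {
        foreach (var piece in pieces)
        {
            onPiece(piece); // deliver each chunk immediately instead of buffering
        }
    }

    public static void Main()
    {
        // The pieces are printed one by one, producing the full answer incrementally.
        Stream(new[] { "Hel", "lo, ", "world!" }, piece => Console.Write(piece));
        Console.WriteLine();
    }
}
```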
public static async Task Main(string[] args)
{
    using var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices((builder, services) =>
        {
            services.AddForgeOpenAI(options =>
            {
                options.AuthenticationInfo = builder.Configuration["OpenAI:ApiKey"]!;
            });
        })
        .Build();

    IOpenAIService openAi = host.Services.GetService<IOpenAIService>()!;

    // This method is useful on older .NET versions where IAsyncEnumerable is not supported,
    // or if you simply prefer not to use that approach.
    TextCompletionRequest request = new TextCompletionRequest();
    request.Prompt = "Write a C# code which demonstrate how to open a text file and read its content";
    request.MaxTokens = 4096 - request.Prompt
        .Split(" ", StringSplitOptions.RemoveEmptyEntries).Length; // rough max-token estimate based on the prompt's word count
    request.Temperature = 0.1; // a lower value means a more precise, deterministic answer

    Console.WriteLine(request.Prompt);

    Action<HttpOperationResult<TextCompletionResponse>> receivedDataHandler =
        (HttpOperationResult<TextCompletionResponse> response) =>
        {
            if (response.IsSuccess)
            {
                Console.Write(response.Result?.Completions[0].Text);
            }
            else
            {
                Console.WriteLine(response);
            }
        };

    HttpOperationResult response = await openAi.TextCompletionService
        .GetStreamAsync(request, receivedDataHandler, CancellationToken.None)
        .ConfigureAwait(false);

    if (response.IsSuccess)
    {
        Console.WriteLine();
    }
    else
    {
        Console.WriteLine(response);
    }
}
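As noted above, a streamed request can be cancelled. A minimal sketch of the cancellation side, using only the BCL's CancellationTokenSource (the StreamPiecesAsync method here is a stand-in that simulates a streamed operation; in the example above you would pass cts.Token to GetStreamAsync instead of CancellationToken.None):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancelDemo
{
    // Stand-in for a streamed completion: keeps producing pieces until cancelled.
    public static async Task<int> StreamPiecesAsync(CancellationToken token)
    {
        int pieces = 0;
        try
        {
            while (true)
            {
                await Task.Delay(50, token); // throws OperationCanceledException when cancelled
                pieces++;
            }
        }
        catch (OperationCanceledException)
        {
            // Expected: the caller requested cancellation mid-stream.
        }
        return pieces;
    }

    public static async Task Main()
    {
        using var cts = new CancellationTokenSource();
        cts.CancelAfter(TimeSpan.FromMilliseconds(300)); // stop consuming the stream after ~300 ms
        int received = await StreamPiecesAsync(cts.Token);
        Console.WriteLine($"Received {received} pieces before cancellation.");
    }
}
```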