
Quickstart

Davide R. Wiest edited this page Nov 5, 2023 · 3 revisions

Demo

See examples of how to use ContextFlow in the Demo folder.

Walkthrough

Examples of the most common use cases are shown here. It's also recommended to visit the namespace wiki page.

Making a request

The minimally required code to start using ContextFlow is creating an LLMRequest and executing it. This is shown here:

RequestResult result = new LLMRequestBuilder()
    .UsingPrompt(new Prompt("Give me a number from 1 to 10"))
    .UsingLLMConfig(new LLMConfig("gpt-3.5-turbo"))
    .UsingLLMConnection(new OpenAIChatConnection())
    .UsingRequestConfig(new RequestConfig())
    .Build()
    .Complete();

Using and creating templates

Templates exist to make writing requests easier. They specify the configuration of a request piece by piece, meaning each part (like LLMConfig) can be used separately, or all parts together to make the request directly. Example usage:

string input = "...";
string newStyle = "More lively";
CFTemplate template = new RewriteTemplate(input, newStyle);
Prompt prompt = template.GetPrompt();
LLMConfig conf = template.GetLLMConfig("gpt-4", 1024); // model name and max total tokens
LLMRequest request = template.GetLLMRequest(new OpenAIChatConnection(), "gpt-4", 1024);

Template implementations exist for basic text manipulation (aggregating, rewriting, summarizing, expanding, translating). If you write a CFTemplate implementation and think others could use it too, please open an issue for it.

It's recommended to create templates for requests that are made often. To do this, extend the CFTemplate class.
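As an illustration, a custom template could look like the sketch below. This is hypothetical: the exact abstract members of CFTemplate aren't shown on this page, so the overridden GetPrompt here is an assumption; check the CFTemplate source for the members you actually need to implement.

```csharp
// Hypothetical sketch of a custom template. The actual abstract
// members of CFTemplate may differ -- consult the library source.
public class HaikuTemplate : CFTemplate
{
    private readonly string _topic;

    public HaikuTemplate(string topic)
    {
        _topic = topic;
    }

    // Assumed override: build the prompt this template represents,
    // mirroring the GetPrompt() call shown in the example above.
    public override Prompt GetPrompt()
    {
        return new Prompt("Write a haiku about the given topic.")
            .UsingAttachment(new Attachment("Topic", _topic, true));
    }
}
```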

Configuring requests

Building a prompt

Prompts consist of an action and a number of attachments, which represent the data. Separating the two is good practice, both for LLM performance and for flexibility. Each attachment has a name, content, and the option to be rendered inline.

Here's an example:

Prompt prompt = new("What is this mathematical constant called?")
    .UsingAttachment(new Attachment("Mathematical constant", "2.71828", true))
    .UsingAttachment(new Attachment("Hint", "It starts with e", false));
Console.WriteLine(prompt.ToPlainText()); // don't use ToString() for this

The output would be:

What is this mathematical constant called?

Mathematical constant: 2.71828

Hint:

It starts with e

Configuring LLMConfig

Commonly used methods:

LLMConfig conf = new LLMConfig("gpt-3.5-turbo-16k", 1024, 512) // model name, max total tokens, max input tokens; the last two are optional and shown here with their default values
    .UsingSystemMessage("You are a cosmic all-knowing entity")
    .UsingTemperature(0.5);
Setting up a saver and loader

RequestConfig requestconf = new()
    .UsingRequestSaver(new JsonRequestSaver("storage.json"))
    .UsingRequestLoader(new JsonRequestLoader("storage.json"));

For more info, visit the documentation on RequestConfig, as most of it is out of the scope of the quickstart.
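Putting it together, a RequestConfig with a saver and loader plugs into the builder from the first example (the file name here is illustrative):

```csharp
// A RequestConfig backed by a JSON saver/loader, reused in a full request.
RequestConfig requestConf = new RequestConfig()
    .UsingRequestSaver(new JsonRequestSaver("storage.json"))
    .UsingRequestLoader(new JsonRequestLoader("storage.json"));

RequestResult result = new LLMRequestBuilder()
    .UsingPrompt(new Prompt("Give me a number from 1 to 10"))
    .UsingLLMConfig(new LLMConfig("gpt-3.5-turbo"))
    .UsingLLMConnection(new OpenAIChatConnection())
    .UsingRequestConfig(requestConf)
    .Build()
    .Complete();
```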

Choosing a LLMConnection

As of November 2023, the following connections are supported:

OpenAIChatConnection()
OpenAIChatConnectionAsync()
OpenAICompletionConnection()
OpenAICompletionConnectionAsync()

They are based on the OPENAI_API package. Writing your own connection isn't hard: extend the abstract class LLMConnection (or LLMConnectionAsync).
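A minimal custom connection could look like the sketch below. The member name and signature overridden here are assumptions, not the library's actual API; check LLMConnection in the source for the real abstract member to override.

```csharp
// Hypothetical sketch of a custom connection. The abstract member of
// LLMConnection may have a different name and signature in practice.
public class EchoConnection : LLMConnection
{
    // Assumed override: take the prompt text and config, return the raw output.
    protected override string CallAPI(string input, LLMConfig conf)
    {
        // A trivial "LLM" that echoes the prompt back -- useful for testing
        // request plumbing without spending API tokens.
        return input;
    }
}
```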

Working with results

You now have a result. Here's what you can do with it:

string output = result.RawOutput;
FinishReason reason = result.FinishReason; // Stop, Overflow, or Unknown
ResultAdditionalData data = result.AdditionalData;

Results have actions, sync and async. See the architecture page of this wiki for more.

RequestResult nextResult = result.Actions.Then(r =>
    new LLMRequestBuilder()
        .UsingPrompt(new Prompt($"Is this your favorite number? {r.RawOutput}"))
        //...
        .Build());

You can also parse a result. This has the advantage of being able to apply actions to the parsed content.

ParsedRequestResult<int> parsedResult = result.Parse<int>(r => int.Parse(r.RawOutput));
if (parsedResult.ParsedOutput > 5) {
    //...
}