LlmMemory::Broca Class

The LlmMemory::Broca class provides methods to generate responses to prompts using OpenAI's models.

Usage

The following is an example of how to use the LlmMemory::Broca class.

# Initialize the Broca
broca = LlmMemory::Broca.new(
  prompt: "Translate the following English text to French: <%= text %>", 
  model: "gpt-3.5-turbo", 
  temperature: 0.7, 
  max_token: 4096
)

# Generate a response
response = broca.respond(text: "Hello, world!")

# Print the response
puts "Response: #{response}"

Class Initialization

initialize(prompt:, model: "gpt-3.5-turbo", temperature: 0.7, max_token: 4096) -> Broca

Creates a new instance of the LlmMemory::Broca class.

Parameters:

  • prompt (String): The prompt that will be used to generate responses.
  • model (String): The model to be used. The default is "gpt-3.5-turbo".
  • temperature (Float): The temperature to be used by the model. The default is 0.7.
  • max_token (Integer): The maximum total number of tokens allowed across the stored messages. The default is 4096.

Instance Methods

respond(args) -> String

Generates a response by substituting the given arguments into the prompt and sending the result to the model.

Parameters:

  • args (Hash): The arguments to be used in the prompt.

Returns the generated response as a string.

respond_with_schema(context:, schema:) -> Hash

Generates a response and uses OpenAI function calling to format the output according to the given schema.

Parameters:

  • context (Hash): The arguments to be substituted into the ERB prompt.
  • schema (Hash): The JSON Schema describing the parameters for function calling.

Returns the generated response as a hash.

Example

# Define the ERB prompt template (an illustrative template; adapt as needed)
template = <<~TEMPLATE
  Context information is below
  ---------------------
  <% related_docs.each do |doc| %>
  <%= doc[:content] %>
  <% end %>
  ---------------------
  Given the context information, answer the question: <%= query_str %>
TEMPLATE

related_docs = [{content: "My name is Shohei"}, {content: "I'm a software engineer"}]
broca = LlmMemory::Broca.new(prompt: template)
res = broca.respond_with_schema(
  context: {related_docs: related_docs, query_str: "what is my name?"},
  schema: { # JSON Schema
    type: :object,
    properties: {
      name: {
        type: :string,
        description: "The name of the person"
      }
    },
    required: ["name"]
  }
)
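
Assuming the model follows the schema, res would be a hash such as {name: "Shohei"} (illustrative; the actual content depends on the model's output).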

generate_prompt(args) -> String

Generates the final prompt by substituting the arguments into the ERB prompt template.

Parameters:

  • args (Hash): The arguments to be used in the prompt.

Returns the generated prompt as a string.
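
For example (a minimal sketch; the prompt string here is hypothetical):

broca = LlmMemory::Broca.new(prompt: "Say hello to <%= name %>")
puts broca.generate_prompt(name: "Shohei")
# => "Say hello to Shohei"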

adjust_token_count -> nil

Adjusts the messages to fit within the maximum token count by removing the earliest messages until the total token count is within the limit.
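
Conceptually, the trimming works like this (a sketch under assumptions, not the gem's exact code; the encode call reflects the tokenizers gem API):

# Sketch: drop the oldest message while the encoded conversation
# is longer than max_token
def adjust_token_count
  loop do
    total = tokenizer.encode(messages.map { |m| m[:content] }.join("\n")).tokens.size
    break if total <= @max_token || messages.empty?
    messages.shift # remove the earliest message
  end
end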

tokenizer -> Tokenizer

Returns the tokenizer used to encode the messages. The tokenizer is created from the pretrained "gpt2" tokenizer if it doesn't exist yet.
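
A plausible memoized accessor, assuming the tokenizers gem (a sketch based on the description above, not a quotation of the source):

require "tokenizers"

def tokenizer
  # Build the pretrained GPT-2 tokenizer once, then reuse it
  @tokenizer ||= Tokenizers.from_pretrained("gpt2")
end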

Attributes

  • messages (Array[Hash]): The array of messages. Each message is a hash with a :role (String) and :content (String).
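
For example, a single exchange might be stored as (illustrative values):

[
  {role: "user", content: "Translate the following English text to French: Hello, world!"},
  {role: "assistant", content: "Bonjour, le monde !"}
]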

Exceptions

The LlmMemory::Broca class will handle exceptions raised during the response generation by logging the error and returning nil.
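
Because failures surface as nil rather than raised exceptions, callers should check for it:

response = broca.respond(text: "Hello, world!")
if response.nil?
  # The error has already been logged by Broca; retry or fall back here
end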

Further Information

The LlmMemory::Broca class is part of the LlmMemory module, which is designed for managing memory in a language model. It uses OpenAI's models to generate responses to prompts. The model is treated as a swappable strategy: the specific model is chosen at initialization and can be exchanged for a different one as needed. The class also includes a tokenizer to encode the messages into tokens.
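
For example, swapping in a different model only requires changing the model: argument (gpt-4 shown as an illustration):

broca = LlmMemory::Broca.new(
  prompt: "Summarize the following text: <%= text %>",
  model: "gpt-4"
)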