LlmMemory::Broca
The LlmMemory::Broca class provides methods to generate responses to prompts using OpenAI's models.
The following is an example of how to use the LlmMemory::Broca class.
# Initialize the Broca
broca = LlmMemory::Broca.new(
  prompt: "Translate the following English text to French: <%= text %>",
  model: "gpt-3.5-turbo",
  temperature: 0.7,
  max_token: 4096
)

# Generate a response
response = broca.respond(text: "Hello, world!")

# Print the response
puts "Response: #{response}"
initialize(prompt:, model: "gpt-3.5-turbo", temperature: 0.7, max_token: 4096) -> Broca
Creates a new instance of the LlmMemory::Broca class.
Parameters:
- prompt (String): The prompt that will be used to generate responses.
- model (String): The model to be used. The default is "gpt-3.5-turbo".
- temperature (Float): The temperature to be used by the model. The default is 0.7.
- max_token (Integer): The maximum number of tokens in the message. The default is 4096.
The respond method generates a response to the given prompt and arguments.
Parameters:
- args (Hash): The arguments to be used in the prompt.
Returns the generated response as a string.
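For example, when the prompt contains several ERB placeholders, each key in the hash fills the matching placeholder. A minimal sketch (the prompt and argument names below are illustrative):

# Prompt with multiple ERB placeholders (illustrative example)
broca = LlmMemory::Broca.new(
  prompt: "Translate the following <%= source %> text to <%= target %>: <%= text %>"
)
response = broca.respond(source: "English", target: "French", text: "Good morning")
puts response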
The respond_with_schema method uses OpenAI function calling to format the output.
Parameters:
- context (Hash): The arguments to be used in the prompt for ERB.
- schema (Hash): The parameters to be used for function calling.
Returns the generated response as a hash.
# Define an ERB prompt template that renders the related documents and the query
# (this particular template is only an illustrative example)
template = <<~TEMPLATE
  Context information is below.
  ---------------------
  <% related_docs.each do |doc| %>
  <%= doc[:content] %>
  <% end %>
  ---------------------
  Given the context information and not prior knowledge,
  answer the question: <%= query_str %>
TEMPLATE

related_docs = [{content: "My name is Shohei"}, {content: "I'm a software engineer"}]
broca = LlmMemory::Broca.new(prompt: template)
res = broca.respond_with_schema(
  context: {related_docs: related_docs, query_str: "what is my name?"},
  schema: { # JSON Schema
    type: :object,
    properties: {
      name: {
        type: :string,
        description: "The name of the person"
      }
    },
    required: ["name"]
  }
)
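With the context above, the returned hash would typically look like {name: "Shohei"}, though the exact value depends on the model's output.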
Generates the final prompt by substituting the arguments into the prompt.
Parameters:
- args (Hash): The arguments to be used in the prompt.
Returns the generated prompt as a string.
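The substitution is plain ERB templating. The following standalone sketch shows the mechanism itself (not the class's internal code):

require "erb"

# The prompt is an ERB template and the args hash supplies its variables
prompt = "Translate the following English text to French: <%= text %>"
args = { text: "Hello, world!" }
final_prompt = ERB.new(prompt).result_with_hash(args)
# => "Translate the following English text to French: Hello, world!"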
Adjusts the messages to fit within the maximum token count by removing the earliest messages until the total token count is within the limit.
Parameters:
- messages (Array[Hash]): The array of messages. Each message is a hash with a :role (String) and a :content (String).
Returns the tokenizer used to encode the messages. The tokenizer is created from the pretrained "gpt2" tokenizer if it doesn't exist yet.
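The trimming and the tokenizer loading can be sketched roughly as follows, assuming the tokenizers gem; the method names and structure here are illustrative, not the gem's exact internals:

require "tokenizers"

# Lazily load a pretrained GPT-2 tokenizer (illustrative, mirroring the description above)
def tokenizer
  @tokenizer ||= Tokenizers.from_pretrained("gpt2")
end

# Drop the earliest messages until the total token count fits within max_token
def adjust_messages(messages, max_token)
  counts = messages.map { |m| tokenizer.encode(m[:content]).tokens.length }
  while counts.sum > max_token && messages.length > 1
    messages.shift
    counts.shift
  end
  messages
end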
The LlmMemory::Broca class handles exceptions raised during response generation by logging the error and returning nil.
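Because respond returns nil on failure, callers should guard against a nil response; a minimal sketch:

response = broca.respond(text: "Hello, world!")
if response.nil?
  warn "No response was generated; check the log for the underlying error"
else
  puts response
end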
The LlmMemory::Broca class is part of the LlmMemory module, which is designed for managing memory in a language model. It uses OpenAI's models to generate responses to prompts. The class uses the Strategy design pattern for the model, allowing the model to be easily swapped out for a different one. The specific model to be used is specified during class initialization and can be changed dynamically if needed. The class also includes a tokenizer to encode the messages into tokens.
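For example, a different OpenAI chat model can be selected at initialization (the model name and prompt below are illustrative):

broca_gpt4 = LlmMemory::Broca.new(
  prompt: "Summarize the following text: <%= text %>",
  model: "gpt-4",
  temperature: 0.2
)
summary = broca_gpt4.respond(text: "LlmMemory::Broca generates responses to prompts using OpenAI's models.")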