LangChain: Executing Custom Elixir Functions

Mix.install([
  {:langchain, "~> 0.1.0"}
])
Resolving Hex dependencies...
Resolution completed in 0.796s
New:
  abacus 2.0.0
  castore 1.0.3
  decimal 2.1.1
  ecto 3.10.3
  expo 0.4.1
  finch 0.16.0
  gettext 0.23.1
  hpax 0.1.2
  jason 1.4.1
  langchain 0.1.0
  mime 2.0.5
  mint 1.5.1
  nimble_options 1.0.2
  nimble_pool 1.0.0
  req 0.4.3
  telemetry 1.2.1
* Getting langchain (Hex package)
* Getting abacus (Hex package)
* Getting ecto (Hex package)
* Getting gettext (Hex package)
* Getting req (Hex package)
* Getting finch (Hex package)
* Getting jason (Hex package)
* Getting mime (Hex package)
* Getting castore (Hex package)
* Getting mint (Hex package)
* Getting nimble_options (Hex package)
* Getting nimble_pool (Hex package)
* Getting telemetry (Hex package)
* Getting hpax (Hex package)
* Getting expo (Hex package)
* Getting decimal (Hex package)
==> decimal
Compiling 4 files (.ex)
Generated decimal app
==> mime
Compiling 1 file (.ex)
Generated mime app
==> nimble_options
Compiling 3 files (.ex)
Generated nimble_options app
===> Analyzing applications...
===> Compiling telemetry
==> jason
Compiling 10 files (.ex)
Generated jason app
==> expo
Compiling 2 files (.erl)
Compiling 21 files (.ex)
Generated expo app
==> hpax
Compiling 4 files (.ex)
Generated hpax app
==> gettext
Compiling 17 files (.ex)
Generated gettext app
==> ecto
Compiling 56 files (.ex)
warning: Logger.warn/1 is deprecated. Use Logger.warning/2 instead
  lib/ecto/changeset/relation.ex:474: Ecto.Changeset.Relation.process_current/3

warning: Logger.warn/1 is deprecated. Use Logger.warning/2 instead
  lib/ecto/repo/preloader.ex:208: Ecto.Repo.Preloader.fetch_ids/4

warning: Logger.warn/1 is deprecated. Use Logger.warning/2 instead
  lib/ecto/changeset.ex:3156: Ecto.Changeset.optimistic_lock/3

Generated ecto app
==> abacus
Compiling 3 files (.erl)
Compiling 5 files (.ex)
warning: Abacus.parse/1 is undefined or private
Invalid call found at 9 locations:
  lib/format.ex:36: Abacus.Format.format/1
  lib/format.ex:37: Abacus.Format.format/1
  lib/format.ex:38: Abacus.Format.format/1
  lib/format.ex:39: Abacus.Format.format/1
  lib/format.ex:64: Abacus.Format.format/1
  lib/format.ex:65: Abacus.Format.format/1
  lib/format.ex:81: Abacus.Format.format/1
  lib/format.ex:82: Abacus.Format.format/1
  lib/format.ex:100: Abacus.Format.format/1

Generated abacus app
==> nimble_pool
Compiling 2 files (.ex)
Generated nimble_pool app
==> castore
Compiling 1 file (.ex)
Generated castore app
==> mint
Compiling 1 file (.erl)
Compiling 19 files (.ex)
Generated mint app
==> finch
Compiling 13 files (.ex)
warning: Logger.warn/1 is deprecated. Use Logger.warning/2 instead
  lib/finch/http2/pool.ex:362: Finch.HTTP2.Pool.connected/3

warning: Logger.warn/1 is deprecated. Use Logger.warning/2 instead
  lib/finch/http2/pool.ex:460: Finch.HTTP2.Pool.connected_read_only/3

Generated finch app
==> req
Compiling 6 files (.ex)
Generated req app
==> langchain
Compiling 14 files (.ex)
Generated langchain app
:ok

What we're doing

This notebook shows how to use the Elixir LangChain library to expose an Elixir function as something that can be executed by an LLM like ChatGPT. The LangChain library wraps this all up, making it easy and portable between different LLMs.

The Elixir Function in our App

Let's define the Elixir function we want to expose to ChatGPT so we can see how it works.

In this example we'll create a get_user_info function that takes a user ID and returns the relevant information for the current user of a web app.

For simplicity, we're skipping an actual database and storing our fake records on the module.

defmodule MyApp do
  @pretend_db %{
    1 => %{user_id: 1, name: "Michael Johnson", account_type: :trial, favorite_animal: "Horse"},
    2 => %{user_id: 2, name: "Joan Jett", account_type: :member, favorite_animal: "Aardvark"}
  }

  def get_user_info(user_id) do
    @pretend_db[user_id]
  end
end
{:module, MyApp, <<70, 79, 82, 49, 0, 0, 7, ...>>, {:get_user_info, 1}}

Exposing our Function to an LLM

With an Elixir function defined, we will wrap it in a LangChain Function structure so it can be easily shared with an LLM.

This is what that looks like:

alias LangChain.Function

function =
  Function.new!(%{
    name: "get_user_info",
    description: "Return JSON object of the current users's relevant information.",
    function: fn _args, %{user_id: user_id} = _context ->
      # Use the provided user_id context to call our Elixir function.
      # ChatGPT responses must be text. Convert the returned Map into JSON.
      Jason.encode!(MyApp.get_user_info(user_id))
    end
  })
%LangChain.Function{
  name: "get_user_info",
  description: "Return JSON object of the current users's relevant information.",
  function: #Function<41.3316493/2 in :erl_eval.expr/6>,
  parameters_schema: nil
}

The function name we provide is how the LLM will execute the function if the LLM chooses to call it.

The description is for the LLM to know what the function can do so it can decide which function to call for which purpose.

The function argument takes an anonymous function whose job is to be the glue that bridges data coming from the LLM with context from our application, before calling other functions in our application.

This "bridge" function receives 2 arguments. The first is any arguments passed to the function by the LLM if we defined any as being required. The second is an application context that we'll get to next. The context is specific to our application and does not go through the LLM at all. Think of this as the current user logged into our Phoenix web application. We want the exchange with the LLM to be relevant and only based on the what the current user can see and do.

Setting up our LangChain API Key

We need to set up the LangChain library to connect with ChatGPT using our API key. In a real Elixir application, this would be done in the config/config.exs file using something like this:

config :langchain, :openai_key, fn -> System.fetch_env!("OPENAI_API_KEY") end

For the Livebook notebook, use the "Secrets" on the sidebar to create an OPENAI_API_KEY secret with your API key. It is accessible here as "LB_OPENAI_API_KEY".

Application.put_env(:langchain, :openai_key, System.fetch_env!("LB_OPENAI_API_KEY"))
:ok

Defining our AI Assistant

We'll use the LangChain.Message struct to define the messages for what we want the LLM to do. Our system message instructs the LLM how to behave.

In this example, we want the assistant to generate Haiku poems about the current user's favorite animals. However, we only want it to work for users who are "members" and not "trial" users.

The instructions we're giving the LLM will require it to execute the function to get additional information. Yes, this is a simple and contrived example; in a real system we wouldn't even make the API call to the server for a "trial" user, and we could pass along the additional information with the first request.

What we're demonstrating here is that the LLM can interact with our Elixir application, use multiple pieces of returned information to make business logic decisions, and fulfill our system requests.

alias LangChain.Message

messages = [
  Message.new_system!(~s(You are a helpful haiku poem generating assistant.
    ONLY generate a haiku for users with an `account_type` of "member".
    If the user has an `account_type` of "trial", say you can't do it,
    but you would love to help them if they upgrade and become a member.)),
  Message.new_user!("The current user is requesting a Haiku poem about their favorite animal.")
]
[
  %LangChain.Message{
    content: "You are a helpful haiku poem generating assistant.\n    ONLY generate a haiku for users with an `account_type` of \"member\".\n    If the user has an `account_type` of \"trial\", say you can't do it,\n    but you would love to help them if they upgrade and become a member.",
    index: nil,
    status: :complete,
    role: :system,
    function_name: nil,
    arguments: nil
  },
  %LangChain.Message{
    content: "The current user is requesting a Haiku poem about their favorite animal.",
    index: nil,
    status: :complete,
    role: :user,
    function_name: nil,
    arguments: nil
  }
]

Defining our AI Model

For this example, we're talking to OpenAI's ChatGPT service. Let's set up that model. At this point, we can also specify which version of ChatGPT we want to talk with.

For the kind of work we're asking it to do, GPT-4 does a better job than previous model versions. We'll specify we want "gpt-4".

alias LangChain.ChatModels.ChatOpenAI

chat_model = ChatOpenAI.new!(%{model: "gpt-4", temperature: 1, stream: false})
%LangChain.ChatModels.ChatOpenAI{
  endpoint: "https://api.openai.com/v1/chat/completions",
  model: "gpt-4",
  temperature: 1.0,
  frequency_penalty: 0.0,
  receive_timeout: 60000,
  n: 1,
  stream: false
}

Defining our Application's User Context

Here we'll define some special context that we want passed through to our LangChain.Function when it is executed.

In a real application, this might be session-based user or account information. It's whatever application-specific information changes how a function should operate and what it should return.

context = %{user_id: 2}
%{user_id: 2}

After trying this with user_id: 2, a member who should have a Haiku generated for them, change it to user_id: 1 to see the request be politely denied.
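
For reference, here is what our pretend database returns for each user, which is why the LLM treats them differently:

MyApp.get_user_info(2)
#=> %{account_type: :member, favorite_animal: "Aardvark", name: "Joan Jett", user_id: 2}

MyApp.get_user_info(1)
#=> %{account_type: :trial, favorite_animal: "Horse", name: "Michael Johnson", user_id: 1}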

Making the API Call

We're ready to make the API call!

Notice the custom_context: context setting that is passed in when creating the LLMChain. That information is the application-specific context we want to be passed to our Function when executed.

Also, note the verbose: true setting. That causes a number of IO.inspect calls to be printed showing what's happening internally.

Additionally, the stream: false option says we want the result only when it's complete. This example isn't set up for receiving a streaming response. We're keeping it simple!

alias LangChain.Chains.LLMChain

{:ok, updated_chain, response} =
  %{llm: chat_model, custom_context: context, verbose: true}
  |> LLMChain.new!()
  # add the prompt message
  |> LLMChain.add_messages(messages)
  # add the functions that are available to the LLM
  |> LLMChain.add_functions([function])
  # keep running the LLM chain against the LLM if needed to evaluate
  # function calls and provide a response.
  |> LLMChain.run(while_needs_response: true)

IO.puts(response.content)
response.content
LLM: %LangChain.ChatModels.ChatOpenAI{
  endpoint: "https://api.openai.com/v1/chat/completions",
  model: "gpt-4",
  temperature: 1.0,
  frequency_penalty: 0.0,
  receive_timeout: 60000,
  n: 1,
  stream: false
}
MESSAGES: [
  %LangChain.Message{
    content: "You are a helpful haiku poem generating assistant.\n    ONLY generate a haiku for users with an `account_type` of \"member\".\n    If the user has an `account_type` of \"trial\", say you can't do it,\n    but you would love to help them if they upgrade and become a member.",
    index: nil,
    status: :complete,
    role: :system,
    function_name: nil,
    arguments: nil
  },
  %LangChain.Message{
    content: "The current user is requesting a Haiku poem about their favorite animal.",
    index: nil,
    status: :complete,
    role: :user,
    function_name: nil,
    arguments: nil
  }
]
FUNCTIONS: [
  %LangChain.Function{
    name: "get_user_info",
    description: "Return JSON object of the current users's relevant information.",
    function: #Function<41.3316493/2 in :erl_eval.expr/6>,
    parameters_schema: nil
  }
]
SINGLE MESSAGE RESPONSE: %LangChain.Message{
  content: nil,
  index: 0,
  status: :complete,
  role: :assistant,
  function_name: "get_user_info",
  arguments: %{}
}
EXECUTING FUNCTION: "get_user_info"

21:19:05.991 [debug] Executing function "get_user_info"
FUNCTION RESULT: "{\"account_type\":\"member\",\"favorite_animal\":\"Aardvark\",\"name\":\"Joan Jett\",\"user_id\":2}"
SINGLE MESSAGE RESPONSE: %LangChain.Message{
  content: "Aardvark in night's cloak,\nDigging deep beneath moon's gaze,\nJoan, this beast we evoke.",
  index: 0,
  status: :complete,
  role: :assistant,
  function_name: nil,
  arguments: nil
}
Aardvark in night's cloak,
Digging deep beneath moon's gaze,
Joan, this beast we evoke.
"Aardvark in night's cloak,\nDigging deep beneath moon's gaze,\nJoan, this beast we evoke."

TIP: Try changing the context to user_id: 1 now and see what happens when a different user context is provided.
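
If you'd rather not edit the earlier cells, here's a quick sketch that re-runs the same chain with the trial user. It reuses the chat_model, messages, and function defined above:

# Sketch: the same chain as before, but with the trial user's context.
{:ok, _updated_chain, trial_response} =
  %{llm: chat_model, custom_context: %{user_id: 1}, verbose: true}
  |> LLMChain.new!()
  |> LLMChain.add_messages(messages)
  |> LLMChain.add_functions([function])
  |> LLMChain.run(while_needs_response: true)

IO.puts(trial_response.content)

This time the function result reports an account_type of "trial", so the assistant should politely decline and suggest upgrading.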

Discussion

After a successful call, we can see in the verbose logs that:

  • the LLM requested to execute the function
  • LLMChain executed the function attached to the Function struct
  • the response of our Elixir function passed through the anonymous function on Function and was submitted back to the LLM
  • the LLM reacted to the result of our function call

This means it worked! We successfully let an LLM directly interact with our Elixir application!

With this, we could expose functions that allow the LLM to request additional information specific to the current user, or we could even define functions that allow the LLM to change things in the user's account on their behalf!
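
For example, a write-style function might look something like this sketch. MyApp.update_favorite_animal/2 is a hypothetical function in our application, and the parameters_schema follows the same JSON Schema shape as the earlier sketch so the LLM can supply the new animal itself:

# Sketch only: MyApp.update_favorite_animal/2 is hypothetical.
update_function =
  Function.new!(%{
    name: "update_favorite_animal",
    description: "Change the current user's favorite animal to the given one.",
    parameters_schema: %{
      type: "object",
      properties: %{
        animal: %{type: "string", description: "The new favorite animal."}
      },
      required: ["animal"]
    },
    function: fn %{"animal" => animal} = _args, %{user_id: user_id} = _context ->
      # The context keeps the change scoped to the signed-in user;
      # the LLM never sees or chooses the user_id.
      MyApp.update_favorite_animal(user_id, animal)
      "ok"
    end
  })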

At that point, it's up to us!