
bind_tools NotImplementedError when using ChatOllama #21479

Open
hyhzl opened this issue May 9, 2024 · 25 comments
Labels
04 new feature New functionality (use for larger scope enhancements) Ɑ: core Related to langchain-core

Comments

@hyhzl

hyhzl commented May 9, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

def init_ollama(model_name: str = global_model):
    # llm = Ollama(model=model_name)
    llm = ChatOllama(model=model_name)
    return llm

llm = init_ollama()
llama2 = init_ollama(model_name=fallbacks)
llm_with_fallbacks = llm.with_fallbacks([llama2])

def agent_search():
    search = get_Tavily_Search()
    retriver = get_milvus_vector_retriver(get_webLoader_docs("https://docs.smith.langchain.com/overview"), global_model)
    retriver_tool = create_retriever_tool(
        retriver,
        "langsmith_search",
        "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
    )
    tools = [search, retriver_tool]
    # llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)  # money required
    prompt = hub.pull("hwchase17/openai-functions-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)  # does not work
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "hi!"})

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "agent.py", line 72, in
    agent = create_tool_calling_agent(llm,tools,prompt)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain/agents/tool_calling_agent/base.py", line 88, in create_tool_calling_agent
    llm_with_tools = llm.bind_tools(tools)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 912, in bind_tools
    raise NotImplementedError()
NotImplementedError

Description

Because Ollama makes it very convenient for developers to build and experiment with LLM apps, I hope this issue can be handled as soon as possible.
Sincerely appreciated!

System Info

langchain==0.1.19
platform: centos
python version 3.8.19

@dosubot dosubot bot added Ɑ: core Related to langchain-core 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels May 9, 2024
@sbusso
Contributor

sbusso commented May 9, 2024

@hyhzl, no random mention, please.

@subhash137

Has anyone got a solution for this?

@subhash137

[screenshot]

Even structured output is not working.

Error:

[screenshot]

@tcztzy

tcztzy commented May 11, 2024

You can use Ollama's OpenAI compatible API like

from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)

Treat the Ollama model as if it were OpenAI and have fun with your LLM development!
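For context on what bind_tools does with the OpenAI-compatible endpoint: it attaches OpenAI-format function schemas to each chat request. A hand-rolled sketch of such a schema, with illustrative names that are not LangChain API:

```python
# Hand-rolled OpenAI-format tool schema, roughly what bind_tools attaches
# to each chat request. make_tool_schema is illustrative, not a real API.
def make_tool_schema(name, description, params):
    """Build an OpenAI-style "function" tool entry.

    params: mapping of argument name -> (JSON type, description)
    """
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {
                    arg: {"type": typ, "description": desc}
                    for arg, (typ, desc) in params.items()
                },
                "required": list(params),
            },
        },
    }

magic_tool = make_tool_schema(
    "magic_function",
    "applies magic function to an input",
    {"input": ("integer", "number to transform")},
)
```

Whether the model actually emits tool calls then depends on the server honoring the `tools` field, which is the crux of this thread.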

@subhash137

subhash137 commented May 11, 2024 via email

@alexanderp99

@subhash137 . According to the Ollama docs, their Chat Completions API does not support function calling yet. Did you have any success?

@subhash137

subhash137 commented May 13, 2024 via email

@alexanderp99

@subhash137 would you please show how you achieved function calling that way?

@kaminwong

@subhash137 would you please show, how you achieved function calling in that way?

tcztzy's comment should work

@subhash137

subhash137 commented May 14, 2024 via email

@kaminwong

@subhash137 would you please show, how you achieved function calling in that way?

Oh sorry, I just tried; it seems the tools are not invoked this way. Did someone successfully make the model use the provided tools?

@subhash137

subhash137 commented May 14, 2024 via email

@hinthornw hinthornw added 04 new feature New functionality (use for larger scope enhancements) and removed 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels May 15, 2024
@AmirMohamadBabaee

I faced this error too. Is there any quick fix for this problem? Can using OllamaFunctions fix it?

slonyator added a commit to slonyator/langchain-experiments that referenced this issue May 16, 2024
According to this github issue:
langchain-ai/langchain#21479 this setup:

from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")

prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall.
Claudia is 1 feet taller than Alex and jumps higher than him.
Claudia is a brunette and Alex is blonde.

Human: {question}
AI: """
)

llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
structured_llm = llm.with_structured_output(Person)

chain = prompt | structured_llm

chain.invoke("Describe Alex")

Should work, but unfortunately it does not which is why it will be
removed from this example.
@lalanikarim
Contributor

lalanikarim commented May 27, 2024

#20881 (merged) already added bind_tools feature into OllamaFunctions
#21625 (pending merge) adds support for tool_calls

@Harsh-Kesharwani

@lalanikarim can I use a chat model along with function calling? As I see it, ChatOllama does not support bind_tools, but the documentation shows how to use bind_tools with ChatOllama.

@ErfanMomeniii

ErfanMomeniii commented May 28, 2024

We should use OllamaFunctions and pass the LLaMA model name as a parameter, since it includes a suitable bind_tools method for adding tools to the chain. The ChatOllama class does not have any methods for this purpose.
For more details, see https://python.langchain.com/v0.1/docs/integrations/chat/ollama_functions
Alternatively, we can handle this manually by defining new classes that inherit from ChatOllama, take tools as parameters, and implement an appropriate invoke function that uses those tools.
@Harsh-Kesharwani
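The manual approach described above can be sketched in plain Python: wrap a chat model, advertise the tools in the prompt, and dispatch JSON tool calls from the reply. All names here (ToolCallingChat, base_model) are made up for illustration and are not LangChain APIs; a real implementation would subclass ChatOllama instead.

```python
import json

# Illustrative sketch of the manual wrapper approach: advertise tools in
# the system prompt and dispatch JSON tool calls from the model's reply.
# `base_model` is any callable prompt -> reply string.
class ToolCallingChat:
    def __init__(self, base_model, tools):
        self.base_model = base_model
        self.tools = {t.__name__: t for t in tools}

    def invoke(self, user_input):
        tool_list = "\n".join(
            f"- {name}: {fn.__doc__}" for name, fn in self.tools.items()
        )
        prompt = (
            "You may call one of these tools by replying with JSON "
            '{"tool": ..., "args": {...}}:\n' + tool_list + "\n\n" + user_input
        )
        reply = self.base_model(prompt)
        try:
            call = json.loads(reply)
            fn = self.tools[call["tool"]]
            return fn(**call["args"])
        except (ValueError, KeyError):
            return reply  # plain chat answer, no tool call

def magic_function(input: int):
    """applies magic function to an input"""
    return input * -2

# Fake model that always decides to call the tool, for demonstration.
fake_model = lambda prompt: '{"tool": "magic_function", "args": {"input": 3}}'
chat = ToolCallingChat(fake_model, [magic_function])
print(chat.invoke("What is magic_function(3)?"))  # -6
```

This is essentially what OllamaFunctions does for you, with better prompting and error handling.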

@lalanikarim
Contributor

@lalanikarim can I use a chat model along with function calling? As I see it, ChatOllama does not support bind_tools, but the documentation shows how to use bind_tools with ChatOllama.

@Harsh-Kesharwani
Like @ErfanMomeniii suggested, you can use OllamaFunctions if you need function calling capabilities with Ollama.
OllamaFunctions inherits from ChatOllama and adds newer bind_tools and with_structured_output functions as well as adds tool_calls property to AIMessage.
While you can already use OllamaFunctions for function calling, there is an unmerged PR #21625 that fixes the case where you want a plain chat response from OllamaFunctions when none of the provided functions is appropriate for the request. I am hoping it will be merged sometime this week.
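Conceptually, with_structured_output asks the model for JSON matching a schema and parses the reply into the given class. A minimal pure-Python sketch of that idea, using a dataclass instead of Pydantic (parse_structured is illustrative, not the LangChain implementation):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    height: float
    hair_color: str

# Sketch of what with_structured_output does conceptually: parse the
# model's JSON reply into the target class, coercing to annotated types.
def parse_structured(reply: str, cls):
    data = json.loads(reply)
    kwargs = {f.name: f.type(data[f.name]) for f in fields(cls)}
    return cls(**kwargs)

reply = '{"name": "Alex", "height": 5.0, "hair_color": "blonde"}'
person = parse_structured(reply, Person)
print(person.name, person.height)  # Alex 5.0
```

The real implementations also constrain generation (e.g. JSON mode) and retry or raise on malformed output.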

@ntelo007

ntelo007 commented Jun 4, 2024

Can someone please post a mini example of tool calling with these pr merges?

@KIC

KIC commented Jun 9, 2024

I am still not able to get it to work:

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool


@tool
def magic_function(input: int):
    """applies magic function to an input"""
    return input * -2

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad")
    ]
)

tools = [magic_function]

model = OllamaFunctions(
    model="llama3",
    # formal="json",    # commented or not, does not change the error
    keep_alive=-1,
    temperature=0,
    max_new_tokes=512,
)

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input":"What is the value of magic_function(3)"})

TypeError: Object of type StructuredTool is not JSON serializable

@lalanikarim
Contributor

I am still not able to get it to work: (quoting KIC's code and error from the previous comment)

This PR fixed the JSON serialization error and a couple of other things.

#22339

@lalanikarim
Contributor

Example notebook with tool calling from within a LangGraph agent:
https://github.com/lalanikarim/notebooks/blob/main/LangGraph-MessageGraph-OllamaFunctions.ipynb

Since #22339 is not yet merged, the notebook installs langchain-experimental from my repo (the source for #22339).

@Harsh-Kesharwani

@lalanikarim does the agent carry the context returned by a tool into every iteration? Suppose I have 3 tools; below is the execution flow:

agent...
use tool 1
tool 1 response: resp1

agent... (does the agent carry resp1, or a summarization or knowledge graph of it?)
use tool 2
tool 2 response: resp2

agent... (does the agent carry resp1 and resp2, or a summarization or knowledge graph of them?)
use tool 3
tool 3 response: resp3

The question is: does the agent carry the tool responses as context for the next iteration?

@lalanikarim
Contributor

lalanikarim commented Jun 10, 2024

(quoting the question above about whether the agent carries tool responses as context for the next iteration)

@Harsh-Kesharwani

You provide an initial state on every iteration. Unless you pass the previous context into the next iteration, the agent starts with a fresh state every time. I hope this answers your question.

initial_state = ...
updated_state = agent.invoke(initial_state)

next_initial_state = <combine updated_state and a new initial state>
updated_state = agent.invoke(next_initial_state)
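That state-merging pattern can be made concrete with a toy, pure-Python agent whose state is a plain dict (agent_invoke and the state keys are illustrative, not LangChain/LangGraph API):

```python
# Toy illustration of carrying state between agent iterations: unless the
# previous state is merged into the next initial state, earlier tool
# responses are lost. agent_invoke and the state keys are made up.
def agent_invoke(state):
    """Pretend agent step: call the next tool and append its response."""
    n = len(state["tool_responses"]) + 1
    return {**state, "tool_responses": state["tool_responses"] + [f"resp{n}"]}

initial_state = {"input": "hi!", "tool_responses": []}
updated_state = agent_invoke(initial_state)

# Combine updated_state with the new input so resp1 is carried forward;
# starting from a fresh dict here would drop resp1.
next_initial_state = {**updated_state, "input": "next question"}
updated_state = agent_invoke(next_initial_state)

print(updated_state["tool_responses"])  # ['resp1', 'resp2']
```

In LangGraph this merging is typically handled by the graph's state schema rather than by hand.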

@Harsh-Kesharwani

@lalanikarim can I log the prompt that is passed to the agent?

@lalanikarim
Contributor

@lalanikarim can I log the prompt that is passed to the agent?

@Harsh-Kesharwani
I have included langtrace links for multiple runs in the notebook. Take a look and let me know if that answers your questions.
