bind_tools NotImplementedError when using ChatOllama #21479
Comments
@hyhzl, no random mention, please.
Has anyone got a solution for this?
You can use Ollama's OpenAI-compatible API (https://ollama.com/blog/openai-compatibility), like:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)
Treat the Ollama model as if it were OpenAI and have fun developing with the LLM!
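For context on what bind_tools sends over the wire: it serializes each tool into an OpenAI-style function schema. A pure-Python sketch of that payload shape, using a hypothetical magic_function tool (the names and schema here are illustrative, not LangChain internals):

```python
# Sketch of the OpenAI-style tool schema that bind_tools roughly produces
# for a tool taking one integer argument (hypothetical example, not the
# actual LangChain serialization code).
def magic_function_schema():
    return {
        "type": "function",
        "function": {
            "name": "magic_function",
            "description": "applies magic function to an input",
            "parameters": {
                "type": "object",
                "properties": {"input": {"type": "integer"}},
                "required": ["input"],
            },
        },
    }

# This list is what ends up in the "tools" field of the chat request.
tools_payload = [magic_function_schema()]
```

Whether the model actually emits tool calls then depends on the backend honoring the "tools" field, which is the crux of this thread.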
Thank you for your reply
@subhash137 According to the Ollama docs (https://github.com/ollama/ollama/blob/main/docs/openai.md), their Chat Completions API does not support function calling yet. Did you have any success?
Yes, I did.
@subhash137 Would you please show how you achieved function calling in that way?
tcztzy's comment should work |
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
api_key="ollama",
model="llama3",
base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)
If you want to run locally, use LM Studio: download the models, run the server, and give its API endpoint to base_url. But I prefer to use Groq for faster, more efficient output.
Oh sorry, I just tried, and it seems the tools are not invoked this way. Did someone successfully make the model use the provided tools?
Oh 😳, I am sorry, I didn't implement it correctly. I just ran the code and it runs successfully, but the tools are not invoked. I don't have any ideas now.
I faced this error too. Is there any quick fix for this problem?
According to this GitHub issue (langchain-ai/langchain#21479), this setup:
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")

prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall. Claudia is 1 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Human: {question}
AI: """
)

llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm
chain.invoke("Describe Alex")
should work, but unfortunately it does not, which is why it will be removed from this example.
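Conceptually, with_structured_output asks the model for JSON matching the schema and then validates the reply. A stdlib-only sketch of that parse-and-validate step for the Person shape above (assumption: this simplifies what LangChain actually does; it is not its internals):

```python
import json

def parse_person(raw: str) -> dict:
    """Parse and minimally validate a Person-shaped JSON reply."""
    data = json.loads(raw)
    # name and height are the required fields in the Person model above
    for field in ("name", "height"):
        if field not in data:
            raise ValueError(f"missing required field: {field}")
    data["height"] = float(data["height"])
    return data

# A reply a well-behaved model might produce for "Describe Alex":
reply = '{"name": "Alex", "height": 5.0, "hair_color": "blonde"}'
person = parse_person(reply)
```

When the backend ignores the structured-output instructions (as Ollama's OpenAI shim did here), the reply is free-form text and this parse step is where things fail.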
@lalanikarim Can I use a chat model along with function calling? As I see it, ChatOllama does not support bind_tools, yet the documentation shows how to use bind_tools with ChatOllama.
We should use
@Harsh-Kesharwani
Can someone please post a mini example of tool calling with these PR merges?
I am still not able to get it to work:
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool

@tool
def magic_function(input: int):
    """applies magic function to an input"""
    return input * -2

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
tools = [magic_function]
model = OllamaFunctions(
    model="llama3",
    # format="json",  # commented or not, does not change the error
    keep_alive=-1,
    temperature=0,
    max_new_tokens=512,
)
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the value of magic_function(3)"})
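For intuition on what OllamaFunctions does under the hood: it prompts the model to answer with a JSON object naming a tool and its arguments, then dispatches to the matching Python function. A simplified, stdlib-only sketch of that dispatch step (assumption: the exact JSON keys and prompting are simplified from the real implementation):

```python
import json

def magic_function(input: int) -> int:
    """applies magic function to an input"""
    return input * -2

# Registry mapping tool names to callables, as a tool-calling wrapper keeps.
TOOLS = {"magic_function": magic_function}

def dispatch(raw_reply: str):
    """Parse a model reply like {"tool": ..., "tool_input": {...}} and call the tool."""
    call = json.loads(raw_reply)
    fn = TOOLS[call["tool"]]
    return fn(**call["tool_input"])

# A reply the model might produce for "What is the value of magic_function(3)":
result = dispatch('{"tool": "magic_function", "tool_input": {"input": 3}}')
# result == -6
```

If the model does not emit parseable JSON in this shape, the tool is never invoked, which matches the symptom reported above.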
This PR fixed the JSON serialization error and a couple of other things.
Example notebook with tool calling from within a LangGraph agent. Since #22339 is not yet merged, the notebook installs
@lalanikarim Does the agent carry the context returned by tools at every iteration? Suppose I have 3 tools and the execution flow is: agent... agent... (does the agent carry resp1, or a summarization or knowledge graph of it?) agent... (does the agent carry resp1 and resp2, or a summarization or knowledge graph of them?). The question is: does the agent carry the tool responses as context for the next iteration?
You provide an initial state on every iteration. Unless you pass the previous context into the next iteration, the agent starts with a fresh state every time. I hope this answers your question.
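The point above can be sketched in plain Python: carrying tool responses forward means accumulating them in the state you pass to the next invocation (plain dicts here, not a LangGraph API):

```python
def run_turn(state: dict, tool_name: str, tool_response: str) -> dict:
    """Append a tool response so the next iteration sees the prior context."""
    messages = list(state.get("messages", []))
    messages.append({"role": "tool", "name": tool_name, "content": tool_response})
    return {"messages": messages}

# Start with an initial state, then thread it through each iteration yourself.
state = {"messages": [{"role": "user", "content": "question"}]}
state = run_turn(state, "tool1", "resp1")
state = run_turn(state, "tool2", "resp2")
# state["messages"] now holds the user message plus resp1 and resp2
```

If you instead rebuild the initial state from scratch each time, resp1 and resp2 are lost, which is the fresh-state behavior described above.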
@lalanikarim Can I log the prompt which is passed to the agent?
@Harsh-Kesharwani
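One way to see the prompt is to log it at the point where it gets formatted, before it is sent to the model. A minimal stdlib sketch (format_and_log_prompt is a hypothetical wrapper, not a LangChain API):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("agent")

def format_and_log_prompt(template: str, **values) -> str:
    """Fill in the prompt template and log the result before use."""
    prompt = template.format(**values)
    logger.debug("Prompt passed to agent: %s", prompt)
    return prompt

prompt = format_and_log_prompt(
    "You are a helpful assistant.\nHuman: {input}", input="hi!"
)
```

Within LangChain itself, verbose=True on AgentExecutor (already used in the examples above) and langchain.globals.set_debug(True) also print prompts and intermediate steps.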
Checked other resources
Example Code
def init_ollama(model_name: str = global_model):
    # llm = Ollama(model=model_name)
    llm = ChatOllama(model=model_name)
    return llm

llm = init_ollama()
llama2 = init_ollama(model_name=fallbacks)
llm_with_fallbacks = llm.with_fallbacks([llama2])
def agent_search():
    search = get_Tavily_Search()
    retriever = get_milvus_vector_retriver(
        get_webLoader_docs("https://docs.smith.langchain.com/overview"), global_model
    )
    retriever_tool = create_retriever_tool(
        retriever,
        "langsmith_search",
        "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
    )
    tools = [search, retriever_tool]
    # llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)  # requires a paid API key
    prompt = hub.pull("hwchase17/openai-functions-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)  # does not work
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "hi!"})
Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "agent.py", line 72, in <module>
agent = create_tool_calling_agent(llm,tools,prompt)
File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain/agents/tool_calling_agent/base.py", line 88, in create_tool_calling_agent
llm_with_tools = llm.bind_tools(tools)
File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 912, in bind_tools
raise NotImplementedError()
NotImplementedError
Description
Because Ollama provides great convenience for developers to develop and practice LLM apps, I hope this issue will be handled as soon as possible.
Sincerely appreciated!
System Info
langchain==0.1.19
platform: centos
python version 3.8.19