
Code in documentation on OllamaFunctions fails #21373

Closed
maxschulz-COL opened this issue May 7, 2024 · 3 comments
Labels
🤖:docs Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder

Comments

@maxschulz-COL

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code, taken from the docs at https://python.langchain.com/docs/integrations/chat/ollama_functions/, fails:

from langchain_experimental.llms.ollama_functions import OllamaFunctions
model = OllamaFunctions(model="llama3", format="json")

Taking out format='json' also doesn't help; see below.
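For context, here is a fuller reconstruction of the failing snippet from that docs page (the Person schema and prompt are paraphrased, so exact field names may differ from the docs):

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Schema paraphrased from the docs example
class Person(BaseModel):
    name: str = Field(description="The person's name")
    height: float = Field(description="The person's height")
    hair_color: str = Field(description="The person's hair color")

prompt = ChatPromptTemplate.from_template(
    "Describe the person in the following text.\n\nText: {input}"
)

model = OllamaFunctions(model="llama3", format="json")
structured_llm = model.with_structured_output(Person)  # <- raises NotImplementedError
chain = prompt | structured_llm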

Error Message and Stack Trace (if applicable)

With format='json':

chain = prompt | structured_llm

but also without this setting:

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[13], line 25
     23 # Chain
     24 llm = OllamaFunctions(model="phi3", temperature=0)
---> 25 structured_llm = llm.with_structured_output(Person)
     26 chain = prompt | structured_llm

File /langchain_core/_api/beta_decorator.py:110, in beta.<locals>.beta.<locals>.warning_emitting_wrapper(*args, **kwargs)
    108     warned = True
    109     emit_warning()
--> 110 return wrapped(*args, **kwargs)

File /langchain_core/language_models/base.py:204, in BaseLanguageModel.with_structured_output(self, schema, **kwargs)
    199 @beta()
    200 def with_structured_output(
    201     self, schema: Union[Dict, Type[BaseModel]], **kwargs: Any
    202 ) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
    203     """Implement this if there is a way of steering the model to generate responses that match a given schema."""  # noqa: E501
--> 204     raise NotImplementedError()

NotImplementedError:

Description

The docs appear to be out of date: the example calls with_structured_output on OllamaFunctions, which the installed langchain-experimental version does not implement.
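A quick way to confirm the installed version (a minimal sketch using importlib.metadata; the traceback above shows the base-class with_structured_output raising NotImplementedError because the installed release does not override it):

from importlib.metadata import version

# langchain-experimental 0.0.57 is installed in the environment below;
# BaseLanguageModel.with_structured_output raises NotImplementedError
# unless a subclass overrides it
print(version("langchain-experimental"))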

System Info

langchain==0.1.16
langchain-anthropic==0.1.11
langchain-community==0.0.33
langchain-core==0.1.44
langchain-experimental==0.0.57
langchain-openai==0.1.3
langchain-text-splitters==0.0.1

@dosubot dosubot bot added the 🤖:docs Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder label May 7, 2024
@wulabaha

wulabaha commented May 9, 2024

This environment works for me; you can give it a try:

python 3.11.9

langchain==0.1.19
langchain-community==0.0.38
langchain-core==0.1.52
langchain-experimental==0.0.58
langchain-text-splitters==0.0.1
pydantic==2.7.1
pydantic_core==2.18.2

@keceli

keceli commented May 11, 2024

I do not get the NotImplementedError, but the example doesn't run for me either.
I have the latest versions of the langchain* packages:

langchain                                0.1.20
langchain-chroma                         0.1.0
langchain-community                      0.0.38
langchain-core                           0.1.52
langchain-experimental                   0.0.58
langchain-openai                         0.1.6
langchain-text-splitters                 0.0.1
langchainhub                             0.1.15
langgraph                                0.0.48
langsmith                                0.1.56

My problem appears after I invoke the model as in the example here:

from langchain_core.messages import HumanMessage

model.invoke("what is the weather in Boston?")

The cell (and my GPU) keeps running without producing any output; normally when I run llama3 it only takes a couple of seconds, so I think there is a problem with the example. Another odd thing is that the example imports HumanMessage but never uses it. I tried model.invoke with a list of SystemMessage and HumanMessage, as shown in other examples, but got the same GPU problem.
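For reference, the call pattern I mean is roughly the following (the weather-function schema is paraphrased from the docs page, so details may differ):

from langchain_core.messages import HumanMessage, SystemMessage

# Weather function schema, paraphrased from the docs example
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. Boston, MA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)

# Hangs the same way with a plain string or with message objects
model.invoke(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="what is the weather in Boston?"),
    ]
)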

I'd appreciate any help to get this example working. Thank you.

@maxschulz-COL
Author

What worked for me is upgrading to langchain-experimental==0.0.58
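i.e., in a pip-managed environment something like:

pip install -U langchain-experimental==0.0.58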
