
BooleanOutputParser expected output value error #11408

Closed
1 of 14 tasks
ankur287 opened this issue Oct 4, 2023 · 6 comments · Fixed by #20064
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed

Comments

ankur287 commented Oct 4, 2023

System Info

Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:

ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about the problem in the.

How do I resolve this error?

Langchain version: 0.0.308

Who can help?

@agola11

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter

llm = SageMakerEndpointModel
_filter = LLMChainFilter.from_llm(llm)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=faiss_retriever)

compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown?")

Expected behavior

Get filtered docs

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Oct 4, 2023
dosubot bot commented Oct 4, 2023

🤖

Hello,

The error you're encountering is due to the output of your language model (LLM) not matching the expected values "YES" or "NO" that the BooleanOutputParser class is designed to handle.

The BooleanOutputParser class in the LangChain codebase is used to parse the output of a language model call to a boolean value. It takes the output of a language model, represented as a string, and converts it into a boolean value. The class has two string attributes, true_val and false_val, which represent the string values that should be parsed as True and False respectively. By default, these are set to "YES" and "NO".

Here's the relevant code that's causing the error:

cleaned_text = text.strip()
if cleaned_text.upper() not in (self.true_val.upper(), self.false_val.upper()):
    raise ValueError(
        f"BooleanOutputParser expected output value to either be "
        f"{self.true_val} or {self.false_val}. Received {cleaned_text}."
    )

This code is checking if the cleaned, uppercased output of your LLM (cleaned_text.upper()) is either "YES" or "NO". If it's not, it raises a ValueError. So, you need to ensure that your LLM's output matches these expectations.
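To make the failure concrete, here is a standalone sketch of that strict check (not the actual LangChain class), showing why a verbose answer trips it:

```python
def strict_parse(text: str, true_val: str = "YES", false_val: str = "NO") -> bool:
    # Mirrors the strict comparison above: the whole (stripped) output
    # must equal one of the two expected tokens, case-insensitively.
    cleaned = text.strip()
    if cleaned.upper() not in (true_val.upper(), false_val.upper()):
        raise ValueError(
            f"BooleanOutputParser expected output value to either be "
            f"{true_val} or {false_val}. Received {cleaned}."
        )
    return cleaned.upper() == true_val.upper()

strict_parse("yes")                                 # -> True
# strict_parse("Yes, the context is relevant ...")  # raises ValueError
```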

To resolve this issue, you need to ensure that the output from your LLM is either "YES" or "NO". If your LLM is producing different outputs, you might need to adjust your LLM or create a custom output parser that can handle the specific outputs of your LLM.

In the context shared, the LLMChainFilter class uses an LLM to filter documents based on their relevance to a query. The from_llm class method creates an LLMChainFilter from a language model. If no prompt is provided, it uses a default prompt that expects a boolean output from the LLM. This is where the BooleanOutputParser is used, and it expects the output to be either "YES" or "NO".
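One way to handle verbose models, as suggested above, is a custom parser. A minimal standalone sketch (a hypothetical class, not LangChain's API) that looks for the tokens anywhere in the output instead of requiring an exact match:

```python
class LenientBooleanParser:
    """Accepts any output containing exactly one of the expected tokens."""

    def __init__(self, true_val: str = "YES", false_val: str = "NO") -> None:
        self.true_val = true_val
        self.false_val = false_val

    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper()
        has_true = self.true_val.upper() in cleaned
        has_false = self.false_val.upper() in cleaned
        if has_true != has_false:  # exactly one token present
            return has_true
        raise ValueError(
            f"Expected output to contain {self.true_val} or "
            f"{self.false_val}. Received: {text!r}"
        )
```

Note that naive substring matching still misfires on sub-words ("NOT" and "NOW" both contain "NO"), which is the problem the eventual fix in #20064 addresses.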

I hope this helps! If you have any other questions, feel free to ask.

Best,
Dosu


sudranga (Contributor) commented Oct 4, 2023

@ankur287 Looks like the output parser doesn't retry when the LLM output isn't as expected. We should make it more robust by retrying. I can try fixing this issue if that's ok with you.

sudranga (Contributor) commented Oct 5, 2023

@agola11 Can you please assign this issue to me? I'm unable to do it myself.

ankur287 (Author) commented Oct 6, 2023

I fixed the issue temporarily by adding a rule: if "YES" appears anywhere in the output, just return "YES". However, this needs a proper fix upstream, since we cannot expect the LLM to always answer with a bare yes or no. In my case the model said, 'Yes answer is available in context'.
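That temporary rule can be sketched as a tiny pre-parsing step (a workaround, not a general fix — it would misread any negative answer that happens to mention "yes"):

```python
def normalize_verdict(raw: str) -> str:
    # Temporary workaround: collapse anything mentioning YES to a bare
    # "YES" before it reaches BooleanOutputParser; everything else
    # becomes "NO".
    return "YES" if "YES" in raw.strip().upper() else "NO"

normalize_verdict("Yes, answer is available in context")  # -> "YES"
```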

davidgxue commented Jan 4, 2024

@ankur287 Do you mind posting how you were able to get around this issue in detail since LangChain itself hasn't really fixed this?

If not is there an issue tracking this problem? I have found a quick workaround by implementing my own boolean output parser to default to YES along with checking if YES/NO is in the output instead of strict matching. I am happy to make a PR to address this problem.
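The workaround described here — substring matching with a permissive default — might look like this sketch (hypothetical, not a LangChain API):

```python
def parse_default_yes(text: str) -> bool:
    # Substring check instead of strict equality; anything that does not
    # clearly say NO defaults to YES (i.e., keep the document).
    cleaned = text.strip().upper()
    if "NO" in cleaned and "YES" not in cleaned:
        return False
    return True
```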

ankur287 (Author) commented Jan 4, 2024

I posted above how I fixed it; see my last comment.

baskaryan pushed a commit that referenced this issue Feb 26, 2024
…responses (#17810)

- **Description:** I encountered this error when I tried to use
LLMChainFilter. Even if the message slightly differs, like `Not relevant
(NO)` this results in an error. It has been reported already here:
https://github.com/langchain-ai/langchain/issues/. This change hopefully
makes it more robust.
- **Issue:**  #11408 
- **Dependencies:** No
- **Twitter handle:** dokatox
joakimeriksson pushed a commit to joakimeriksson/langchain that referenced this issue Feb 26, 2024
al1p pushed a commit to al1p/langchain that referenced this issue Feb 27, 2024
haydeniw pushed a commit to haydeniw/langchain that referenced this issue Feb 27, 2024
gkorland pushed a commit to FalkorDB/langchain that referenced this issue Mar 30, 2024
@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 4, 2024
casperdcl added a commit to casperdcl/langchain that referenced this issue Apr 5, 2024
casperdcl added a commit to casperdcl/langchain that referenced this issue Apr 5, 2024
eyurtsev added a commit that referenced this issue Apr 9, 2024
…#20064)

- **Description**: fixes BooleanOutputParser detecting sub-words ("NOW
this is likely (YES)" -> `True`, not `AmbiguousError`)
- **Issue(s)**: fixes #11408 (follow-up to #17810)
- **Dependencies**: None
- **GitHub handle**: @casperdcl

<!-- if unreviewed after a few days, @-mention one of baskaryan, efriis,
eyurtsev, hwchase17 -->

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
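The sub-word behaviour the fix describes can be illustrated with word-boundary matching (a standalone sketch, not the merged implementation):

```python
import re

def parse_boolean(text: str, true_val: str = "YES", false_val: str = "NO") -> bool:
    # \b anchors make sub-words stop counting: "NOW" is no longer a hit
    # for "NO", and "(YES)" still matches because parens are non-word chars.
    truthy = re.findall(rf"\b{re.escape(true_val)}\b", text, flags=re.IGNORECASE)
    falsy = re.findall(rf"\b{re.escape(false_val)}\b", text, flags=re.IGNORECASE)
    if bool(truthy) != bool(falsy):  # exactly one token present
        return bool(truthy)
    raise ValueError(f"Ambiguous boolean response. Received: {text!r}")

parse_boolean("NOW this is likely (YES)")  # -> True, not an ambiguity error
```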
junkeon pushed a commit to UpstageAI/langchain that referenced this issue Apr 16, 2024
hinthornw pushed a commit that referenced this issue Apr 26, 2024