
Reask doesn't work on Langchain integration #155

Closed · Chromadream opened this issue May 6, 2023 · 4 comments · Fixed by langchain-ai/langchain#6089

@Chromadream

Hello,

We are trying to integrate Guardrails with LangChain, but ran into an issue whenever a reask is required: the parser raises

TypeError: 'NoneType' object is not callable

Here is a minimal reproducible snippet:

from langchain.llms import OpenAI
from langchain.output_parsers import GuardrailsOutputParser
from langchain.prompts import PromptTemplate

def main():
    llm = OpenAI()
    rail_spec = """
<rail version="0.1">

<output>
<object name="patient_info">
    <string name="gender" description="Patient's gender" />
    <integer name="age" format="valid-range: 0 100"/>

    <list name="symptoms" description="Symptoms that the patient is currently experiencing. Each symptom should be classified into a separate item in the list.">
        <object>
            <string name="symptom" description="Symptom that a patient is experiencing" />
            <string name="affected area" description="What part of the body the symptom is affecting"
                format="valid-choices: {['head', 'neck', 'chest']}"
                on-fail-valid-choices="reask"
            />
        </object>
    </list>
    <list name="current_meds" description="Medications the patient is currently taking and their response">
        <object>
            <string name="medication" description="Name of the medication the patient is taking" />
            <string name="response" description="How the patient is responding to the medication" />
        </object>
    </list>
</object>
</output>

<prompt>

Given the following doctor's notes about a patient, please extract a dictionary that contains the patient's information.

{{doctors_notes}}

@complete_json_suffix_v2
</prompt>
</rail>
"""
    output_parser = GuardrailsOutputParser.from_rail_string(rail_spec)
    prompt = PromptTemplate(
        template=output_parser.guard.base_prompt,
        input_variables=output_parser.guard.prompt.variable_names,
    )

    doctors_notes = """
49 y/o Male with chronic macular rash to face & hair, worse in beard, eyebrows & nares.
Itchy, flaky, slightly scaly. Moderate response to OTC steroid cream
"""
    output = llm(prompt.format_prompt(doctors_notes=doctors_notes).to_string())
    print(output_parser.parse(output))
        
if __name__ == "__main__":
    main()

Any advice would be greatly appreciated.

@fuergaosi233

same problem

@irgolic
Contributor

irgolic commented Jun 12, 2023

This happens because https://github.com/hwchase17/langchain/blob/master/langchain/output_parsers/rail_parser.py#L43 calls guard.parse without an api.
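
In other words, the parser does roughly the following (a rough paraphrase of the behaviour described above, not the actual LangChain source), so when validation fails and a reask is needed, guardrails has no LLM callable to re-invoke:

# Rough paraphrase of the behaviour described above, not the actual source.
class RailParserSketch:
    def __init__(self, guard):
        self.guard = guard

    def parse(self, text: str) -> dict:
        # guard.parse is invoked with no LLM api, so when guardrails decides
        # to reask it has nothing to call back into and ends up calling None:
        # TypeError: 'NoneType' object is not callable
        return self.guard.parse(text)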

@irgolic
Contributor

irgolic commented Jun 13, 2023

@Chromadream @fuergaosi233 Could you give langchain-ai/langchain#6089 a try? Pass openai.ChatCompletion.create when you instantiate the parser, and reasks should work now.
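
A minimal sketch of what that looks like, assuming the api keyword described in that PR (the exact argument name may differ; rail_spec is the rail string from the snippet above):

import openai
from langchain.output_parsers import GuardrailsOutputParser

# Pass the LLM callable so guardrails can re-invoke the model on a reask.
# The `api` keyword below is assumed from langchain-ai/langchain#6089.
output_parser = GuardrailsOutputParser.from_rail_string(
    rail_spec,                          # same rail spec string as in the snippet above
    api=openai.ChatCompletion.create,   # callable guardrails uses when it needs to reask
)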

hwchase17 pushed a commit to langchain-ai/langchain that referenced this issue Jun 18, 2023

Fixes guardrails-ai/guardrails#155 

Enables guardrails reasking by specifying an LLM api in the output
parser.
@irgolic
Contributor

irgolic commented Jun 26, 2023

Closing due to inactivity.

irgolic closed this as completed Jun 26, 2023
kacperlukawski pushed a commit to kacperlukawski/langchain that referenced this issue Jun 29, 2023