
Example from LangChain docs produces exceptions #1124

Open
mdziezyc opened this issue Oct 4, 2023 · 3 comments
Labels
bug Something isn't working

Comments


mdziezyc commented Oct 4, 2023

Describe the bug

I get exceptions when running code from https://python.langchain.com/docs/integrations/providers/clearml_tracking

To reproduce

Run this code:

from langchain import OpenAI
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler

# Setup and use the ClearML Callback
clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    tags=["test"],
    # Change the following parameters based on the amount of detail you want tracked
    visualize=True,
    complexity_metrics=True,
    stream_logs=True,
)
callbacks = [StdOutCallbackHandler(), clearml_callback]
# Get the OpenAI model ready to go
llm = OpenAI(temperature=0, callbacks=callbacks)

# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
# After every generation run, use flush to make sure all the metrics
# prompts and other output are properly saved separately
clearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential")

and (I removed one tool from the docs example so that I don't need additional API access):

from langchain import OpenAI
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler

# Setup and use the ClearML Callback
clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    tags=["test"],
    # Change the following parameters based on the amount of detail you want tracked
    visualize=True,
    complexity_metrics=True,
    stream_logs=True,
)
callbacks = [StdOutCallbackHandler(), clearml_callback]
# Get the OpenAI model ready to go
llm = OpenAI(temperature=0, callbacks=callbacks)

# SCENARIO 2 - Agent with Tools
tools = load_tools(["llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=callbacks,
)
agent.run("What is 123124134123 + 422095834?")
clearml_callback.flush_tracker(
    langchain_asset=agent, name="Agent with Tools", finish=True
)

Actual behaviour

First code snippet output

[...]  UserWarning: Importing OpenAI from langchain root module is no longer supported.
  warnings.warn(
ClearML Task: overwriting (reusing) task id=[...]
2023-10-04 12:49:29,380 - clearml.Task - INFO - No repository found, storing script code instead
ClearML results page: [...] 
The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 3, 'starts': 3, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 4, 'starts': 4, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 4, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 5, 'starts': 5, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 5, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 6, 'starts': 6, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
[...] : UserWarning: [W006] No entities to visualize found in Doc object. If this is surprising to you, make sure the Doc was processed using a model that supports named entity recognition, and check the `doc.ents` property manually if necessary.
  warnings.warn(Warnings.W006)
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 7, 'starts': 6, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': 0.1, 'automated_readability_index': 1.4, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.6, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 60.06, 'crawford': -0.2, 'gulpease_index': 77.46153846153847, 'osman': 116.91}
{'action': 'on_llm_end', 'model_name': 'text-davinci-003', 'step': 8, 'starts': 6, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 92.12, 'flesch_kincaid_grade': 3.6, 'smog_index': 0.0, 'coleman_liau_index': 5.5, 'automated_readability_index': 5.7, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 121.6, 'szigriszt_pazos': 117.16, 'gutierrez_polini': 51.1, 'crawford': 1.0, 'gulpease_index': 68.23076923076923, 'osman': 100.17}
{'action': 'on_llm_end', 'model_name': 'text-davinci-003', 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': 0.1, 'automated_readability_index': 1.4, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.6, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 60.06, 'crawford': -0.2, 'gulpease_index': 77.46153846153847, 'osman': 116.91}
{'action': 'on_llm_end', 'model_name': 'text-davinci-003', 'step': 10, 'starts': 6, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 4, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 92.12, 'flesch_kincaid_grade': 3.6, 'smog_index': 0.0, 'coleman_liau_index': 5.5, 'automated_readability_index': 5.7, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 121.6, 'szigriszt_pazos': 117.16, 'gutierrez_polini': 51.1, 'crawford': 1.0, 'gulpease_index': 68.23076923076923, 'osman': 100.17}
{'action': 'on_llm_end', 'model_name': 'text-davinci-003', 'step': 11, 'starts': 6, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 5, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': 0.1, 'automated_readability_index': 1.4, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.6, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 60.06, 'crawford': -0.2, 'gulpease_index': 77.46153846153847, 'osman': 116.91}
{'action': 'on_llm_end', 'model_name': 'text-davinci-003', 'step': 12, 'starts': 6, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 6, 'llm_ends': 6, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 92.12, 'flesch_kincaid_grade': 3.6, 'smog_index': 0.0, 'coleman_liau_index': 5.5, 'automated_readability_index': 5.7, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 121.6, 'szigriszt_pazos': 117.16, 'gutierrez_polini': 51.1, 'crawford': 1.0, 'gulpease_index': 68.23076923076923, 'osman': 100.17}
Traceback (most recent call last):
  File "[...]", line 25, in <module>
    clearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential")
  File "[...]python3.10/site-packages/langchain/callbacks/clearml_callback.py", line 462, in flush_tracker
    session_analysis_df = self._create_session_analysis_df()
  File "[...]python3.10/site-packages/langchain/callbacks/clearml_callback.py", line 382, in _create_session_analysis_df
    on_llm_start_records_df[["step", "prompts", "name"]]
  File "[...]python3.10/site-packages/pandas/core/frame.py", line 3767, in __getitem__
    indexer = self.columns._get_indexer_strict(key, "columns")[1]
  File "[...]/python3.10/site-packages/pandas/core/indexes/base.py", line 5877, in _get_indexer_strict
    self._raise_if_missing(keyarr, indexer, axis_name)
  File "[...]/python3.10/site-packages/pandas/core/indexes/base.py", line 5941, in _raise_if_missing
    raise KeyError(f"{not_found} not in index")
KeyError: "['name'] not in index"
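For context (my read of the traceback, not anything stated in the ClearML or LangChain docs): pandas raises this KeyError whenever a list-based column selection includes a label the DataFrame does not have, and the on_llm_start records above evidently never gain a 'name' column. A minimal sketch with made-up records:

```python
import pandas as pd

# Hypothetical records shaped like the callback's on_llm_start rows;
# note there is no "name" key, only "step" and "prompts".
records = [
    {"action": "on_llm_start", "step": 1, "prompts": "Tell me a joke"},
    {"action": "on_llm_start", "step": 2, "prompts": "Tell me a poem"},
]
df = pd.DataFrame(records)

try:
    # The same selection that _create_session_analysis_df performs.
    df[["step", "prompts", "name"]]
except KeyError as err:
    # Message mentions the missing label, e.g. ['name'] not in index
    print(err)
```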

Second code snippet output

[...]: UserWarning: Importing OpenAI from langchain root module is no longer supported.
  warnings.warn(
ClearML Task: overwriting (reusing) task id=[...]
2023-10-04 12:52:18,347 - clearml.Task - INFO - No repository found, storing script code instead
ClearML results page: [...]
The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.


> Entering new AgentExecutor chain...
{'action': 'on_chain_start', 'lc': 1, 'type': 'not_implemented', 'id': ['langchain', 'agents', 'agent', 'AgentExecutor'], 'repr': 'AgentExecutor([...])', 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 0, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'input': 'What is 123124134123 + 422095834?'}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: What is 123124134123 + 422095834?\nThought:'}
ClearML Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring
{'action': 'on_llm_end', 'token_usage_completion_tokens': 24, 'token_usage_prompt_tokens': 162, 'token_usage_total_tokens': 186, 'model_name': 'text-davinci-003', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': ' I need to add two large numbers\nAction: Calculator\nAction Input: 123124134123 + 422095834', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 58.28, 'flesch_kincaid_grade': 8.4, 'smog_index': 0.0, 'coleman_liau_index': 15.3, 'automated_readability_index': 13.3, 'dale_chall_readability_score': 11.57, 'difficult_words': 4, 'linsear_write_formula': 7.0, 'gunning_fog': 8.28, 'text_standard': '7th and 8th grade', 'fernandez_huerta': 97.6, 'szigriszt_pazos': 93.2, 'gutierrez_polini': 34.69, 'crawford': 2.9, 'gulpease_index': 52.07692307692308, 'osman': 38.79}
 I need to add two large numbers
Action: Calculator
Action Input: 123124134123 + 422095834{'action': 'on_agent_action', 'tool': 'Calculator', 'tool_input': '123124134123 + 422095834', 'log': ' I need to add two large numbers\nAction: Calculator\nAction Input: 123124134123 + 422095834', 'step': 4, 'starts': 3, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 1, 'tool_ends': 0, 'agent_ends': 0}
{'action': 'on_tool_start', 'input_str': '123124134123 + 422095834', 'name': 'Calculator', 'description': 'Useful for when you need to answer questions about math.', 'step': 5, 'starts': 4, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 6, 'starts': 5, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${Question with math problem.}\n```text\n${single line mathematical expression that solves the problem}\n```\n...numexpr.evaluate(text)...\n```output\n${Output of running the code}\n```\nAnswer: ${Answer}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: 123124134123 + 422095834\n'}
{'action': 'on_llm_end', 'token_usage_completion_tokens': 35, 'token_usage_prompt_tokens': 234, 'token_usage_total_tokens': 269, 'model_name': 'text-davinci-003', 'step': 7, 'starts': 5, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0, 'text': '```text\n123124134123 + 422095834\n```\n...numexpr.evaluate("123124134123 + 422095834")...\n', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 15.64, 'flesch_kincaid_grade': 12.3, 'smog_index': 0.0, 'coleman_liau_index': 53.68, 'automated_readability_index': 60.2, 'dale_chall_readability_score': 19.67, 'difficult_words': 2, 'linsear_write_formula': 4.0, 'gunning_fog': 18.0, 'text_standard': '12th and 13th grade', 'fernandez_huerta': 69.8, 'szigriszt_pazos': 64.78, 'gutierrez_polini': -32.66, 'crawford': 3.3, 'gulpease_index': -19.0, 'osman': -167.01}

Observation: Answer: 123546229957
Thought:{'action': 'on_tool_end', 'output': 'Answer: 123546229957', 'step': 8, 'starts': 5, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0}
{'action': 'on_llm_start', 'lc': 1, 'type': 'constructor', 'id': ['langchain', 'llms', 'openai', 'OpenAI'], 'kwargs_temperature': 0.0, 'kwargs_openai_api_key_lc': 1, 'kwargs_openai_api_key_type': 'secret', 'kwargs_openai_api_key_id': ['OPENAI_API_KEY'], 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: What is 123124134123 + 422095834?\nThought: I need to add two large numbers\nAction: Calculator\nAction Input: 123124134123 + 422095834\nObservation: Answer: 123546229957\nThought:'}
{'action': 'on_llm_end', 'token_usage_completion_tokens': 16, 'token_usage_prompt_tokens': 202, 'token_usage_total_tokens': 218, 'model_name': 'text-davinci-003', 'step': 10, 'starts': 6, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'text': ' I now know the final answer\nFinal Answer: 123546229957', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 62.34, 'flesch_kincaid_grade': 6.8, 'smog_index': 0.0, 'coleman_liau_index': 10.58, 'automated_readability_index': 7.7, 'dale_chall_readability_score': 7.59, 'difficult_words': 1, 'linsear_write_formula': 3.5, 'gunning_fog': 3.6, 'text_standard': '7th and 8th grade', 'fernandez_huerta': 101.7, 'szigriszt_pazos': 100.92, 'gutierrez_polini': 42.47, 'crawford': 1.9, 'gulpease_index': 70.11111111111111, 'osman': 65.38}
 I now know the final answer
Final Answer: 123546229957
{'action': 'on_agent_finish', 'output': '123546229957', 'log': ' I now know the final answer\nFinal Answer: 123546229957', 'step': 11, 'starts': 6, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 1}

> Finished chain.
{'action': 'on_chain_end', 'outputs': '123546229957', 'step': 12, 'starts': 6, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 1, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 1}
Traceback (most recent call last):
  File "[...]", line 30, in <module>
    clearml_callback.flush_tracker(
  File "[...]/python3.10/site-packages/langchain/callbacks/clearml_callback.py", line 462, in flush_tracker
    session_analysis_df = self._create_session_analysis_df()
  File "[...]/python3.10/site-packages/langchain/callbacks/clearml_callback.py", line 382, in _create_session_analysis_df
    on_llm_start_records_df[["step", "prompts", "name"]]
  File "[...]/python3.10/site-packages/pandas/core/frame.py", line 3767, in __getitem__
    indexer = self.columns._get_indexer_strict(key, "columns")[1]
  File "[...]/python3.10/site-packages/pandas/core/indexes/base.py", line 5877, in _get_indexer_strict
    self._raise_if_missing(keyarr, indexer, axis_name)
  File "[...]python3.10/site-packages/pandas/core/indexes/base.py", line 5941, in _raise_if_missing
    raise KeyError(f"{not_found} not in index")
KeyError: "['name'] not in index"
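A general defensive pattern for this class of failure (my sketch of a workaround, not the project's actual fix) is to restrict the selection to columns the DataFrame actually has:

```python
import pandas as pd

def select_existing(df: pd.DataFrame, wanted: list[str]) -> pd.DataFrame:
    """Keep only the wanted columns that are actually present, so a
    missing label degrades gracefully instead of raising KeyError."""
    present = [col for col in wanted if col in df.columns]
    return df[present]

df = pd.DataFrame({"step": [1, 2], "prompts": ["Tell me a joke", "Tell me a poem"]})
out = select_existing(df, ["step", "prompts", "name"])  # "name" is silently skipped
print(list(out.columns))  # ['step', 'prompts']
```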

Environment

  • Server type (self hosted / app.clear.ml): clear.ml cloud
  • ClearML SDK Version: 1.13.1
  • LangChain Version: 0.0.306
  • Python Version: 3.10
  • OS (Windows / Linux / MacOS): Linux
mdziezyc added the bug label Oct 4, 2023
eugen-ajechiloae-clearml (Collaborator) commented
Hi @mdziezyc! We are looking into this problem and into #1126, and will get back to you as soon as we have a fix. Thank you for reporting!

eugen-ajechiloae-clearml (Collaborator) commented

@mdziezyc We have submitted a PR related to this issue and #1126: langchain-ai/langchain#11472

pollfly (Contributor) commented Jan 16, 2024

Hey @mdziezyc! Just letting you know that this issue has been resolved. See langchain-ai/langchain#11472. Let us know if there are any issues :)
