
AssertionError when using AutoGPT with Huggingface #5365

Closed
alienhd opened this issue May 28, 2023 · 4 comments
alienhd commented May 28, 2023

This code:

```python
from langchain.experimental import AutoGPT
from langchain import HuggingFaceHub

repo_id = "google/flan-t5-xl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options

agent = AutoGPT.from_llm_and_tools(
    ai_name="Tom",
    ai_role="Assistant",
    tools=tools,
    llm=HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64}),
    memory=vectorstore.as_retriever(),
)
agent.chain.verbose = True
agent.run(["write a weather report for SF today"])
```

outputs the error:

```
AssertionError                            Traceback (most recent call last)
Cell In[21], line 1
----> 1 agent.run(["write a weather report for SF today"])

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\agent.py:91, in AutoGPT.run(self, goals)
     88 loop_count += 1
     90 # Send message to AI, get response
---> 91 assistant_reply = self.chain.run(
     92     goals=goals,
     93     messages=self.full_message_history,
     94     memory=self.memory,
     95     user_input=user_input,
     96 )
     98 # Print Assistant thoughts
     99 print(assistant_reply)

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:239, in Chain.run(self, callbacks, *args, **kwargs)
    236     return self(args[0], callbacks=callbacks)[self.output_keys[0]]
    238 if kwargs and not args:
--> 239     return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
    241 if not kwargs and not args:
    242     raise ValueError(
    243         "run supported with either positional arguments or keyword arguments,"
    244         " but none were provided."
    245     )

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:69, in LLMChain._call(self, inputs, run_manager)
     64 def _call(
     65     self,
     66     inputs: Dict[str, Any],
     67     run_manager: Optional[CallbackManagerForChainRun] = None,
     68 ) -> Dict[str, str]:
---> 69     response = self.generate([inputs], run_manager=run_manager)
     70     return self.create_outputs(response)[0]

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:78, in LLMChain.generate(self, input_list, run_manager)
     72 def generate(
     73     self,
     74     input_list: List[Dict[str, Any]],
     75     run_manager: Optional[CallbackManagerForChainRun] = None,
     76 ) -> LLMResult:
     77     """Generate LLM result from inputs."""
---> 78     prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
     79     return self.llm.generate_prompt(
     80         prompts, stop, callbacks=run_manager.get_child() if run_manager else None
     81     )

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:106, in LLMChain.prep_prompts(self, input_list, run_manager)
    104 for inputs in input_list:
    105     selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
--> 106     prompt = self.prompt.format_prompt(**selected_inputs)
    107     _colored_text = get_colored_text(prompt.to_string(), "green")
    108     _text = "Prompt after formatting:\n" + _colored_text

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\prompts\chat.py:144, in BaseChatPromptTemplate.format_prompt(self, **kwargs)
    143 def format_prompt(self, **kwargs: Any) -> PromptValue:
--> 144     messages = self.format_messages(**kwargs)
    145     return ChatPromptValue(messages=messages)

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\prompt.py:51, in AutoGPTPrompt.format_messages(self, **kwargs)
     49 memory: VectorStoreRetriever = kwargs["memory"]
     50 previous_messages = kwargs["messages"]
---> 51 relevant_docs = memory.get_relevant_documents(str(previous_messages[-10:]))
     52 relevant_memory = [d.page_content for d in relevant_docs]
     53 relevant_memory_tokens = sum(
     54     [self.token_counter(doc) for doc in relevant_memory]
     55 )

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\base.py:377, in VectorStoreRetriever.get_relevant_documents(self, query)
    375 def get_relevant_documents(self, query: str) -> List[Document]:
    376     if self.search_type == "similarity":
--> 377         docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
    378     elif self.search_type == "similarity_score_threshold":
    379         docs_and_similarities = (
    380             self.vectorstore.similarity_search_with_relevance_scores(
    381                 query, **self.search_kwargs
    382             )
    383         )

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:255, in FAISS.similarity_search(self, query, k, **kwargs)
    243 def similarity_search(
    244     self, query: str, k: int = 4, **kwargs: Any
    245 ) -> List[Document]:
    246     """Return docs most similar to query.
    247
    248     Args:
    (...)
    253         List of Documents most similar to the query.
    254     """
--> 255     docs_and_scores = self.similarity_search_with_score(query, k)
    256     return [doc for doc, _ in docs_and_scores]

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:225, in FAISS.similarity_search_with_score(self, query, k)
    215 """Return docs most similar to query.
    216
    217 Args:
    (...)
    222     List of Documents most similar to the query and score for each
    223 """
    224 embedding = self.embedding_function(query)
--> 225 docs = self.similarity_search_with_score_by_vector(embedding, k)
    226 return docs

File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:199, in FAISS.similarity_search_with_score_by_vector(self, embedding, k)
    197 if self._normalize_L2:
    198     faiss.normalize_L2(vector)
--> 199 scores, indices = self.index.search(vector, k)
    200 docs = []
    201 for j, i in enumerate(indices[0]):

File ~\anaconda3\envs\langchain\Lib\site-packages\faiss\class_wrappers.py:329, in handle_Index.<locals>.replacement_search(self, x, k, params, D, I)
    327 n, d = x.shape
    328 x = np.ascontiguousarray(x, dtype='float32')
--> 329 assert d == self.d
    331 assert k > 0
    333 if D is None:

AssertionError
```

How can I resolve this behaviour?
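For context: the `assert d == self.d` at the bottom of the traceback is FAISS checking that the query vector has the same dimensionality as the index was built with. The failure can be reproduced in isolation with a toy stand-in class (a sketch, not the real faiss API; 1536 and 768 are illustrative values, e.g. an index sized for OpenAI embeddings queried with a 768-dim HuggingFace embedding):

```python
import numpy as np

class FlatL2Index:
    """Toy stand-in for faiss.IndexFlatL2, just enough to show the
    dimensionality check that fires in faiss/class_wrappers.py."""

    def __init__(self, d):
        self.d = d  # dimensionality the index was built for
        self.vectors = np.empty((0, d), dtype="float32")

    def search(self, x, k):
        n, d = x.shape
        assert d == self.d  # the assertion from the traceback above
        # Brute-force squared L2 distances, then take the k nearest
        dists = ((self.vectors[None, :, :] - x[:, None, :]) ** 2).sum(-1)
        idx = np.argsort(dists, axis=1)[:, :k]
        return np.take_along_axis(dists, idx, axis=1), idx

index = FlatL2Index(1536)                          # index sized for 1536-dim vectors
query = np.random.rand(1, 768).astype("float32")   # but the query is 768-dim
try:
    index.search(query, 4)
except AssertionError:
    print("dimension mismatch: query is 768-d but index expects 1536-d")
```

In the original code above, the FAISS index inside `vectorstore` was sized for a different embedding model than the one actually used to embed queries, so the two dimensionalities disagree.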

@bbetters

Having the same error

@AhmedSalem2

I encountered a similar issue. I believe the problem lies in the dimensionality of the FAISS index. To resolve this (for me at least), you need to set embedding_size to the dimension your embedding model actually produces. If you're using HuggingFaceEmbeddings(), the correct value is 768. Here's the updated code:

```python
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS

embedding_size = 768  # output dimension of the default HuggingFaceEmbeddings() model
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
```

I hope this helps you!
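Rather than hard-coding 768, one robust pattern is to measure the embedding model's output once and size the index from that, so the index can never disagree with the embedder. A minimal sketch (the helper `embedding_dim` and the toy embedder are hypothetical, not langchain API):

```python
def embedding_dim(embed_query) -> int:
    """Return the output dimensionality of an embedding callable,
    e.g. HuggingFaceEmbeddings().embed_query."""
    return len(embed_query("dimension probe"))

# Toy embedder standing in for a real 768-dim model:
toy_embed = lambda text: [0.0] * 768
size = embedding_dim(toy_embed)
print(size)  # 768
```

With a real model you would then build the index as `faiss.IndexFlatL2(embedding_dim(embeddings_model.embed_query))`.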

@abhishekbpandit

I am facing this error when following the official langchain documentation (https://python.langchain.com/docs/use_cases/autonomous_agents/autogpt), as well as in the case above. Has anybody found a reliable solution for this?

alienhd closed this as completed Aug 4, 2023
@safimuhammad

@AhmedSalem2 you're a lifesaver.
