
AttributeError: 'NoneType' object has no attribute 'nodes' #20

Open
beatG123 opened this issue Apr 15, 2024 · 7 comments

Comments

@beatG123

There are some problems when using process_response and convert_to_graph_documents:
AttributeError: 'NoneType' object has no attribute 'nodes'
in
```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_document = llm_transformer.process_response(document)
```
and
```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")  # gpt-4-0125-preview occasionally has issues
llm_transformer = LLMGraphTransformer(llm=llm)
document = Document(page_content="Elon Musk is suing OpenAI")
print(document)
graph_documents = llm_transformer.convert_to_graph_documents([document])
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True,
)
```
Can anyone help me?

@beatG123
Author

beatG123 commented Apr 15, 2024

blogs/llm/enhancing_rag_with_graph.ipynb

I can use the convert_to_graph_documents method without any issues, but when I call the process_response method, it raises an error. The strange thing is that convert_to_graph_documents actually calls process_response internally. What could be the reason for this?
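
For reference, a minimal sketch of the two call paths with a guard on the single-document result. This assumes the langchain_experimental LLMGraphTransformer, where convert_to_graph_documents iterates over the documents and calls process_response for each one; the guard is just defensive, since the reported AttributeError suggests a None is surfacing somewhere along that path:

```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo-0125")
llm_transformer = LLMGraphTransformer(llm=llm)

document = Document(page_content="Elon Musk is suing OpenAI")

# Batch path: returns a list of GraphDocument objects, one per input document.
graph_documents = llm_transformer.convert_to_graph_documents([document])
print(graph_documents[0].nodes)
print(graph_documents[0].relationships)

# Single-document path: guard the result before touching .nodes, since the
# error above suggests a None can appear when the LLM output does not parse
# into the expected nodes/relationships schema.
graph_document = llm_transformer.process_response(document)
if graph_document is not None:
    print(graph_document.nodes)
    print(graph_document.relationships)
```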

@danielosagie

danielosagie commented May 16, 2024

Not a solution, but I was getting the same error. I tried this workaround:

```python
from langchain_core.documents import Document

documents = [Document(page_content=f"{text}", metadata={"title": f"{file_path}"})]

print("")
print("final document")
print(documents[0].page_content)

len(text)
```

but now I am getting a new error:
AttributeError: 'tuple' object has no attribute 'page_content'

Hoping to hear back soon on what to do here
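
For reference, a small sanity check before handing the list to convert_to_graph_documents, assuming the tuple error means something upstream is producing (text, metadata) tuples instead of Document objects:

```python
from langchain_core.documents import Document

# Fail fast if anything in the list is not a Document (for example a
# (text, metadata) tuple coming out of a loader or a zip() upstream).
for i, doc in enumerate(documents):
    if not isinstance(doc, Document):
        raise TypeError(f"documents[{i}] is {type(doc).__name__}, expected Document")

# If the source really is (text, metadata) pairs, wrap them explicitly:
# documents = [Document(page_content=text, metadata=meta) for text, meta in pairs]
```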

@tomasonjo
Owner

@danielosagie I need more information and code

@danielosagie

Here is the notebook. I am trying to use a Hugging Face inference API for llama3 on a local server, and I had to swap out the PDF loader because it wasn't working for me, but everything else is the same as you wrote it. At In [136] I finally pass the documents in to get converted, but it says there isn't anything in the object. I tried breaking it down and using langchain's Document class, but it hasn't worked. I feel like I am so close but just don't know what to do; any advice would be appreciated.
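
For reference, the loading step I swapped in looks roughly like this; PyPDFLoader is just one example, the only requirement being that it ends up as a list of Document objects for the rest of the notebook:

```python
from langchain_community.document_loaders import PyPDFLoader

# Example replacement loader; the file path is a placeholder.
loader = PyPDFLoader("my_document.pdf")
documents = loader.load()

print(len(documents), "pages loaded")
print(documents[0].page_content[:200])
```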

@tomasonjo
Owner

tomasonjo commented May 16, 2024 via email

@danielosagie

Hey there, I tried increasing the limit and eventually got the same error.

I noticed that it only works on the very beginning of the document (maybe the first chunk) and then just spits out the template until it reaches the token limit.

Your example in the enhancing-rag notebook was fairly short, so I am wondering how you would handle multiple documents, or even just one really long document with multiple chunks?
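
For what it's worth, here is roughly the chunked version I had in mind, assuming a token-based splitter in front of convert_to_graph_documents and reusing the llm_transformer and graph objects from earlier in the notebook (the chunk sizes are arbitrary):

```python
from langchain_text_splitters import TokenTextSplitter

# Split long documents into LLM-sized chunks, then convert the chunks in one
# batch; llm_transformer, graph, and documents come from earlier cells.
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
chunks = text_splitter.split_documents(documents)

graph_documents = llm_transformer.convert_to_graph_documents(chunks)
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True,
)
```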

@tomasonjo
Owner

tomasonjo commented May 17, 2024 via email
