❌ Errors in create_base_entity_graph #437
I'm having this same issue.
Me too.
Me too.
I cloned the repo into the same directory again and got it working.
Same issue! Any solution?
Try modifying max_tokens; it worked for me (I set it to 1700 for GPT-3.5).
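For anyone trying this suggestion: the setting lives in the `llm` block of GraphRAG's `settings.yaml`. A sketch of what that might look like (model name and value are from the comment above; defaults can differ by GraphRAG version):

```yaml
llm:
  type: azure_openai_chat   # or openai_chat for the OpenAI API
  model: gpt-35-turbo       # Azure deployment name; adjust to yours
  max_tokens: 1700          # default is 4000; lowered per the suggestion above
```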
Did you use GPT-3.5 Turbo or the same model? I tried with 1200 max tokens but it's still not working.
I used GPT-3.5 Turbo from Azure.
Hi, can you please inspect any of the cache entries for entity extraction and paste a result? I suspect it is an entity extraction issue.
Result from cache/entity_extraction/chat-d87d9cc79a03b34a16a6895b3d54f53a: it says no text was provided, but I do have a text file at root/input/input.txt.
Also, regarding the max token length, which one are you referring to? Is it the one for the LLM (the default is 4000, and it differs for other tasks)?
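For anyone else digging into this: GraphRAG persists each LLM call as a small JSON file under `cache/`. A minimal helper to dump one entry (the exact JSON layout, including the `result` field, is an assumption; if your version differs, print the whole object and inspect it):

```python
import json
from pathlib import Path


def read_cache_entry(path: str) -> str:
    """Return the stored LLM response from a GraphRAG cache file.

    Assumes the file is a JSON object with a 'result' field holding
    the raw model output for that extraction call.
    """
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    return data.get("result", "")


# Example, using the path from the comment above:
# print(read_cache_entry("cache/entity_extraction/chat-d87d9cc79a03b34a16a6895b3d54f53a"))
```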
Can you provide some examples of your correct file output? Thanks.
This issue has been marked stale due to inactivity after repo maintainer or community member responses that request more information or suggest a solution. It will be closed after five additional days.
It worked fine when I used qwen2, but when I switch to another custom model, it doesn't work.
Same issue here; it failed with both gpt-4o and gpt-4o-mini.
indexing-engine.log
Hello, could you be more specific?
> On Mon, 12 Aug 2024, 16:34, LuMF wrote:
> indexing-engine.log
> <https://github.com/user-attachments/files/16578251/indexing-engine.log>
> logs.json <https://github.com/user-attachments/files/16578253/logs.json>
> Could you help me take a look at my issue? I'm using the glm4 and bge-m3 models from xinference.
> I also never found the cause of this error. It's probably related to the model being used, so I'd suggest trying a different one. I ran into a similar problem before, and it was fine after I switched models.
I've run into the same problem too; any help would be appreciated.
Any good recommendations? In my case, it runs fine if I don't modify the prompts, but once I modify them, no matter how I change them, I still hit this issue.
In your logs it's failing at
I have the same issue with ollama and qwen2.
This issue has been closed after being marked as stale for five days. Please reopen if needed. |
I have successfully replicated the result for the official demo from Get Started.
This happens when I try it on new data: I extracted the textual information from a company's PDF report and stored it in input.txt. I used UTF-8 encoding, and the document is rather short compared to the example.
This error is raised, as seen in the log:
Can anyone help me with this issue, please? I'm not very familiar with the pipeline or the technical details behind it; it's more of an exploration for me at this time. Thanks in advance!
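Since the failing cache entry earlier in the thread reported "no text provided", one sanity check worth running is whether input.txt really decodes as non-empty UTF-8. A small sketch (the path matches the layout described above; adjust to your project root):

```python
from pathlib import Path


def check_input_text(path: str) -> int:
    """Verify an indexing input file decodes as non-empty UTF-8.

    Returns the character count; raises if the file is empty or not
    valid UTF-8 (either could leave the pipeline with no text to use).
    """
    text = Path(path).read_text(encoding="utf-8")  # UnicodeDecodeError if mis-encoded
    if not text.strip():
        raise ValueError(f"{path} is empty or whitespace-only")
    return len(text)


# Example:
# print(check_input_text("input/input.txt"))
```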