AttributeError: 'NoneType' object has no attribute 'llama_batch_free' #660
I've noticed this as well. It seems to be an issue with how certain objects are garbage collected at system exit. I think it happens when llama_cpp gets garbage collected before the actual model does -- does adding …
I think the underlying issue is a line in llama_cpp that calls getattr on the native library object during cleanup. Python gives no guarantees about the order in which objects are destructed at process shutdown, and as it destructs them it sets existing references to None. So if the lib object happens to be destructed first, the cleanup code ends up calling getattr(None, "llama_batch_free"), and the error that call raises matches the one observed in this issue: AttributeError: 'NoneType' object has no attribute 'llama_batch_free'. I haven't verified this yet, but I will submit a PR to llama-cpp if it checks out.
@Fahmie23, what version do you get when you do:
I know I've seen an exception similar to what you reported on an older version of llama_cpp, but I'm not getting it to repro now on newer versions. I would expect this error to occur fairly randomly, though, so I can't be sure whether that's because it's been fixed or because I'm just not happening to see it due to randomness.
…ng process shutdown (#734) llama_batch_free is failing during python shutdown when the interpreter can set objects to None non-deterministically. I believe Guidance is properly checking for None in the objects that can be accessed from Guidance, but llama-cpp-python is not, and the native library object in llama-cpp-python is being set to None before it can be called. Since we're only dealing with memory deallocation here, and since the process is being killed anyway, we can solve this on the Guidance side by not calling llama_batch_free during process termination.
I believe PR #734 should resolve this issue, but I wasn't getting the error in the first place, so I can't confirm. Closing this issue for now, but if anyone continues to see it in the next release, please re-open this issue.
This is my code:
This is the error that occurs when I try to execute the file: