
model fails to load when verbose=False #796

Closed
amaiya opened this issue Oct 6, 2023 · 0 comments
Labels
bug Something isn't working

Comments

amaiya commented Oct 6, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

When supplying verbose=False to Llama, the model should load as expected.

Current Behavior

When verbose=False, the model fails to load:

from llama_cpp import Llama
llm = Llama(model_path='/tmp/model.gguf', verbose=False)
# ERROR:
# /usr/local/lib/python3.10/dist-packages/llama_cpp/utils.py in __enter__(self)
#      9         self.errnull_file = open(os.devnull, "w")
#     10 
#---> 11         self.old_stdout_fileno_undup = sys.stdout.fileno()
#     12         self.old_stderr_fileno_undup = sys.stderr.fileno()
#     13 

# UnsupportedOperation: fileno
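The failure can be reproduced outside of llama-cpp-python. In Colab (and some Windows shells), `sys.stdout` is replaced with a stream object that has no OS-level file descriptor, so calling `.fileno()` on it raises `io.UnsupportedOperation`. A minimal sketch using `io.StringIO`, which behaves the same way:

```python
import io

# io.StringIO, like Colab's replacement stdout, is a pure in-memory
# stream with no underlying OS file descriptor.
fake_stdout = io.StringIO()
try:
    fake_stdout.fileno()
except io.UnsupportedOperation as e:
    print(f"fileno() raised: {e}")
```

This is the same exception the `__enter__` method above hits when it tries to duplicate the stdout/stderr file descriptors.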

Environment and Context

The problem occurs on Google Colab, and a user has reported it on Windows 11 as well. It does not seem to occur on Ubuntu (or Ubuntu under WSL).

A Google Colab notebook reproducing the issue is available here.

Failure Information (for bugs)

# /usr/local/lib/python3.10/dist-packages/llama_cpp/utils.py in __enter__(self)
#      9         self.errnull_file = open(os.devnull, "w")
#     10 
#---> 11         self.old_stdout_fileno_undup = sys.stdout.fileno()
#     12         self.old_stderr_fileno_undup = sys.stderr.fileno()
#     13 

Steps to Reproduce

A Google Colab notebook reproducing the issue is available here.
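One possible fix, sketched below, is to guard the output-suppressing context manager so it becomes a no-op when the streams lack real file descriptors. This is a hypothetical rewrite (the function name and structure are assumptions based on the traceback, not the library's actual code):

```python
import contextlib
import io
import os
import sys

@contextlib.contextmanager
def suppress_stdout_stderr():
    # Guarded variant: only redirect at the fd level when stdout/stderr
    # actually expose file descriptors.
    try:
        old_stdout_fd = sys.stdout.fileno()
        old_stderr_fd = sys.stderr.fileno()
    except (AttributeError, io.UnsupportedOperation):
        # sys.stdout/stderr were replaced (Colab, some Windows shells);
        # there is nothing at the fd level to silence, so do nothing.
        yield
        return
    with open(os.devnull, "w") as devnull:
        saved_stdout = os.dup(old_stdout_fd)
        saved_stderr = os.dup(old_stderr_fd)
        try:
            os.dup2(devnull.fileno(), old_stdout_fd)
            os.dup2(devnull.fileno(), old_stderr_fd)
            yield
        finally:
            # Restore the original descriptors even if the body raised.
            os.dup2(saved_stdout, old_stdout_fd)
            os.dup2(saved_stderr, old_stderr_fd)
            os.close(saved_stdout)
            os.close(saved_stderr)
```

With this guard in place, `verbose=False` would simply skip output suppression in environments like Colab instead of crashing during model load.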

antoine-lizee pushed a commit to antoine-lizee/llama-cpp-python that referenced this issue Oct 30, 2023
Otherwise observing this in the interactive mode:
/usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
@abetlen abetlen added the bug Something isn't working label Nov 23, 2023