
Inference with GPT-J-6B.ipynb NameError: name 'init_empty_weights' is not defined when loading model #131

Closed
splevine opened this issue Jun 22, 2022 · 6 comments

Comments

@splevine
When loading the model in Google Colab Pro for the Inference with GPT-J-6B notebook, I am able to download the model, but then receive this error:

Downloading: 100%
836/836 [00:00<00:00, 25.0kB/s]
Downloading: 100%
11.3G/11.3G [03:09<00:00, 62.7MB/s]


NameError                                 Traceback (most recent call last)
[<ipython-input-4-acd4d06d6493>](https://localhost:8080/#) in <module>()
      4 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
      5 
----> 6 model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", low_cpu_mem_usage=True)
      7 model.to(device)
      8 tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

[/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2063             init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
   2064         elif low_cpu_mem_usage:
-> 2065             init_contexts.append(init_empty_weights())
   2066 
   2067         with ContextManagers(init_contexts):

NameError: name 'init_empty_weights' is not defined
@NielsRogge
Owner

Cc'ing @sgugger; I also experienced this when loading the weights of GPT-NeoX. It might already be fixed on the main branch.

@sgugger

sgugger commented Jun 22, 2022

Looks like you don't have accelerate installed: ! pip install accelerate. Since you haven't given the version of Transformers you're using, I can't tell whether this is already fixed (in the sense that you should get an error message telling you to do this) or not.
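For anyone hitting this: a minimal sanity check (my own sketch, not part of transformers) for whether the running interpreter can actually see accelerate might look like this. If pip installed it but this still reports it missing, the notebook kernel probably needs a restart.

```python
import importlib.util

def has_accelerate() -> bool:
    """Return True if the `accelerate` package is importable in this environment."""
    return importlib.util.find_spec("accelerate") is not None

if not has_accelerate():
    # In a Colab notebook: run `!pip install accelerate`, then restart the runtime.
    print("accelerate is missing; install it with: pip install accelerate")
```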

@splevine
Author

Thank you! Installing accelerate and then restarting the notebook fixed it :)

@Rumidom

Rumidom commented Apr 20, 2023

I have accelerate installed but I'm getting the same error when running:
model = GPTNeoXForCausalLM.from_pretrained(Model, device_map="auto", load_in_8bit=True, cache_dir='models_hf',low_cpu_mem_usage=True)

Transformers version 4.25.1
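Since the load_in_8bit path depends on both bitsandbytes and accelerate being visible to the running kernel, a quick way to report what the kernel actually sees (a stdlib-only sketch; the helper name is my own) is:

```python
from importlib import metadata

def report_versions(packages=("transformers", "accelerate", "bitsandbytes")):
    """Map each package name to its installed version, or None if it is missing."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # not installed in this environment
    return versions

print(report_versions())
```

If accelerate or bitsandbytes shows up as None here even though pip reported a successful install, the kernel was likely started before the install and needs a restart.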

@OmarMohammed88

I am trying to load vicuna-7b-delta-v1.1 on Colab, and none of the above solutions worked; I got this error:

Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('8013'), PosixPath('//172.28.0.1')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-ertbm97wfzyq --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
  warn(msg)

I am using transformers version 4.28.1.

@Pashisfisuta

I have installed accelerate but still get the same error when running:
NameError: name 'init_empty_weights' is not defined
My packages are:
accelerate==0.19.0
transformers==4.29.2
