
Infer llama2 vocab_size from tokenizer model when params.json provides vocab_size=-1 #2805

Open
l3utterfly opened this issue Apr 2, 2024 · 7 comments
Assignees
Labels
bug (Something isn't working) · module: examples (Issues related to demos under examples directory) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@l3utterfly

🐛 Describe the bug

Following the instructions here: https://github.com/pytorch/executorch/tree/main/examples/models/llama2

I ran this command after downloading Llama2 weights: python3 -m examples.models.llama2.export_llama --checkpoint /path/to/Llama-2-7b/consolidated.00.pth --params /path/to/Llama-2-7b/params.json

I get this error: RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]

Stacktrace:

INFO:datasets:PyTorch version 2.4.0.dev20240324+cpu available.
Could not import fairseq2 modules.
INFO:root:Loading model with checkpoint=/home/layla/src/text-generation-webui/models/Llama-2-7b/consolidated.00.pth, params=/home/layla/src/text-generation-webui/models/Llama-2-7b/params.json, use_kv_cache=False, weight_type=WeightType.LLAMA
Traceback (most recent call last):
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama.py", line 30, in <module>
    main()  # pragma: no cover
  File "/home/layla/src/executorch/examples/models/llama2/export_llama.py", line 26, in main
    export_llama(modelname, args)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 504, in export_llama
    return _export_llama(modelname, args)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 625, in _export_llama
    builder_exported_to_edge = _prepare_for_llama_export(
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 582, in _prepare_for_llama_export
    load_llama_model(
  File "/home/layla/src/executorch/examples/models/llama2/builder.py", line 83, in load_llama_model
    model, example_inputs, _ = EagerModelFactory.create_model(
  File "/home/layla/src/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/home/layla/src/executorch/examples/models/llama2/model.py", line 139, in __init__
    self.model_ = Transformer(model_args)
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/executorch/examples/models/llama2/llama_transformer.py", line 418, in __init__
    self.tok_embeddings = nn.Embedding(params.vocab_size, params.dim)
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 143, in __init__
    self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/utils/_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]
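
For reference, the failure can be reproduced in isolation. A minimal sketch (not part of the original report) showing why a negative vocab_size blows up inside nn.Embedding:

```python
# Minimal repro sketch: nn.Embedding allocates a (num_embeddings, embedding_dim)
# weight via torch.empty, which rejects negative dimensions. A vocab_size of -1
# coming from params.json therefore fails exactly as in the stack trace above.
import torch.nn as nn

nn.Embedding(-1, 4096)
# RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]
```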

Versions

CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5955WX 16-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7031.2500
CPU min MHz: 1800.0000
BogoMIPS: 8000.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] executorch==0.1.0
[pip3] numpy==1.26.4
[pip3] torch==2.4.0.dev20240324+cpu
[pip3] torchao-nightly==2024.3.29
[pip3] torchaudio==2.2.0.dev20240324+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.19.0.dev20240324+cpu
[conda] executorch 0.1.0 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.0.dev20240324+cpu pypi_0 pypi
[conda] torchao-nightly 2024.3.29 pypi_0 pypi
[conda] torchaudio 2.2.0.dev20240324+cpu pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.19.0.dev20240324+cpu pypi_0 pypi

@dbort
Contributor

dbort commented Apr 2, 2024

Thank you for reporting this issue @l3utterfly, and for all of the environment details!

Things are changing in this area pretty rapidly. Which specific git commit were you using when you saw this problem?

cc: @JacobSzwejbka @mikekgfb

@dbort changed the title from “Unable to generate Llama2 pte by following the instructions” to “"Trying to create tensor with negative dimension -1: [-1, 4096]" when generating Llama2 pte” Apr 2, 2024
@l3utterfly
Author

This is the commit hash I have in my environment: 57e3449

@dbort
Contributor

dbort commented Apr 2, 2024

Thanks for the hash. What are the contents of your params.json file? I asked around, and one theory is that the vocab_size entry might be missing or might be -1. For Llama 2 7B, vocab_size should be 32000.
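
A quick way to check the file (a minimal sketch; the path is a placeholder for wherever the weights were downloaded):

```python
# Print the vocab_size entry from params.json; expect 32000 for Llama 2 7B.
import json

with open("/path/to/Llama-2-7b/params.json") as f:
    print(json.load(f).get("vocab_size"))
```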

@l3utterfly
Author

Yes, vocab_size is -1.

But this is from the official Llama2 repository on Hugging Face: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/params.json

{"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-05, "vocab_size": -1}

Maybe we should update the documentation to add a line about needing to edit this manually?
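
For reference, a minimal sketch of that manual edit (the path is a placeholder, and 32000 is the Llama 2 7B vocab size mentioned above):

```python
# Workaround sketch: rewrite params.json so vocab_size matches the tokenizer.
import json

path = "/path/to/Llama-2-7b/params.json"  # placeholder path
with open(path) as f:
    params = json.load(f)

if params.get("vocab_size", -1) == -1:
    params["vocab_size"] = 32000  # Llama 2 7B tokenizer vocab size
    with open(path, "w") as f:
        json.dump(params, f, indent=2)
```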

@mikekgfb
Contributor

mikekgfb commented Apr 3, 2024 via email

@dbort changed the title from “"Trying to create tensor with negative dimension -1: [-1, 4096]" when generating Llama2 pte” to “Infer llama2 vocab_size from tokenizer model when params.json provides vocab_size=-1” Apr 3, 2024
@dbort
Contributor

dbort commented Apr 3, 2024

@mikekgfb sounds like there are two steps here:

  • Near term: Update the llama2 export docs to mention this problem and the workaround
  • Longer term: If vocab_size is -1, infer the real size from the accompanying tokenizer model (see the sketch after this list)
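
A sketch of what that longer-term fix could look like, assuming the checkpoint ships with a SentencePiece tokenizer.model; the function name and signature here are illustrative, not the actual ExecuTorch code:

```python
# Sketch: fall back to the SentencePiece model when params.json has vocab_size=-1.
import sentencepiece as spm

def resolve_vocab_size(params: dict, tokenizer_path: str) -> int:
    # Trust an explicit, positive vocab_size from params.json.
    if params.get("vocab_size", -1) > 0:
        return params["vocab_size"]
    # Otherwise infer it from the tokenizer that accompanies the checkpoint.
    sp = spm.SentencePieceProcessor(model_file=tokenizer_path)
    return sp.vocab_size()  # 32000 for the Llama 2 tokenizer
```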

@dbort added the bug, module: examples, and triaged labels Apr 3, 2024
mergennachin added a commit to mergennachin/executorch-1 that referenced this issue Apr 8, 2024
Summary: Fixing issues we've seen in pytorch#2907 and pytorch#2805

Differential Revision: D55893925
@mergennachin
Contributor

#2926

mergennachin added a commit to mergennachin/executorch-1 that referenced this issue Apr 8, 2024 (same summary and Differential Revision as above)
facebook-github-bot pushed a commit that referenced this issue Apr 8, 2024
Summary:
Pull Request resolved: #2926

Fixing issues we've seen in #2907 and #2805

bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks

Reviewed By: iseeyuan, cccclai

Differential Revision: D55893925

fbshipit-source-id: c6e0264d868cb487faf02f95ff1bd223cbcc97ac
pytorchbot pushed a commit that referenced this issue Apr 9, 2024 (same summary as above; cherry picked from commit 6db9d72)
mergennachin added a commit that referenced this issue Apr 9, 2024 (same summary as above; cherry picked from commit 6db9d72)