Segmentation fault in Llama2 #253

Closed
marcelooliveira opened this issue Feb 23, 2024 · 4 comments

@marcelooliveira

Hi, I've uploaded the llama2 model image to Azure, but I'm facing a segmentation fault error in Python that is preventing my container from starting.

Any suggestions?

Output

> kubectl logs workspace-llama-2-7b-0
Fatal Python error: Segmentation fault

Current thread 0x00007fa2975e0b80 (most recent call first):
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 113 in _call_store
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 64 in __init__
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 253 in create_backend
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/registry.py", line 36 in _create_c10d_handler
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/api.py", line 258 in create_handler
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/rendezvous/registry.py", line 66 in get_rendezvous_handler
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 238 in launch_agent        
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 135 in __call__
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/run.py", line 803 in run
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/run.py", line 812 in main
  File "/usr/local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347 in wrapper
  File "/usr/local/bin/torchrun", line 8 in 

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special (total: 20)
Segmentation fault (core dumped)
@ishaansehgal99
Collaborator

Hey @marcelooliveira, are you using the release branch (https://github.com/Azure/kaito/tree/v0.1.0)?
If not, can you try building from that branch?
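
For reference, a minimal sketch of switching to that release tag before rebuilding; the build and deploy steps themselves are whatever workflow you already use for main:

    git clone https://github.com/Azure/kaito.git
    cd kaito
    git checkout v0.1.0   # the release branch/tag linked above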

@marcelooliveira
Author

Hi Ishaan, I'm using the main branch. I'll try the v0.1.0 branch now. Thanks!

@marcelooliveira
Author

Hi Ishaan, I've tried the v0.1.0 branch, but the "Segmentation Fault" error persists. Could it be due to memory/CPU/GPU limitations in my AKS cluster? Thanks!
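
Not a definitive diagnosis, but a quick way to rule out resource limits on the AKS side (the node name below is a placeholder; the pod name is the one from the logs above):

    kubectl describe pod workspace-llama-2-7b-0    # look for OOMKilled, FailedScheduling, or other warning events
    kubectl describe node <your-gpu-node>          # check Allocatable/Allocated memory, CPU, and nvidia.com/gpu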

@marcelooliveira
Author

I found the cause. I pushed a corrupted model to my ACR.
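
For anyone hitting the same symptom, one way to sanity-check the model image in ACR before the workspace pulls it is sketched below (registry and repository names are placeholders):

    az acr login --name <myregistry>                                        # authenticate against the registry
    az acr repository show --name <myregistry> --image llama-2-7b:latest    # confirm the tag exists and note its digest
    docker pull <myregistry>.azurecr.io/llama-2-7b:latest                   # pull locally so the model files can be inspected/re-validated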
