did "mii.pipeline" support float16? #390

Closed

wangrendong-yition opened this issue Jan 25, 2024 · 3 comments

@wangrendong-yition

I get an OOM when loading Llama2-7B on a 24 GB GPU, and I cannot find a config option for dtypes.
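For reference, the failing call is essentially the sketch below (a minimal repro from my setup; I could not find a dtype argument to pass here):

import mii

# Minimal repro sketch: load the model and generate. There is no obvious
# place in this call to request fp16 explicitly.
pipe = mii.pipeline("/data/Llama-2-7b-hf")
response = pipe(["DeepSpeed is"], max_new_tokens=512)
print(response)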

@wangrendong-yition (Author)

Using the legacy method I still get an OOM error, with mii_configs = {"tensor_parallel": 1, "dtype": "fp16"}.
It seems the HF model loads successfully, but DeepSpeed OOMs during replace_transformer_layer:

Traceback (most recent call last):
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/launch/multi_gpu_server.py", line 97, in <module>
    main()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/launch/multi_gpu_server.py", line 89, in main
    inference_pipeline = load_models(args.model_config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/models/load_models.py", line 72, in load_models
    engine = deepspeed.init_inference(getattr(inference_pipeline,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/__init__.py", line 336, in init_inference
    engine = InferenceEngine(model, config=ds_inference_config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 158, in __init__
    self._apply_injection_policy(config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 418, in _apply_injection_policy
    replace_transformer_layer(client_module, self.module, checkpoint, config, self.config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 354, in replace_transformer_layer
    replaced_module = replace_module(model=model,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 603, in replace_module
    replaced_module, _ = _replace_module(model, policy, state_dict=sd)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 663, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 663, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 639, in _replace_module
    replaced_module = policies[child.__class__][0](child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 310, in replace_fn
    new_module = replace_with_policy(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 250, in replace_with_policy
    _container.transpose()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/features/meta_tensor.py", line 48, in transpose
    super().transpose()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 286, in transpose
    self.transpose_mlp()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 295, in transpose_mlp
    self._h4h_w = self.transpose_impl(self.h4h_w.data)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 300, in transpose_impl
    data.reshape(-1).copy_(data.transpose(-1, -2).contiguous().reshape(-1))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB. GPU 0 has a total capacity of 23.69 GiB of which 116.62 MiB is free. Process 2657081 has 2.49 GiB memory in use. Including non-PyTorch memory, this process has 21.08 GiB memory in use. Of the allocated memory 20.56 GiB is allocated by PyTorch, and 231.30 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
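As the last lines of the message suggest, one thing to experiment with is capping the allocator split size before anything touches CUDA; a sketch (the 128 MiB value is an arbitrary starting point, not a tested recommendation):

import os

# Per the hint in the OOM message: must be set before torch initializes its
# CUDA allocator, i.e. before the first CUDA allocation happens.
# 128 MiB is an arbitrary value to experiment with.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import mii  # import only after the env var is in place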

@mrwyattii self-assigned this Jan 26, 2024

@mrwyattii (Contributor)

Hi @wangrendong-yition, your 24GB of memory should be plenty to run the Llama-2-7B model. Could you share the GPU type, the deepspeed/deepspeed-mii versions, and the script you are running? This will help me debug the OOM error you are seeing.

Thanks!
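For the versions, something like this should do (a sketch; mii exposing __version__ the same way deepspeed does is an assumption):

import deepspeed
import mii
import torch

# Print the version info relevant to this report.
print("torch:", torch.__version__)
print("deepspeed:", deepspeed.__version__)
print("deepspeed-mii:", mii.__version__)  # assumption: mii exposes __version__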

@wangrendong-yition (Author) commented Jan 29, 2024

> Hi @wangrendong-yition, your 24GB of memory should be plenty to run the Llama-2-7B model. Could you share the GPU type, the deepspeed/deepspeed-mii versions, and the script you are running? This will help me debug the OOM error you are seeing.
>
> Thanks!

I don't know what happened before, but today I gave it another try and the following works fine now:

import mii
pipe = mii.pipeline("/data/Llama-2-7b-hf")
response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=512)
print(response)

The legacy method still hits the OOM error on an RTX 3090 with the following code:

import mii
mii_configs = {"tensor_parallel": 1, "dtype": "fp16"}
mii.deploy(task="text-generation",
           model="NousResearch/Llama-2-7b-hf",
           model_path="/data/deepspeed_mii_models",
           deployment_name="llama2_deployment",
           mii_config=mii_configs)
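For anyone hitting the same thing, staging the HF weights in system memory first might give replace_transformer_layer enough GPU headroom; a sketch, assuming the legacy config still accepts a load_with_sys_mem flag:

import mii

# Sketch: keep the HF weights in CPU memory during loading so kernel
# injection has GPU headroom. "load_with_sys_mem" is a legacy MII config
# flag; whether this MII version honors it is an assumption.
mii_configs = {"tensor_parallel": 1, "dtype": "fp16", "load_with_sys_mem": True}
mii.deploy(task="text-generation",
           model="NousResearch/Llama-2-7b-hf",
           model_path="/data/deepspeed_mii_models",
           deployment_name="llama2_deployment",
           mii_config=mii_configs)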

That said, I think this OOM doesn't matter now.

Anyway, this issue can be closed. Thanks!
