
Problem with quantize model #85

Closed
prd-tuong-nguyen opened this issue Nov 30, 2023 · 5 comments

Labels
question Further information is requested

Comments

@prd-tuong-nguyen

System Info

Can you tell me a bit about how to serve a model in 4-bit quantized mode?
I added --quantize bitsandbytes-nf4 when running the Docker container, but nothing changed: GPU memory usage stays the same.

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

  1. docker run --gpus all --shm-size 1g -p 8080:80 -v ./ckpts:/data ghcr.io/predibase/lorax:latest --model-id /data/OpenHermes-2-7B-base-2.3 --quantize bitsandbytes-nf4

Expected behavior

Reduced GPU memory usage.

@flozi00
Collaborator

flozi00 commented Nov 30, 2023

Do you measure before or after warmup?

During startup the KV cache gets reserved. With a quantized model there is more memory available for the cache, but the total used memory stays the same.
You could limit the available memory with an argument.
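
For reference, a minimal sketch of what that argument could look like on the reproduction command above. This assumes the lorax launcher exposes the cuda_memory_fraction setting (the parameter the author confirms later in this thread) as a --cuda-memory-fraction flag, as in text-generation-inference; the 0.5 value is only illustrative:

docker run --gpus all --shm-size 1g -p 8080:80 -v ./ckpts:/data \
  ghcr.io/predibase/lorax:latest \
  --model-id /data/OpenHermes-2-7B-base-2.3 \
  --quantize bitsandbytes-nf4 \
  --cuda-memory-fraction 0.5   # cap this process at ~50% of GPU memory (illustrative value)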

@tgaddair
Contributor

tgaddair commented Nov 30, 2023

Hey @prd-tuong-nguyen, as @flozi00 said, this is likely due to the warmup phase, where we allocate additional memory in advance for batching to avoid having to allocate it on the fly during inference.

For example, here's the memory usage reported by nvidia-smi when running with nf4 quantization using mistral-7b before warmup:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:11:00.0 Off |                    0 |
| N/A   27C    P0              54W / 250W |   5011MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

And here are the results after warmup:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:11:00.0 Off |                    0 |
| N/A   27C    P0              54W / 250W |  38911MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

As you can see, lorax will use as much memory as it can get away with in order to maximize the batch size.

@tgaddair tgaddair added the question Further information is requested label Nov 30, 2023
@prd-tuong-nguyen
Author

@flozi00, @tgaddair thanks for the fast reply! I see that the memory usage before warmup is lower than after it.
Can I reduce the additional memory reserved for batching? That way I could serve multiple instances on the same GPU.

@prd-tuong-nguyen
Author

Resolved by setting cuda_memory_fraction, thank you <3
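
For anyone with the same goal (multiple instances sharing one GPU), a hedged sketch of what the resolution might look like, again assuming the launcher accepts cuda_memory_fraction as a --cuda-memory-fraction flag; the 0.45 fractions and host ports are illustrative:

# first instance on host port 8080, capped at roughly 45% of GPU memory
docker run --gpus all --shm-size 1g -p 8080:80 -v ./ckpts:/data \
  ghcr.io/predibase/lorax:latest \
  --model-id /data/OpenHermes-2-7B-base-2.3 \
  --quantize bitsandbytes-nf4 --cuda-memory-fraction 0.45

# second instance on host port 8081 with the same cap, leaving room for the first
docker run --gpus all --shm-size 1g -p 8081:80 -v ./ckpts:/data \
  ghcr.io/predibase/lorax:latest \
  --model-id /data/OpenHermes-2-7B-base-2.3 \
  --quantize bitsandbytes-nf4 --cuda-memory-fraction 0.45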

@tgaddair
Contributor

tgaddair commented Dec 1, 2023

Glad you got it working! We'll be adding some better docs soon to make these parameters easier to find. Closing this issue for now.

@tgaddair tgaddair closed this as completed Dec 1, 2023