
Regression on EETQ quantized models #1779

Closed

claudioMontanari opened this issue Apr 20, 2024 · 1 comment
claudioMontanari commented Apr 20, 2024

System Info

I have reason to believe that #1729 is causing a 2-3x performance regression in the decoding stage when running EETQ quantized models on multiple shards with CUDA graphs enabled. Supporting experiments are below.

Note: I understand the TGI built-in benchmarker is the preferred way to report such results; I can follow up with that if needed.

Hardware used:

NVIDIA-SMI 535.129.03  
Driver Version: 535.129.03 
CUDA Version: 12.2
[NVIDIA A100-SXM4-40GB | 400W |  40960MiB] x 8

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

Experiment 1

TGI image: sha-c2fd35d (from #1716, before the "Upgrade EETQ" change in #1729)
Args:
--model-id mistralai/Mixtral-8x7B-Instruct-v0.1 --quantize eetq --sharded true --num-shard 2 --disable-grammar-support
Hardware: 2xA100 @40GB memory
50th percentile of per-token decode latency: ~8ms
Load: one request at a time to /generate with inputs of 128, 256, or 512 tokens and a maximum of 32 output tokens.

Experiment 2

TGI image: sha-6c2c44b (Upgrade EETQ #1729)
Args:
--model-id mistralai/Mixtral-8x7B-Instruct-v0.1 --quantize eetq --sharded true --num-shard 2 --disable-grammar-support
Hardware: 2xA100 @40GB memory
50th percentile of per-token decode latency: ~25ms
Load: one request at a time to /generate with inputs of 128, 256, or 512 tokens and a maximum of 32 output tokens.

Experiment 3

TGI image: 2.0.0
Args:
--model-id mistralai/Mixtral-8x7B-Instruct-v0.1 --sharded true --num-shard 4 --disable-grammar-support
Hardware: 4xA100 @40GB memory
50th percentile of per-token decode latency: ~10ms
Load: one request at a time to /generate with inputs of 128, 256, or 512 tokens and a maximum of 32 output tokens (a sketch of this load generator follows the experiments).
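
For completeness, here is a minimal sketch (not the exact script used) of how the load in all three experiments can be generated and how per-token decode latency can be estimated. It assumes a TGI instance listening on localhost:8080, uses the /generate endpoint with details enabled, and approximates per-token decode latency as total request time divided by the number of generated tokens; the endpoint URL, prompt construction, and request count are placeholders.

```python
# Minimal sketch of the load generator for the experiments above.
# Assumptions: TGI on localhost:8080; per-token decode latency is
# approximated as (total request time) / (generated tokens); prompts
# are synthetic and only roughly match the 128/256/512-token inputs.
import time
import statistics
import requests

TGI_URL = "http://localhost:8080/generate"  # placeholder endpoint

def run_load(prompt: str, n_requests: int = 50, max_new_tokens: int = 32) -> float:
    per_token_latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        resp = requests.post(
            TGI_URL,
            json={
                "inputs": prompt,
                "parameters": {"max_new_tokens": max_new_tokens, "details": True},
            },
            timeout=120,
        )
        elapsed = time.perf_counter() - start
        resp.raise_for_status()
        details = resp.json().get("details", {})
        generated = details.get("generated_tokens") or max_new_tokens
        per_token_latencies.append(elapsed / generated)
    # 50th percentile (median) of per-token latency across requests
    return statistics.median(per_token_latencies)

if __name__ == "__main__":
    for approx_input_tokens in (128, 256, 512):
        prompt = "hello " * approx_input_tokens  # crude stand-in for a real prompt
        p50 = run_load(prompt)
        print(f"~{approx_input_tokens} input tokens: "
              f"p50 per-token decode latency {p50 * 1e3:.1f} ms")
```

Note that dividing total request time by generated tokens folds prefill time into the estimate; with only 32 output tokens this overstates decode latency slightly, but the relative comparison between the experiments is unaffected.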

Expected behavior

Exp. 2 shows a ~3x regression in per-token decode latency (~25ms vs. ~8ms) with respect to Exp. 1, which has the same configuration but uses a TGI image from before the EETQ upgrade. Exp. 3 shows that when the model is not quantized, per-token decode latency is ~2.5x better (~10ms vs. ~25ms).

Performance should be consistent when sharding an EETQ quantized model.

claudioMontanari changed the title from "Regression on EETQ models" to "Regression on EETQ quantized models" on Apr 21, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the Stale label on May 22, 2024
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on May 27, 2024