
[Bug]: Qwen2-VL-2B GPTQ-quantized model shows no improvement in inference speed over the original model #15601

@Eduiskss

Description

🐛 Describe the bug

I applied GPTQ int8 quantization to a fine-tuned Qwen2-VL-2B model, but the quantized model shows no improvement in inference speed compared to the original (unquantized) model.
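For reference, a minimal sketch of how such a comparison might be reproduced with vLLM's offline `LLM` API (this is not the reporter's actual script; the model path and prompts are placeholders, and it assumes vLLM infers the GPTQ configuration from the checkpoint):

```python
import sys
import time

from vllm import LLM, SamplingParams

# Usage: python bench.py <model_path>
# Run once for the original checkpoint and once for the GPTQ-int8 one,
# in separate processes so each model gets the full GPU.
model_path = sys.argv[1]

prompts = ["Describe the weather today."] * 32  # fixed text-only batch
params = SamplingParams(temperature=0.0, max_tokens=128)

llm = LLM(model=model_path)  # quantization is picked up from the checkpoint config
start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{model_path}: {generated / elapsed:.1f} generated tok/s")
```

Comparing the tokens-per-second figures from the two runs would show whether the quantized checkpoint is actually faster under the same batch and sampling settings.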

Before submitting a new issue...

  • Make sure you have already searched for relevant issues and asked the chatbot at the bottom right corner of the documentation page, which can answer many frequently asked questions.

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working), stale (Over 90 days of inactivity)

    Development

    No branches or pull requests
