
Feature Request: Add support for Eagle2_VL (Eagle2_5_VLForConditionalGeneration) multimodal models #16704

@prathameshza

Description


Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Feature Request

Please add support for Eagle2-VL vision-language models from NVIDIA.

Model example: https://huggingface.co/nvidia/Eagle2-1B

Conversion Log

Attempting the mmproj conversion fails for every output type (f32, f16, bf16, q8_0):

```
Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 --outfile /home/mahadeva/code/models/Eagle2-1B/Eagle2-1B-f32.mmproj --model-name Eagle2-1B --mmproj --outtype f32
mmproj conversion failed for f32: INFO:hf-to-gguf:Loading model: 508bc72fb1a946db3d5c1ebca50165079afef782
WARNING:hf-to-gguf:Failed to load model config from /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782: The repository /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 contains custom code which must be executed to correctly load the model. You can inspect the repository content at /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 .
You can inspect the repository content at https://hf.co//home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782.
Please pass the argument trust_remote_code=True to allow custom code to be run.
WARNING:hf-to-gguf:Trying to load config.json instead
INFO:hf-to-gguf:Model architecture: Eagle2_5_VLForConditionalGeneration
ERROR:hf-to-gguf:Model Eagle2_5_VLForConditionalGeneration is not supported
```

The f16, bf16, and q8_0 attempts differ only in the `--outtype` flag and output filename, and fail with the identical `Model Eagle2_5_VLForConditionalGeneration is not supported` error.
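The failure happens at the architecture-dispatch step: the converter falls back to reading the `architectures` field from the model's `config.json` and looks the name up in its registry of supported model classes, and `Eagle2_5_VLForConditionalGeneration` has no entry. A minimal self-contained sketch of that kind of lookup (the registry contents here are illustrative, not llama.cpp's actual tables):

```python
import json

# Illustrative registry: maps HF architecture names to converter handlers.
# Real entries live in convert_hf_to_gguf.py; these two are just examples.
SUPPORTED_ARCHITECTURES = {
    "Qwen2VLForConditionalGeneration": "qwen2vl",
    "LlavaForConditionalGeneration": "llava",
}

def resolve_architecture(config_text: str) -> str:
    """Return the handler for the first architecture listed in config.json,
    or raise if it is not registered (mirroring the 'is not supported' error)."""
    config = json.loads(config_text)
    arch = config["architectures"][0]
    if arch not in SUPPORTED_ARCHITECTURES:
        raise ValueError(f"Model {arch} is not supported")
    return SUPPORTED_ARCHITECTURES[arch]

# Eagle2's config declares an unregistered architecture, so resolution fails:
eagle2_config = '{"architectures": ["Eagle2_5_VLForConditionalGeneration"]}'
try:
    resolve_architecture(eagle2_config)
except ValueError as e:
    print(e)  # Model Eagle2_5_VLForConditionalGeneration is not supported
```

So the `trust_remote_code` warnings are incidental; the hard stop is simply the missing registry entry for this architecture name.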

Discussion
https://huggingface.co/Mungert/Eagle2-1B-GGUF/discussions/1

Motivation

The Eagle2 series combines a visual encoder and a language model, similar to LLaVA and Qwen2-VL.
It would be useful for users who want to run the model locally via llama.cpp (GGUF) for multimodal inference.

Possible Implementation

No response
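For what it's worth, supporting a new model in `convert_hf_to_gguf.py` generally amounts to registering a handler class for the architecture name and mapping its hyperparameters and tensors to GGUF. The sketch below imitates that registration pattern with a stand-in decorator and registry; the names are hypothetical, not llama.cpp's actual API, and the real work (tensor renaming, mmproj metadata) is only described in the docstring:

```python
# Stand-in registration machinery, illustrating the shape of the change only.
_MODEL_REGISTRY: dict[str, type] = {}

def register(*arch_names: str):
    """Associate one or more HF architecture names with a converter class."""
    def wrap(cls):
        for name in arch_names:
            _MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register("Eagle2_5_VLForConditionalGeneration")
class Eagle2VLModel:
    """Hypothetical handler: it would map Eagle2's vision-encoder and language-model
    tensors to GGUF names and emit the mmproj metadata, analogous to how
    existing vision-language architectures such as Qwen2-VL are handled."""
    def __init__(self, arch: str):
        self.arch = arch

def get_model(arch: str):
    if arch not in _MODEL_REGISTRY:
        raise ValueError(f"Model {arch} is not supported")
    return _MODEL_REGISTRY[arch](arch)

model = get_model("Eagle2_5_VLForConditionalGeneration")
print(type(model).__name__)  # Eagle2VLModel
```

With such an entry in place, the dispatch step that currently prints `Model Eagle2_5_VLForConditionalGeneration is not supported` would instead hand the model to the new class.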

Metadata


Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
