
Misc. bug: ERROR:hf-to-gguf:Model DotsOCRForCausalLM is not supported #15979

@adaaaaaa

Description


Name and Version

python convert_hf_to_gguf.py \
/data/model/dots.ocr \
--outfile /data/model/dots.ocr-q8_0.gguf \
--outtype q8_0
INFO:hf-to-gguf:Loading model: dots.ocr
WARNING:hf-to-gguf:Failed to load model config from /data/model/dots.ocr: The repository /data/model/dots.ocr contains custom code which must be executed to correctly load the model. You can inspect the repository content at /data/model/dots.ocr .
You can inspect the repository content at https://hf.co//data/model/dots.ocr.
Please pass the argument trust_remote_code=True to allow custom code to be run.
WARNING:hf-to-gguf:Trying to load config.json instead
INFO:hf-to-gguf:Model architecture: DotsOCRForCausalLM
ERROR:hf-to-gguf:Model DotsOCRForCausalLM is not supported
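The error above comes from the converter looking up the `architectures` field of `config.json` in its table of supported model classes and finding no entry. A minimal sketch of that kind of architecture registry (illustrative names only, not llama.cpp's actual identifiers):

```python
# Minimal sketch of an architecture-to-converter registry, similar in spirit
# to how convert_hf_to_gguf.py dispatches on the HF architecture name.
# All names here are hypothetical, for illustration only.
_model_classes: dict[str, type] = {}

def register(*arch_names: str):
    """Class decorator: map one or more HF architecture names to a converter."""
    def wrapper(cls: type) -> type:
        for name in arch_names:
            _model_classes[name] = cls
        return cls
    return wrapper

@register("Qwen2ForCausalLM")
class Qwen2Converter:
    """Placeholder converter for a supported architecture."""
    pass

def get_converter(arch: str) -> type:
    try:
        return _model_classes[arch]
    except KeyError:
        # An architecture with no registered converter produces the
        # "Model ... is not supported" failure seen in the log.
        raise ValueError(f"Model {arch} is not supported") from None
```

Under this scheme, supporting DotsOCRForCausalLM would require adding a converter class registered under that architecture name; no command-line flag can work around the missing entry.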

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

No response

Command line

Problem description & steps to reproduce

python convert_hf_to_gguf.py \
/data/model/dots.ocr \
--outfile /data/model/dots.ocr-q8_0.gguf \
--outtype q8_0

First Bad Commit

No response

Relevant log output
