
unsupported dtype 'F64' · Issue #153 · leejet/stable-diffusion.cpp

Closed

@red-scorp

Description

Hi! Thanks for the great tool!

When I run sd.cpp with some unpruned SD models, it fails with this error:

[ERROR] model.cpp:773  - unsupported dtype 'F64'

This seems like a simple thing to fix, either by supporting double-precision floats internally or by converting to single precision during load.
I'd like to hear your thoughts on this situation and how to solve it.

With best regards, AG

Activity

Cyberhan123 (Contributor) commented on Jan 27, 2024

Can you manually set the type to f32?

FrankEscobar commented on Feb 1, 2024

I have the same issue with a model called revAnimated_v122, but when I try to convert it to F32 or F16 the output is corrupt and only 809 KB.

Did you manage to fix it?

askmyteapot commented on Feb 28, 2024

Having this issue too with other models in the SD1.5 family.

Could this be looked at?

grauho (Contributor) commented on Mar 18, 2024

I suspect this is because GGML currently only supports integer and float types up to 32 bits wide in its ggml_type. It might be possible, if one is willing to accept a loss of precision, to convert down to a 32-bit float (provided the value does not exceed FLT_MAX) via a callback in the sd.cpp load_tensors function.

grauho (Contributor) commented on Mar 20, 2024

Could someone please link a model where they are having the F64 problem? I think I've put together a fix that at least seems to work with the LoRAs I have that use I64.

red-scorp (Author) commented on Mar 20, 2024
grauho (Contributor) commented on Mar 22, 2024

I've written a small converter program in C that re-encodes entire safetensors files, and it does seem to do the job; once I add handling for big-endian systems I'll publish it. I'm having trouble getting similar logic to work a la carte at tensor load time in sd.cpp, though.

grauho (Contributor) commented on Mar 22, 2024

Alright, here it is; feel free to give it a try. I was able to use it to convert models containing both I64 and F64 tensors to something sd.cpp could work with:

https://github.com/grauho/sdc

SA-j00u commented on Apr 30, 2024

https://github.com/grauho/sdc

can you add converting to fp16 too?

grauho (Contributor) commented on May 4, 2024

> https://github.com/grauho/sdc
>
> can you add converting to fp16 too?

It now has handling to convert down to F16 as well as BF16 using the -f, --float-out switch, with the caveat that if your system doesn't use IEEE-format floats and doubles the conversion may not be correct. Don't use the --replace option unless you're willing to risk losing data.


