Any particular reason int8 quantization is only supported on xl models? #3899

@qingswu

Description

Judging from the check in the code, Stable Diffusion INT8 quantization is only supported for SDXL models. Does it also work with SD 1.4 or 1.5?

if args.int8 and not args.version.startswith('xl'):
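For context, here is a minimal sketch of how a gate like the quoted line behaves. Only the `if` condition comes from the issue; the surrounding argument parser and the error message are assumptions, not the repository's actual code.

```python
import argparse

def parse_args(argv):
    # Hypothetical parser setup; the real script defines many more flags.
    parser = argparse.ArgumentParser()
    parser.add_argument('--version', default='xl-1.0')
    parser.add_argument('--int8', action='store_true')
    args = parser.parse_args(argv)
    # The check quoted above: INT8 is rejected unless the model version
    # string marks an SDXL variant (i.e. starts with 'xl').
    if args.int8 and not args.version.startswith('xl'):
        raise ValueError(
            f"int8 quantization is only supported for SDXL, got version {args.version}"
        )
    return args
```

With this gate, `parse_args(['--version', 'xl-1.0', '--int8'])` succeeds, while `parse_args(['--version', '1.5', '--int8'])` raises an error, which matches the behavior the question describes.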

Metadata

Labels: triaged (Issue has been triaged by maintainers)
