Older latent diffusion checkpoints no longer work with the current scripts, for example models/ldm/text2img256/config.yaml and its corresponding checkpoint.
With the text2img256 checkpoint, the command line was: python3 scripts/txt2img.py --ckpt="model.ckpt" --config="models/ldm/text2img256/config.yaml"
This fails with a channel mismatch:
File "/home/simon/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [192, 3, 3, 3], expected input[6, 4, 64, 64] to have 3 channels, but got 4 channels instead
It doesn't look like an extra batch dimension: the input shape is [6, 4, 64, 64], i.e. (batch=6, channels=4, height=64, width=64), so the batch dimension is intact and the mismatch is in the channel dimension. The script apparently samples 4-channel latents (the Stable Diffusion default, its --C option), while this older model's first conv layer (weight of size [192, 3, 3, 3]) expects 3-channel input.
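The mismatch can be reproduced in isolation with plain PyTorch, outside the sampling script. This is only a sketch of the failing shapes from the traceback, not the actual model code: a conv weight of size [192, 3, 3, 3] (192 output channels, 3 expected input channels) applied to a 4-channel latent batch.

```python
import torch
import torch.nn.functional as F

# Conv weight with the shape from the error: (out_channels=192, in_channels=3, 3, 3)
weight = torch.randn(192, 3, 3, 3)

# Input with the shape from the error: (batch=6, channels=4, height=64, width=64)
x = torch.randn(6, 4, 64, 64)

try:
    F.conv2d(x, weight, padding=1)
except RuntimeError as e:
    print(e)  # "... expected input[6, 4, 64, 64] to have 3 channels, but got 4 channels instead"

# With a 3-channel input the same convolution succeeds:
x3 = torch.randn(6, 3, 64, 64)
out = F.conv2d(x3, weight, padding=1)
print(out.shape)  # torch.Size([6, 192, 64, 64])
```

So the batch dimension passes through untouched; conv2d only constrains dimension 1 (channels) against the weight's in_channels.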