Git commit
$ git rev-parse HEAD
8afbeb6
Operating System & Version
Fedora 23
GGML backends
CPU, HIP
Command-line arguments used
sd -m crash-030.gguf --mode txt2img
Steps to reproduce
- Create a minimal 32-byte GGUF file with an oversized metadata key length:
import struct
buf = b'GGUF' # magic
buf += struct.pack('<I', 3) # version
buf += struct.pack('<Q', 0) # tensor_count
buf += struct.pack('<Q', 1) # metadata_kv_count = 1
buf += struct.pack('<Q', 0xFFFFFFFFFFFF) # key_len = huge value
with open('crash-030.gguf', 'wb') as f:
    f.write(buf)
- Run:
sd -m crash-030.gguf --mode txt2img
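The crafted file can be sanity-checked by parsing the 32 bytes back out. A minimal sketch (struct and function names here are illustrative, not from the codebase; layout follows the GGUF header used above: 4-byte magic, uint32 version, uint64 tensor_count, uint64 metadata_kv_count, then the first key's uint64 length, all little-endian):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative parser for the 32-byte PoC header.
struct PocHeader {
    char     magic[4];
    uint32_t version;
    uint64_t tensor_count;
    uint64_t metadata_kv_count;
    uint64_t key_len;
};

static PocHeader parse_poc(const unsigned char* buf) {
    PocHeader h{};
    std::memcpy(h.magic,               buf,      4);
    std::memcpy(&h.version,            buf + 4,  4);
    std::memcpy(&h.tensor_count,       buf + 8,  8);
    std::memcpy(&h.metadata_kv_count,  buf + 16, 8);
    std::memcpy(&h.key_len,            buf + 24, 8);  // assumes little-endian host
    return h;
}
```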
What you expected to happen
The process should reject the file with an error when the metadata key length exceeds a reasonable bound or the remaining file size.
What actually happened
The process crashes via operator new → abort() when attempting to allocate a std::string of attacker-controlled size.
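The mechanism can be illustrated at a harmless scale: the eight length bytes read from the file become the std::string allocation size verbatim. This sketch (not the project's code) uses 0x1000 in place of the PoC's 0xFFFFFFFFFFFF, which would request roughly 256 TiB from the same constructor call:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Illustrative only: the length bytes from the file flow unchecked into the
// std::string constructor, so the attacker picks the allocation size.
static std::string alloc_from_wire(const unsigned char* wire) {
    uint64_t len = 0;
    std::memcpy(&len, wire, sizeof(len));  // attacker-controlled, unchecked
    return std::string(len, '\0');         // allocation size == file bytes
}
```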
Logs / error messages / stack trace
==PID==ERROR: AddressSanitizer: allocation-size-too-big
#0 operator new(unsigned long)
#1 std::__cxx11::basic_string<char>::basic_string(unsigned long, char)
#2 GGUFReader::read_metadata() gguf_reader.hpp:62
Aborted (core dumped)
Without ASan, the process crashes with std::bad_alloc → std::terminate() → abort().
Additional context / environment details
Root cause — src/gguf_reader.hpp:62:
uint64_t key_len = 0;
if (!safe_read(fin, key_len))
return false;
std::string key(key_len, '\0'); // ← unbounded allocation from file input
Suggested fix — add bounds check before allocation:
uint64_t key_len = 0;
if (!safe_read(fin, key_len))
return false;
+
+if (key_len > 1024 * 1024) { // 1 MiB max key length
+ LOG_ERROR("GGUF metadata key length too large: %llu", (unsigned long long)key_len);
+ return false;
+}
+
std::string key(key_len, '\0');
Similar bounds checks should be added to all length fields read from GGUF files in gguf_reader.hpp (tensor name lengths, string value lengths, array counts).
- x86_64, clang 19
- Reproduces on both sanitizer and release builds
- 32-byte PoC requires no valid model structure beyond the GGUF magic and a manipulated length field
- Found via fuzz testing with crafted GGUF files