### Prerequisites

- [x] I am running the latest code. Mention the version if possible as well.
- [x] I carefully followed the [README.md](https://github.com/ggml-org/llama.cpp/blob/master/README.md).
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the [Discussions](https://github.com/ggml-org/llama.cpp/discussions), and have a new and useful enhancement to share.

### Feature Description

Convert DeepSeek-V3's MTP (multi-token prediction) module to GGUF and quantize it to Q4_K_M.

### Motivation

I want to extract the GGUF weights of DeepSeek-V3's MTP module (layer 62) and quantize them to Q4_K_M.

### Possible Implementation

_No response_
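
One possible starting point, as a rough sketch only: assuming the MTP module is stored in the HF safetensors checkpoint under a single tensor-name prefix (taken here as `model.layers.61.`, i.e. the 62nd layer), the `gguf-py` package shipped with llama.cpp could be used to copy just those tensors into a standalone F16 GGUF file. The checkpoint directory, tensor-name prefix, output file names, and the `deepseek2` arch string below are all assumptions for illustration, not verified against the actual model files or the converter's expected metadata.

```python
# Sketch only: dump the tensors of DeepSeek-V3's assumed MTP module
# (prefix "model.layers.61." in the HF safetensors checkpoint) into a
# standalone F16 GGUF file using gguf-py.
from pathlib import Path

import numpy as np
from safetensors.numpy import load_file  # pip install safetensors
import gguf                              # gguf-py, shipped in llama.cpp/gguf-py

CKPT_DIR = Path("DeepSeek-V3")          # hypothetical local checkpoint directory
MTP_PREFIX = "model.layers.61."         # assumed prefix of the MTP module tensors
OUT_PATH = "deepseek-v3-mtp-f16.gguf"

writer = gguf.GGUFWriter(OUT_PATH, arch="deepseek2")  # arch name is an assumption

for shard in sorted(CKPT_DIR.glob("*.safetensors")):
    tensors = load_file(shard)  # assumes the shard's dtypes are numpy-loadable
    for name, data in tensors.items():
        if not name.startswith(MTP_PREFIX):
            continue
        # keep weights in F16 here; quantization is a separate, later step
        writer.add_tensor(name, data.astype(np.float16))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```

Quantizing to Q4_K_M would then presumably go through the existing `llama-quantize` tool (e.g. `./llama-quantize deepseek-v3-mtp-f16.gguf deepseek-v3-mtp-q4_k_m.gguf Q4_K_M`), though that tool expects full model metadata, so a tensor-only file like the one above may need additional KV fields before it is accepted.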