Conversation
Replaced the risky tensor move operation (`m.to(load).to(offload)`) with a safe in-memory copy mechanism. This prevents unnecessary GPU memory allocation and potential OOM errors when detaching mmap files. Implemented a `_detach_tensor` helper that handles `GGMLTensor`, `nn.Parameter` wrapping, and shared weights correctly.
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Implemented a robust solution for detaching mmap tensors in `GGUFModelPatcher.load`. Previously, the code used `m.to(load_device).to(offload_device)` to break mmap links. If `load_device` was GPU, this caused a massive VRAM spike and data-movement overhead.

The new `_detach_tensor` method performs a direct copy on the same device (CPU) into anonymous memory. It also handles:

- `GGMLTensor` specifics (creating a new instance with metadata).
- `nn.Parameter` wrapping (ensuring modules remain valid).

This fixes the potential OOM and the "risky workaround" issue identified in the code.
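The core idea above can be sketched in plain PyTorch. This is a minimal illustration, not the PR's actual code: the function names, the absence of `GGMLTensor`-specific metadata handling, and the parameter-replacement loop are all assumptions made for the example.

```python
# Hypothetical sketch of the same-device detach-copy idea: allocate fresh
# anonymous memory and copy into it, instead of round-tripping through the GPU
# with m.to(load_device).to(offload_device). Names here are illustrative only.
import torch
import torch.nn as nn


def detach_tensor(t: torch.Tensor) -> torch.Tensor:
    """Copy a (possibly mmap-backed) tensor into fresh memory on the SAME device."""
    out = torch.empty_like(t)  # new anonymous allocation, same device and dtype
    out.copy_(t)               # in-memory copy; no cross-device transfer
    return out


def detach_module_tensors(module: nn.Module) -> None:
    """Replace each direct parameter with a detached copy, re-wrapping in
    nn.Parameter so the module remains valid (mirrors the wrapping concern
    mentioned above; shared-weight deduplication is omitted here)."""
    for name, p in list(module.named_parameters(recurse=False)):
        fresh = detach_tensor(p.data)
        setattr(module, name, nn.Parameter(fresh, requires_grad=p.requires_grad))
```

After `detach_module_tensors(m)` runs, the module's parameters point at newly allocated storage, so any mmap backing can be released without moving data through the GPU.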
PR created automatically by Jules for task 7366240568078524864 started by @Aatricks