Your work is truly impressive. I'm currently reproducing PointLLM and ran into an out-of-memory (OOM) error during the first training stage. I have 2×24 GB RTX 4090 GPUs. Are there any configuration adjustments I could make to get the first stage running? For the second stage, I plan to fine-tune with LoRA. Given these compute resources, would it be feasible to run the complete training pipeline with only minor modifications to PointLLM? I look forward to your response!
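For reference, here is a minimal sketch of what the stage-2 LoRA setup might look like using HuggingFace PEFT. This is not taken from the PointLLM repo: the backbone name, target module names, and the memory-saving choices (bf16, gradient checkpointing) are assumptions that would need to be adapted to whatever model and training script PointLLM actually uses.

```python
# Hypothetical LoRA wrapping sketch (not PointLLM's actual code):
# freeze the LLM backbone and train only low-rank adapters, which is
# the usual way to fit 7B-scale fine-tuning onto 24 GB cards.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.1",       # assumed backbone; replace with the checkpoint PointLLM loads
    torch_dtype=torch.bfloat16,   # half precision to cut weight/activation memory
)
model.gradient_checkpointing_enable()  # trade extra compute for lower activation memory

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # assumed LLaMA-style attention projection names; verify against the actual module names
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: trainable params should be a small fraction of the total
```

For the stage-1 OOM itself, the generic levers are a smaller per-device batch size with gradient accumulation, bf16/fp16 training, and DeepSpeed ZeRO offloading; which of these PointLLM's training configs already expose is something the authors would need to confirm.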