
2×24GB train PointLLM-7B #57

@user-deng

Description


Your work is truly impressive. I'm currently reproducing PointLLM and hit an out-of-memory (OOM) error in the first training stage. I have two 24GB RTX 4090 GPUs; are there any configuration adjustments that would let me proceed with the reproduction? For the second stage, I plan to fine-tune with LoRA. Given these compute resources, would a complete training pipeline be feasible with only minor modifications to PointLLM? I look forward to your response!
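For context, here is a back-of-envelope estimate (my own rough numbers, not from the PointLLM repo) of why full fine-tuning of a 7B model OOMs on 24GB cards and why LoRA-style training helps. It counts only fp16 weights plus Adam optimizer state and fp32 gradients/master weights for the trainable parameters; activations, batch size, and sequence length are not modeled.

```python
def train_memory_gb(params_b, trainable_frac=1.0,
                    bytes_weights=2, bytes_optim=12):
    """Rough training memory in GB for a model with `params_b` billion
    parameters: fp16 weights (2 bytes/param) plus ~12 bytes/param of
    Adam state and fp32 gradients for the trainable subset only."""
    weights = params_b * bytes_weights                  # fp16 weights
    optim = params_b * trainable_frac * bytes_optim     # optimizer state
    return weights + optim                              # GB total

# Full fine-tuning of 7B: far beyond 2x24GB even before activations.
full = train_memory_gb(7.0)                         # ~98 GB
# LoRA with ~1% trainable parameters: weights dominate, ~15 GB.
lora = train_memory_gb(7.0, trainable_frac=0.01)    # ~14.8 GB
print(f"full: {full:.1f} GB, lora: {lora:.1f} GB")
```

By this estimate the stage-1 OOM is expected without sharding the optimizer state across GPUs (e.g. DeepSpeed ZeRO) or freezing most parameters, while a LoRA stage 2 should fit within 24GB per card, leaving headroom for activations.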
