
CUDA runs out of memory when I use the model I trained, but the pretrained model works fine #61

Open
shhjjj opened this issue Dec 14, 2023 · 3 comments

Comments


shhjjj commented Dec 14, 2023

Hi,
I have some problems when I try to use the model I trained.
The first issue: I use the vit-h config and my GPU is an RTX A6000 (48 GB). Training from the vit_h pretrained model works without error, but after training, when I run test.py with the model I trained, CUDA runs out of memory. I already tried batch_size=1 and the error still happens.
The second issue: when I try to train or test on 2 GPUs with the .pth I saved, the first GPU runs all the local_rank processes, and this also makes CUDA run out of memory.
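For the second issue, one common cause (not confirmed from this repo) is that every DDP process calls torch.load with the default map_location, so each rank first deserializes the whole ViT-H checkpoint onto cuda:0. A minimal sketch of loading the saved .pth per rank; the checkpoint path and variable names here are hypothetical:

    import os
    import torch

    # Each spawned process reads its own rank from the launcher's environment.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    device = torch.device(f"cuda:{local_rank}")
    torch.cuda.set_device(device)

    # map_location keeps the checkpoint off cuda:0: every rank loads the weights
    # onto its own GPU (or use map_location="cpu" and move the model afterwards).
    state = torch.load("save/model_epoch_last.pth", map_location=device)  # hypothetical path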


lixhere commented Jan 10, 2024

Hello, I have also encountered the first situation you mentioned. May I ask if you have resolved it?

@WenDongyp


Hello, I also ran into the same situation. Have you managed to solve it?

@Divine0719

@shhjjj @lixhere @WenDongyp @tianrun-chen in test.py, try wrapping inference in torch.no_grad():

    with torch.no_grad():
        pred = torch.sigmoid(model.infer(inp))
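This helps because, if the forward pass in test.py runs outside torch.no_grad(), autograd keeps every intermediate activation alive in case backward() is later called; for a ViT-H backbone that alone can exhaust 48 GB even at batch_size=1. A slightly fuller sketch of the same fix, assuming `model`, `loader`, and `device` are already set up in test.py (the surrounding variable names are illustrative, only the two inner lines come from the comment above):

    # Evaluation-mode inference: no dropout, no autograd graph.
    model.eval()
    with torch.no_grad():
        for batch in loader:
            inp = batch['inp'].to(device)
            pred = torch.sigmoid(model.infer(inp))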
