
about training speed #7

Closed
duanyifei777 opened this issue Apr 8, 2024 · 2 comments

Comments

@duanyifei777

I'd like to ask about training speed. One epoch takes about 5 minutes on a 4090 under Windows, which feels a little slow. Is that normal? My dataset is only about a quarter the size of ISIC2017.

@wurenkai
Owner

wurenkai commented Apr 8, 2024

Hi, a 5-minute epoch is not normal. Training one epoch on the full ISIC2017 dataset takes me less than half a minute, and your dataset is only a quarter of that size.

You can check whether the parameters and GFLOPs are as expected using the following code in 'train.py' (adjust the input size as needed).

    print('#----------Prepareing Models----------#')
    model_cfg = config.model_config
    model = UltraLight_VM_UNet(num_classes=model_cfg['num_classes'], 
                               input_channels=model_cfg['input_channels'], 
                               c_list=model_cfg['c_list'], 
                               split_att=model_cfg['split_att'], 
                               bridge=model_cfg['bridge'],)
    model = model.cuda()

    cal_params_flops(model, 256, logger) # 256 is the size of the input model image, change as desired.
    #model = torch.nn.DataParallel(model.cuda(), device_ids=gpu_ids, output_device=gpu_ids[0])
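If your copy of the script does not include `cal_params_flops`, a minimal sketch of the same check using the `thop` package (an assumption here; the repository provides its own utility) would look like this:

    import torch
    from thop import profile, clever_format  # pip install thop

    # Dummy input matching the training resolution (assumed 3-channel, 256x256).
    dummy = torch.randn(1, 3, 256, 256).cuda()

    # thop reports multiply-accumulate operations (MACs) and the parameter count.
    macs, params = profile(model, inputs=(dummy,))
    macs, params = clever_format([macs, params], "%.3f")
    print(f'MACs: {macs}, Params: {params}')

If these numbers match the values reported in the paper, the model itself is configured correctly and the slowdown lies elsewhere (e.g. in the data pipeline).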

@duanyifei777
Author

OK, thanks. My parameters and complexity are normal, so I think it's because I'm on Windows; I'll try running it on Linux.
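One plausible culprit on Windows (not confirmed in this thread) is DataLoader worker startup: Windows spawns worker processes rather than forking them, so per-epoch worker creation can dominate when the model is as light as this one. A minimal mitigation sketch, assuming a standard PyTorch DataLoader (the dataset name is illustrative):

    from torch.utils.data import DataLoader

    # persistent_workers keeps worker processes alive between epochs,
    # avoiding the costly per-epoch spawn that Windows uses instead of fork.
    train_loader = DataLoader(
        train_dataset,          # illustrative name; use your dataset object
        batch_size=8,
        shuffle=True,
        num_workers=4,
        pin_memory=True,
        persistent_workers=True,
    )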
