Produce Larger Output Image #8

Closed
kyung645 opened this issue Feb 13, 2019 · 6 comments

Comments

@kyung645

Hi,

Is it possible to produce larger output images? Currently, the outputs seem to be around 450x300. I tried adding a --load_size 1024 option, but it fails with "TypeError: can't multiply sequence by non-int of type 'float'". Would you happen to know how to generate larger images, around 1024x1024? Thanks.

@enigmanx20

enigmanx20 commented Feb 19, 2019

@kyung645
Hi,
I'm not a contributor, but I have managed to produce photos as large as 2048x2048.
My environment is
GPU: NVIDIA P100 with 16GB of VRAM x2
PyTorch version: 1.0
The original PyTorch code works, but since the volatile option is deprecated in recent PyTorch versions, test.py should be fixed unless you don't mind running out of memory.

First, the inference part should run under torch.no_grad(). Second, the GPU cache should be cleared after every image. With those changes, the last part of test.py should look like this. You can then process large photo inputs by adjusting the --load_size option.

Good luck!

    if opt.gpu > -1:
        with torch.no_grad():
            input_image = Variable(input_image).cuda()
            # forward
            output_image = model(input_image)
            output_image = output_image[0]
            # BGR -> RGB
            output_image = output_image[[2, 1, 0], :, :]
    else:
        with torch.no_grad():
            input_image = Variable(input_image).float()
            # forward
            output_image = model(input_image)
            output_image = output_image[0]
            # BGR -> RGB
            output_image = output_image[[2, 1, 0], :, :]

    # deprocess, (0, 1)
    output_image = output_image.data.cpu().float() * 0.5 + 0.5
    # save
    print('Saving...%s' % (files[:-4] + '_' + opt.style + '.jpg'))
    vutils.save_image(output_image, os.path.join(opt.output_dir, files[:-4] + '_' + opt.style + '.jpg'))

    # release cached GPU memory before processing the next image
    torch.cuda.empty_cache()

print('Done!')
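
(Side note, not part of the original test.py: since Variable is a no-op wrapper in PyTorch 0.4+, the same branch can also be written without it. A minimal sketch with the same behavior:

    with torch.no_grad():
        if opt.gpu > -1:
            input_image = input_image.cuda()
        else:
            input_image = input_image.float()
        # forward
        output_image = model(input_image)[0]
        # BGR -> RGB
        output_image = output_image[[2, 1, 0], :, :]

It just avoids the deprecated wrapper.)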

@DrazHD

DrazHD commented Feb 20, 2019

@kyung645 The --load_size argument is missing a type declaration in test.py on line 13. It should read like this:

parser.add_argument('--load_size', type=int, default=450)
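
For context, a minimal illustration (not the repo's actual resize code) of why the original flag fails: without type=int, argparse hands the value back as a string, and scaling that string by a float raises exactly the error above.

import argparse

# without type=int the parsed value is a str, even though the default is an int
parser = argparse.ArgumentParser()
parser.add_argument('--load_size', default=450)
opt = parser.parse_args(['--load_size', '1024'])

ratio = 450.0 / 1024  # hypothetical float scaling factor, just for the demo
try:
    opt.load_size * ratio  # str * float
except TypeError as e:
    print(e)  # can't multiply sequence by non-int of type 'float'

# with type=int the same arithmetic works
parser = argparse.ArgumentParser()
parser.add_argument('--load_size', type=int, default=450)
opt = parser.parse_args(['--load_size', '1024'])
print(opt.load_size * ratio)  # 450.0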

@Yijunmaverick
Owner

Thank you all @enigmanx20 @DrazHD for making the code better!

@kyung645
Author

Thank you @enigmanx20 @DrazHD.

The type declaration for the argument was a quick fix, and it let me produce images of at least 1024x1024.

I then tried to generate 2048x2048 but ran into a memory problem, so I will try @enigmanx20's suggestion. By the way, an NVIDIA P100 sounds nice! Is that a local setup?

Thanks again!

@FantasyJXF

I tested a 1024x1024 image and it takes 13 GB of memory to run the model. Is that okay?
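
If you want to see where the memory goes, one option (a minimal sketch, not part of the repo, assuming a CUDA build of PyTorch) is to log the allocator counters around the forward pass:

import torch

# minimal sketch: report allocator stats around a no_grad forward pass
# (model and input_image are the same objects test.py already builds)
def report_gpu_memory(model, input_image):
    torch.cuda.empty_cache()
    before = torch.cuda.memory_allocated() / 1024 ** 3
    with torch.no_grad():
        output_image = model(input_image.cuda())
    peak = torch.cuda.max_memory_allocated() / 1024 ** 3
    print('allocated before forward: %.2f GB, peak allocated so far: %.2f GB' % (before, peak))
    return output_image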

@huanmingcn

(quoting @enigmanx20's suggestion above)

I don't have an NVIDIA GPU, so I run it on the CPU.
I have 32GB of RAM, but it doesn't seem to help. I tried the code you posted and set load_size = 1000, but the OOM problem still exists.
