
[Urgent!!!] When I use my own dataset and run the script "train_blending_gan.py", the command line seems to be stuck, with no errors at all! #40

Closed
TheWangYang opened this issue Dec 18, 2022 · 2 comments


@TheWangYang

When I use my own dataset and run the script "train_blending_gan.py", the command line seems to be stuck like this:

Input arguments:
        nef: 64
        ngf: 64
        nc: 3
        nBottleneck: 4000
        ndf: 64
        lr_d: 0.0002
        lr_g: 0.002
        beta1: 0.5
        l2_weight: 0.999
        gpu: 1
        n_epoch: 25
        data_root: steel_plates
        load_size: 64
        image_size: 64
        ratio: 0.5
        val_ratio: 0.05
        d_iters: 5
        clamp_lower: -0.01
        clamp_upper: 0.01
        experiment: encoder_decoder_blending_result
        test_folder: samples
        workers: 4
        batch_size: 64
        test_size: 64
        train_samples: 150000
        test_samples: 256
        manual_seed: 5
        resume: 
        snapshot_interval: 1
        print_interval: 1
        plot_interval: 10

Create & Init models ...
        Init G network ...
        Init D network ...
        Copy models to gpu 1 ...
Init models done ...

Load images from steel_plates ...
        21 folders in total, 1 val folders ...
        Trainset contains 150000 image files
        Valset contains 256 image files

Saving samples to encoder_decoder_blending_result/samples ...



And when I kill the script, the command line shows this:

^CTraceback (most recent call last):
  File "train_blending_gan.py", line 183, in <module>
    main()
  File "train_blending_gan.py", line 164, in main
    train_batch = [trainset[idx][0] for idx in range(args.test_size)]
  File "train_blending_gan.py", line 164, in <listcomp>
    train_batch = [trainset[idx][0] for idx in range(args.test_size)]
  File "/home/wyy/anaconda3/envs/python36/lib/python3.6/site-packages/chainer/dataset/dataset_mixin.py", line 67, in __getitem__
    return self.get_example(index)
  File "/ssd3/wyy/projects/GP-GAN/dataset.py", line 84, in get_example
    obj_croped = self._crop(obj, rw, rh, sx, sy)
  File "/ssd3/wyy/projects/GP-GAN/dataset.py", line 66, in _crop
    im = resize(im, (rw, rh), order=1, preserve_range=False, mode='constant')
  File "/home/wyy/anaconda3/envs/python36/lib/python3.6/site-packages/skimage/transform/_warps.py", line 148, in resize
    cval=cval, mode=ndi_mode)
  File "/home/wyy/anaconda3/envs/python36/lib/python3.6/site-packages/scipy/ndimage/filters.py", line 299, in gaussian_filter
    mode, cval, truncate)
  File "/home/wyy/anaconda3/envs/python36/lib/python3.6/site-packages/scipy/ndimage/filters.py", line 217, in gaussian_filter1d
    return correlate1d(input, weights, axis, output, mode, cval, 0)
  File "/home/wyy/anaconda3/envs/python36/lib/python3.6/site-packages/scipy/ndimage/filters.py", line 95, in correlate1d
    origin)
KeyboardInterrupt

Are the images in my dataset (5000x3000) too large? Or is it something else? I look forward to your reply and would appreciate it.
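
For context on where the time goes: the traceback above shows the script inside the sample-dumping loop at train_blending_gan.py line 164, which fetches test_size (64 here) examples one at a time, and each fetch resizes a full-resolution source image with skimage. A minimal timing sketch of that cost (the random 5000x3000 array is a stand-in for one photo; exact timings vary by machine):

import time
import numpy as np
from skimage.transform import resize

# Stand-in for one 5000x3000 RGB photo, values in [0, 1].
im = np.random.rand(3000, 5000, 3)

t0 = time.time()
small = resize(im, (64, 64), order=1, preserve_range=False, mode='constant')
print('one resize took %.2f s' % (time.time() - t0))

# train_blending_gan.py repeats this for each of the test_size samples before
# printing anything after "Saving samples to ...", so a long silent pause here
# can look like a hang but may just be slow resizing.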

@wuhuikai (Owner)

It's too large. Can you try a smaller size, say 64x64?

@TheWangYang (Author)

However, I already scale the original 5000x3000 images down to 64x64 during training (load_size / image_size = 64, as in the log above), and the problem I described still occurred. @wuhuikai
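
Note that load_size / image_size = 64 only sets the target of the runtime resize; dataset.py still decodes and shrinks every full 5000x3000 file each time a sample is accessed. One workaround for the slow startup is to downscale the source files on disk once before training. A hedged sketch with Pillow (the folder names and the 256-pixel cap are assumptions, not part of GP-GAN):

import os
from PIL import Image

src_root, dst_root = 'steel_plates', 'steel_plates_small'  # assumed layout

for dirpath, _, filenames in os.walk(src_root):
    for name in filenames:
        if not name.lower().endswith(('.jpg', '.jpeg', '.png')):
            continue
        dst_dir = os.path.join(dst_root, os.path.relpath(dirpath, src_root))
        os.makedirs(dst_dir, exist_ok=True)
        im = Image.open(os.path.join(dirpath, name)).convert('RGB')
        im.thumbnail((256, 256))  # downscale in place, keeping aspect ratio
        im.save(os.path.join(dst_dir, name))

Pointing data_root at steel_plates_small should then make each per-sample resize cheap compared with 5000x3000 inputs.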

wuhuikai closed this as completed Nov 6, 2023