data_file when training own images #62

Open
qwerdbeta opened this issue Aug 25, 2020 · 9 comments

@qwerdbeta

Does anyone know the format of the data file with all of the paths when training your own images? I can't figure it out. I tried a .flist file that had each image path on a new line, but that didn't work. Comma-separated paths didn't work either. Is it some special file format?

@shepnerd
Owner

shepnerd commented Aug 25, 2020

Use the absolute paths of the target images, e.g., /data/proj/a.png or D:\\proj\\a.png, with one file path per line in the data file. The file extension does not matter as long as the file is plain text readable by the IO functions.
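
For example, a small script along these lines can generate such a data file; this is just a sketch with a hypothetical folder and output name, not code from this repository:

import glob
import os

image_dir = '/data/proj'          # hypothetical folder containing the training images
list_path = 'train_images.flist'  # any extension works; the file is read as plain text

# Write one absolute image path per line.
paths = sorted(glob.glob(os.path.join(image_dir, '*.png')))
with open(list_path, 'w') as f:
    for p in paths:
        f.write(os.path.abspath(p) + '\n')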

@qwerdbeta
Author

qwerdbeta commented Aug 25, 2020

Thank you! That worked, but then I ran into the next issue when trying to train, and it's not clear what the cause is. The error stack trace is:

(inpainting_gmcnn) G:\pythonAI\inpainting_gmcnn\tensorflow>python train.py --dataset celeba --data_file G:\pythonAI\training_images\source_images\t2.index --pretrain_network 1
------------ Options -------------
ae_loss_alpha: 1.2
batch_size: 24
checkpoints_dir: ./checkpoints
d_cnum: 64
data_file: G:\pythonAI\training_images\source_images\t2.index
dataset: celeba
dataset_path: G:\pythonAI\training_images\source_images\t2.index
date_str: 20200825-103721
g_cnum: 32
gan_loss_alpha: 0.001
gpu_ids: ['0', '1']
img_shapes: [256, 256, 3]
l1_loss_alpha: 1.4
load_model_dir:
lr: 1e-05
margins: [0, 0]
mask_shapes: [256, 256]
mask_type: rect
max_delta_shapes: [32, 32]
max_iters: 1000
model_folder: ./checkpoints\20200825-103721_GMCNN_celeba_b24_s256x256_gc32_dc64_randmask-rect_pretrain
model_name: GMCNN
model_prefix: snap
mrf_alpha: 0.05
pretrain_l1_alpha: 1.2
pretrain_network: True
random_mask: True
random_seed: False
train_spe: 1000
vgg19_path: vgg19_weights/imagenet-vgg-verydeep-19.mat
viz_steps: 5
wgan_gp_lambda: 10
-------------- End ----------------
[256, 256, 3]
G:\pythonAI\training_images\source_images\t2.index
24
['G:\pythonAI\training_images\source_images\001.png', 'G:\pythonAI\training_images\source_images\002.png']
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\data\data.py:15: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\data\data.py:17: slice_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\training\input.py:373: range_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.range(limit).shuffle(limit).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\training\input.py:319: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\training\input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\training\input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\training\input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\data\data.py:19: The name tf.read_file is deprecated. Please use tf.io.read_file instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\data\data.py:22: The name tf.image.resize_image_with_crop_or_pad is deprecated. Please use tf.image.resize_with_crop_or_pad instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\data\data.py:24: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True).
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\ops.py:119: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\ops.py:155: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:35: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.Conv2D instead.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\layers\convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use layer.__call__ method instead.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:49: The name tf.image.resize_bilinear is deprecated. Please use tf.compat.v1.image.resize_bilinear instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:66: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

Pretrain the whole net with only reconstruction loss.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:224: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:225: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:137: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:156: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
Set L1_LOSS_ALPHA to 1.400000
Set GAN_LOSS_ALPHA to 0.001000
Set AE_LOSS_ALPHA to 1.200000
WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:286: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py:287: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From train.py:20: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.

WARNING:tensorflow:From train.py:24: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.

WARNING:tensorflow:From G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\ops\math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From train.py:30: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

WARNING:tensorflow:From train.py:32: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

WARNING:tensorflow:From train.py:34: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-08-25 10:37:24.089038: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From train.py:35: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

WARNING:tensorflow:From train.py:50: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

WARNING:tensorflow:From train.py:53: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
2020-08-25 10:37:27.684580: W tensorflow/core/kernels/queue_base.cc:277] _0_feed/input_producer/input_producer: Skipping cancelled enqueue attempt with queue not closed
2020-08-25 10:37:27.687946: W tensorflow/core/kernels/queue_base.cc:277] _1_feed/batch/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2020-08-25 10:37:27.691128: W tensorflow/core/kernels/queue_base.cc:277] _1_feed/batch/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2020-08-25 10:37:27.705672: W tensorflow/core/kernels/queue_base.cc:277] _1_feed/batch/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2020-08-25 10:37:27.719887: W tensorflow/core/kernels/queue_base.cc:277] _1_feed/batch/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Need minval < maxval, got 0 >= 0
[[{{node random_uniform_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 61, in
_, g_loss = sess.run([g_train_op, losses['g_loss']])
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Need minval < maxval, got 0 >= 0
[[node random_uniform_1 (defined at G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Original stack trace for 'random_uniform_1':
File "train.py", line 18, in
g_vars, d_vars, losses = model.build_net(images, config=config)
File "G:\pythonAI\inpainting_gmcnn\tensorflow\net\network.py", line 172, in build_net
bbox = random_bbox(config)
File "G:\pythonAI\inpainting_gmcnn\tensorflow\net\ops.py", line 122, in random_bbox
[], minval=config.margins[1], maxval=maxl, dtype=tf.int32)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\ops\random_ops.py", line 243, in random_uniform
shape, minval, maxval, seed=seed1, seed2=seed2, name=name)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\ops\gen_random_ops.py", line 921, in random_uniform_int
seed=seed, seed2=seed2, name=name)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "G:\pythonAI\inpainting_gmcnn\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

(inpainting_gmcnn) G:\pythonAI\inpainting_gmcnn\tensorflow>
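
For reference, the stack trace above shows random_bbox calling tf.random.uniform with minval=config.margins[1] and maxval=maxl. With the options printed above (margins [0, 0], img_shapes [256, 256, 3], mask_shapes [256, 256]), a maxval of 0 would produce exactly this "Need minval < maxval, got 0 >= 0" error, assuming maxl is derived from the image extent minus the mask extent. A minimal sketch reproducing the failing call with those assumed values:

import tensorflow as tf

# Assumed derivation of maxl: image extent minus mask extent, i.e. 256 - 256 = 0,
# mirroring the printed options (margins [0, 0], mask_shapes equal to img_shapes).
maxl = 256 - 256

# For integer dtypes tf.random.uniform requires minval < maxval, so this call
# fails with InvalidArgumentError: Need minval < maxval, got 0 >= 0.
sample = tf.random.uniform([], minval=0, maxval=maxl, dtype=tf.int32)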

@qwerdbeta
Author

Is it because I don't have enough images? This was just a test run with 181 images.

@qwerdbeta
Author

The above was with the TensorFlow implementation. I also tried the PyTorch implementation on Windows, but it causes this error during training:

(inpainting_gmcnn) G:\pythonAI\inpainting_gmcnn\pytorch>python train.py --dataset celeba --data_file G:\pythonAI\training_images\source_images\train_images.index
------------ Options -------------
D_max_iters: 5
batch_size: 16
checkpoint_dir: ./checkpoints
d_cnum: 64
data_file: G:\pythonAI\training_images\source_images\train_images.index
dataset: celeba
dataset_path: G:\pythonAI\training_images\source_images\train_images.index
date_str: 20200825-120636
epochs: 40
g_cnum: 32
gpu_ids: ['0']
img_shapes: [256, 256, 3]
lambda_adv: 0.001
lambda_ae: 1.2
lambda_gp: 10
lambda_mrf: 0.05
lambda_rec: 1.4
load_model_dir:
lr: 1e-05
margins: [0, 0]
mask_shapes: [128, 128]
mask_type: rect
max_delta_shapes: [32, 32]
model_folder: ./checkpoints\20200825-120636_GMCNN_celeba_b16_s256x256_gc32_dc64_randmask-rect
model_name: GMCNN
padding: SAME
phase: train
pretrain_network: False
random_crop: True
random_mask: True
random_seed: False
spectral_norm: True
train_spe: 1000
vgg19_path: vgg19_weights/imagenet-vgg-verydeep-19.mat
viz_steps: 5
-------------- End ----------------
loading data..
data loaded..
configuring model..
initialize network with normal
initialize network with normal
---------- Networks initialized -------------
GMCNN(
(EB1): ModuleList(
(0): Conv2d(4, 32, kernel_size=(7, 7), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(7, 7), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(7, 7), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(7, 7), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(12): PureUpsampling()
)
(EB2): ModuleList(
(0): Conv2d(4, 32, kernel_size=(5, 5), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(5, 5), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(12): PureUpsampling()
(13): Conv2d(128, 64, kernel_size=(5, 5), stride=(1, 1))
(14): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1))
(15): PureUpsampling()
)
(EB3): ModuleList(
(0): Conv2d(4, 32, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(12): PureUpsampling()
(13): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1))
(14): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(15): PureUpsampling()
(16): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
(17): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
)
(decoding_layers): ModuleList(
(0): Conv2d(224, 16, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(16, 3, kernel_size=(3, 3), stride=(1, 1))
)
(pads): ModuleList(
(0): ReflectionPad2d((0, 0, 0, 0))
(1): ReflectionPad2d((1, 1, 1, 1))
(2): ReflectionPad2d((2, 2, 2, 2))
(3): ReflectionPad2d((3, 3, 3, 3))
(4): ReflectionPad2d((4, 4, 4, 4))
(5): ReflectionPad2d((5, 5, 5, 5))
(6): ReflectionPad2d((6, 6, 6, 6))
(7): ReflectionPad2d((7, 7, 7, 7))
(8): ReflectionPad2d((8, 8, 8, 8))
(9): ReflectionPad2d((9, 9, 9, 9))
(10): ReflectionPad2d((10, 10, 10, 10))
(11): ReflectionPad2d((11, 11, 11, 11))
(12): ReflectionPad2d((12, 12, 12, 12))
(13): ReflectionPad2d((13, 13, 13, 13))
(14): ReflectionPad2d((14, 14, 14, 14))
(15): ReflectionPad2d((15, 15, 15, 15))
(16): ReflectionPad2d((16, 16, 16, 16))
(17): ReflectionPad2d((17, 17, 17, 17))
(18): ReflectionPad2d((18, 18, 18, 18))
(19): ReflectionPad2d((19, 19, 19, 19))
(20): ReflectionPad2d((20, 20, 20, 20))
(21): ReflectionPad2d((21, 21, 21, 21))
(22): ReflectionPad2d((22, 22, 22, 22))
(23): ReflectionPad2d((23, 23, 23, 23))
(24): ReflectionPad2d((24, 24, 24, 24))
(25): ReflectionPad2d((25, 25, 25, 25))
(26): ReflectionPad2d((26, 26, 26, 26))
(27): ReflectionPad2d((27, 27, 27, 27))
(28): ReflectionPad2d((28, 28, 28, 28))
(29): ReflectionPad2d((29, 29, 29, 29))
(30): ReflectionPad2d((30, 30, 30, 30))
(31): ReflectionPad2d((31, 31, 31, 31))
(32): ReflectionPad2d((32, 32, 32, 32))
(33): ReflectionPad2d((33, 33, 33, 33))
(34): ReflectionPad2d((34, 34, 34, 34))
(35): ReflectionPad2d((35, 35, 35, 35))
(36): ReflectionPad2d((36, 36, 36, 36))
(37): ReflectionPad2d((37, 37, 37, 37))
(38): ReflectionPad2d((38, 38, 38, 38))
(39): ReflectionPad2d((39, 39, 39, 39))
(40): ReflectionPad2d((40, 40, 40, 40))
(41): ReflectionPad2d((41, 41, 41, 41))
(42): ReflectionPad2d((42, 42, 42, 42))
(43): ReflectionPad2d((43, 43, 43, 43))
(44): ReflectionPad2d((44, 44, 44, 44))
(45): ReflectionPad2d((45, 45, 45, 45))
(46): ReflectionPad2d((46, 46, 46, 46))
(47): ReflectionPad2d((47, 47, 47, 47))
(48): ReflectionPad2d((48, 48, 48, 48))
)
)
[Network GM] Total number of parameters : 12.562 M

model setting up..
training initializing..
------------ Options -------------
D_max_iters: 5
batch_size: 16
checkpoint_dir: ./checkpoints
d_cnum: 64
data_file: G:\pythonAI\training_images\source_images\train_images.index
dataset: celeba
dataset_path: G:\pythonAI\training_images\source_images\train_images.index
date_str: 20200825-120640
epochs: 40
g_cnum: 32
gpu_ids: ['0']
img_shapes: [256, 256, 3]
lambda_adv: 0.001
lambda_ae: 1.2
lambda_gp: 10
lambda_mrf: 0.05
lambda_rec: 1.4
load_model_dir:
lr: 1e-05
margins: [0, 0]
mask_shapes: [128, 128]
mask_type: rect
max_delta_shapes: [32, 32]
model_folder: ./checkpoints\20200825-120640_GMCNN_celeba_b16_s256x256_gc32_dc64_randmask-rect
model_name: GMCNN
padding: SAME
phase: train
pretrain_network: False
random_crop: True
random_mask: True
random_seed: False
spectral_norm: True
train_spe: 1000
vgg19_path: vgg19_weights/imagenet-vgg-verydeep-19.mat
viz_steps: 5
-------------- End ----------------
loading data..
data loaded..
configuring model..
initialize network with normal
initialize network with normal
---------- Networks initialized -------------
GMCNN(
(EB1): ModuleList(
(0): Conv2d(4, 32, kernel_size=(7, 7), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(7, 7), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(7, 7), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(7, 7), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1))
(12): PureUpsampling()
)
(EB2): ModuleList(
(0): Conv2d(4, 32, kernel_size=(5, 5), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(5, 5), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1))
(12): PureUpsampling()
(13): Conv2d(128, 64, kernel_size=(5, 5), stride=(1, 1))
(14): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1))
(15): PureUpsampling()
)
(EB3): ModuleList(
(0): Conv2d(4, 32, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2))
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(3): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2))
(4): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(5): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(6): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(2, 2))
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(4, 4))
(8): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(8, 8))
(9): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), dilation=(16, 16))
(10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(11): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
(12): PureUpsampling()
(13): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1))
(14): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(15): PureUpsampling()
(16): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1))
(17): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
)
(decoding_layers): ModuleList(
(0): Conv2d(224, 16, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(16, 3, kernel_size=(3, 3), stride=(1, 1))
)
(pads): ModuleList(
(0): ReflectionPad2d((0, 0, 0, 0))
(1): ReflectionPad2d((1, 1, 1, 1))
(2): ReflectionPad2d((2, 2, 2, 2))
(3): ReflectionPad2d((3, 3, 3, 3))
(4): ReflectionPad2d((4, 4, 4, 4))
(5): ReflectionPad2d((5, 5, 5, 5))
(6): ReflectionPad2d((6, 6, 6, 6))
(7): ReflectionPad2d((7, 7, 7, 7))
(8): ReflectionPad2d((8, 8, 8, 8))
(9): ReflectionPad2d((9, 9, 9, 9))
(10): ReflectionPad2d((10, 10, 10, 10))
(11): ReflectionPad2d((11, 11, 11, 11))
(12): ReflectionPad2d((12, 12, 12, 12))
(13): ReflectionPad2d((13, 13, 13, 13))
(14): ReflectionPad2d((14, 14, 14, 14))
(15): ReflectionPad2d((15, 15, 15, 15))
(16): ReflectionPad2d((16, 16, 16, 16))
(17): ReflectionPad2d((17, 17, 17, 17))
(18): ReflectionPad2d((18, 18, 18, 18))
(19): ReflectionPad2d((19, 19, 19, 19))
(20): ReflectionPad2d((20, 20, 20, 20))
(21): ReflectionPad2d((21, 21, 21, 21))
(22): ReflectionPad2d((22, 22, 22, 22))
(23): ReflectionPad2d((23, 23, 23, 23))
(24): ReflectionPad2d((24, 24, 24, 24))
(25): ReflectionPad2d((25, 25, 25, 25))
(26): ReflectionPad2d((26, 26, 26, 26))
(27): ReflectionPad2d((27, 27, 27, 27))
(28): ReflectionPad2d((28, 28, 28, 28))
(29): ReflectionPad2d((29, 29, 29, 29))
(30): ReflectionPad2d((30, 30, 30, 30))
(31): ReflectionPad2d((31, 31, 31, 31))
(32): ReflectionPad2d((32, 32, 32, 32))
(33): ReflectionPad2d((33, 33, 33, 33))
(34): ReflectionPad2d((34, 34, 34, 34))
(35): ReflectionPad2d((35, 35, 35, 35))
(36): ReflectionPad2d((36, 36, 36, 36))
(37): ReflectionPad2d((37, 37, 37, 37))
(38): ReflectionPad2d((38, 38, 38, 38))
(39): ReflectionPad2d((39, 39, 39, 39))
(40): ReflectionPad2d((40, 40, 40, 40))
(41): ReflectionPad2d((41, 41, 41, 41))
(42): ReflectionPad2d((42, 42, 42, 42))
(43): ReflectionPad2d((43, 43, 43, 43))
(44): ReflectionPad2d((44, 44, 44, 44))
(45): ReflectionPad2d((45, 45, 45, 45))
(46): ReflectionPad2d((46, 46, 46, 46))
(47): ReflectionPad2d((47, 47, 47, 47))
(48): ReflectionPad2d((48, 48, 48, 48))
)
)
[Network GM] Total number of parameters : 12.562 M

model setting up..
training initializing..
Traceback (most recent call last):
File "", line 1, in
Traceback (most recent call last):
File "train.py", line 34, in
for i, data in enumerate(dataloader):
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 105, in spawn_main
File "G:\pythonAI\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in iter
exitcode = _main(fd)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 114, in _main
return _DataLoaderIter(self)
prepare(preparation_data)
File "G:\pythonAI\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in init
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 225, in prepare
w.start()
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\process.py", line 112, in start
_fixup_main_from_path(data['init_main_from_path'])
self._popen = self._Popen(self)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\context.py", line 223, in _Popen
run_name="mp_main")
return _default_context.get_context().Process._Popen(process_obj)
File "G:\pythonAI\Miniconda3\Lib\runpy.py", line 263, in run_path
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\context.py", line 322, in _Popen
pkg_name=pkg_name, script_name=fname)
File "G:\pythonAI\Miniconda3\Lib\runpy.py", line 96, in _run_module_code
return Popen(process_obj)
mod_name, mod_spec, pkg_name, script_name)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\popen_spawn_win32.py", line 89, in init
File "G:\pythonAI\Miniconda3\Lib\runpy.py", line 85, in _run_code
reduction.dump(process_obj, to_child)
exec(code, run_globals)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\reduction.py", line 60, in dump
File "G:\pythonAI\inpainting_gmcnn\pytorch\train.py", line 34, in
ForkingPickler(file, protocol).dump(obj)
for i, data in enumerate(dataloader):
BrokenPipeError: [Errno 32] Broken pipe
File "G:\pythonAI\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in iter
return _DataLoaderIter(self)
File "G:\pythonAI\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in init
w.start()
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "G:\pythonAI\Miniconda3\Lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
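
On Windows, the usual fix this message points to is to guard the DataLoader loop with the main-module idiom (or to set num_workers=0 on the DataLoader). A minimal sketch of that guard, with a hypothetical dataset standing in for the repository's own; this is not the actual train.py:

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Hypothetical stand-in for the repository's dataset and training loop.
    dataset = TensorDataset(torch.zeros(16, 3, 256, 256))
    dataloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
    for i, data in enumerate(dataloader):
        pass  # one training step per batch would go here

if __name__ == '__main__':
    # Required on Windows: worker processes are started with spawn and
    # re-import the main module, so the loop must not run at import time.
    main()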

@qwerdbeta
Author

I finally got the PyTorch implementation working after several tweaks.

@patricegaofei

Hello @shepnerd and @qwerdbeta,
Please, I need your help. I have been trying to run the PyTorch version using only the image samples in "imgs". However, I have been stuck for many hours with the following error.

 python train.py --dataset celebahq_256x256 --data_file /home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/imgs/
------------ Options -------------
D_max_iters: 5
batch_size: 16
checkpoint_dir: ./checkpoints
d_cnum: 64
data_file: /home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/imgs/
dataset: celebahq_256x256
dataset_path: /home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/imgs/
date_str: 20201013-145021
epochs: 40
g_cnum: 32
gpu_ids: ['0']
img_shapes: [256, 256, 3]
lambda_adv: 0.001
lambda_ae: 1.2
lambda_gp: 10
lambda_mrf: 0.05
lambda_rec: 1.4
load_model_dir:
lr: 1e-05
margins: [0, 0]
mask_shapes: [128, 128]
mask_type: rect
max_delta_shapes: [32, 32]
model_folder: ./checkpoints/20201013-145021_GMCNN_celebahq_256x256_b16_s256x256_gc32_dc64_randmask-rect
model_name: GMCNN
padding: SAME
phase: train
pretrain_network: False
random_crop: True
random_mask: True
random_seed: False
spectral_norm: True
train_spe: 1000
vgg19_path: vgg19_weights/imagenet-vgg-verydeep-19.mat
viz_steps: 5
-------------- End ----------------
loading data..
Traceback (most recent call last):
  File "train.py", line 15, in <module>
    ToTensor()
  File "/home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/data/data.py", line 20, in __init__
    self.filenames = open(info_list, 'rt').read().splitlines()
IsADirectoryError: [Errno 21] Is a directory: '/home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/imgs/'

Please, how can I fix this error? My aim is to first get the code running, and then to train it with my own dataset. Any comments or suggestions would be highly appreciated.

Best regards,
Patrice

@Cristo-R

You need to create a text file (file.txt or any other name), then put the path of one dataset image on each line of that file, for example:
c:/xxx/1.png
c:/xxx/2.png
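
For example, a short script along these lines builds such a list for the imgs folder from the command above (the output file name is hypothetical):

import glob
import os

img_dir = '/home/gaofei/newResearch/Inpainting_new/inpainting_gmcnn-master/pytorch/imgs'

with open('imgs.flist', 'w') as f:
    for p in sorted(glob.glob(os.path.join(img_dir, '*'))):
        if os.path.isfile(p):
            f.write(os.path.abspath(p) + '\n')

# Then pass the list file, not the directory, to --data_file:
#   python train.py --dataset celebahq_256x256 --data_file imgs.flist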

@andy8744

andy8744 commented Jan 1, 2021

I got the same error as @qwerdbeta above (the Windows BrokenPipeError / freeze_support RuntimeError). How did you get the PyTorch implementation working? Thanks.

@marcomameli1992

marcomameli1992 commented Jan 19, 2021

Dear all, I do not understand how to use my own mask as input.
In particular, for the training stage, is only the image without a mask used as input, with the mask generated randomly? If so, does that mean the input images must be clean, without any damaged regions?
For testing, if I would like to use my own masks with the network, should they be provided as a separate file, or do I need to set the three color channels to white in the input image? If the mask image has to be given as an input in the test_option, which is the right parameter to use for it?
