loading options from checkpoint pretrained_models/kth/ours_savp
----------------------------------- Options ------------------------------------
batch_size = 9
checkpoint = pretrained_models/kth/ours_savp
dataset = kth
dataset_hparams = sequence_length=30
fps = 4
gif_length = None
gpu_mem_frac = 0
input_dir = data/kth
mode = test
model = savp
model_hparams = None
num_epochs = 1
num_samples = None
num_stochastic_samples = 5
output_gif_dir = results_test_samples/kth/ours_savp
output_png_dir = results_test_samples/kth/ours_savp
results_dir = results_test_samples/kth
results_gif_dir = results_test_samples/kth
results_png_dir = results_test_samples/kth
seed = 7
------------------------------------- End --------------------------------------
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/datasets/kth_dataset.py:24: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: `tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/base_model.py:299: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/datasets/base_dataset.py:149: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/utils/tf_utils.py:139: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/savp_model.py:404: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/savp_model.py:361: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/utils/gif_summary.py:112: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means `tf.py_function`s can use accelerators such as GPUs as well as being differentiable using a gradient tape.
2019-04-04 16:03:17.120200: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
***Checkpoint : pretrained_models/kth/ours_savp
creating restore saver from checkpoint pretrained_models/kth/ours_savp
checkpoint variables that were not used for restoring because they are not in the graph:
    discriminator/video_sn_conv0_0/conv3d/bias
    discriminator/video_sn_conv0_0/conv3d/bias/Adam
    discriminator/video_sn_conv0_0/conv3d/bias/Adam_1
    discriminator/video_sn_conv0_0/conv3d/kernel
    discriminator/video_sn_conv0_0/conv3d/kernel/Adam
    discriminator/video_sn_conv0_0/conv3d/kernel/Adam_1
    discriminator/video_sn_conv0_0/conv3d/u
    discriminator/video_sn_conv0_1/conv3d/bias
    discriminator/video_sn_conv0_1/conv3d/bias/Adam
    discriminator/video_sn_conv0_1/conv3d/bias/Adam_1
    discriminator/video_sn_conv0_1/conv3d/kernel
    discriminator/video_sn_conv0_1/conv3d/kernel/Adam
    discriminator/video_sn_conv0_1/conv3d/kernel/Adam_1
    discriminator/video_sn_conv0_1/conv3d/u
    discriminator/video_sn_conv1_0/conv3d/bias
    discriminator/video_sn_conv1_0/conv3d/bias/Adam
    discriminator/video_sn_conv1_0/conv3d/bias/Adam_1
    discriminator/video_sn_conv1_0/conv3d/kernel
    discriminator/video_sn_conv1_0/conv3d/kernel/Adam
    discriminator/video_sn_conv1_0/conv3d/kernel/Adam_1
    discriminator/video_sn_conv1_0/conv3d/u
    discriminator/video_sn_conv1_1/conv3d/bias
    discriminator/video_sn_conv1_1/conv3d/bias/Adam
    discriminator/video_sn_conv1_1/conv3d/bias/Adam_1
    discriminator/video_sn_conv1_1/conv3d/kernel
    discriminator/video_sn_conv1_1/conv3d/kernel/Adam
    discriminator/video_sn_conv1_1/conv3d/kernel/Adam_1
    discriminator/video_sn_conv1_1/conv3d/u
    discriminator/video_sn_conv2_0/conv3d/bias
    discriminator/video_sn_conv2_0/conv3d/bias/Adam
    discriminator/video_sn_conv2_0/conv3d/bias/Adam_1
    discriminator/video_sn_conv2_0/conv3d/kernel
    discriminator/video_sn_conv2_0/conv3d/kernel/Adam
    discriminator/video_sn_conv2_0/conv3d/kernel/Adam_1
    discriminator/video_sn_conv2_0/conv3d/u
    discriminator/video_sn_conv2_1/conv3d/bias
    discriminator/video_sn_conv2_1/conv3d/bias/Adam
    discriminator/video_sn_conv2_1/conv3d/bias/Adam_1
    discriminator/video_sn_conv2_1/conv3d/kernel
    discriminator/video_sn_conv2_1/conv3d/kernel/Adam
    discriminator/video_sn_conv2_1/conv3d/kernel/Adam_1
    discriminator/video_sn_conv2_1/conv3d/u
    discriminator/video_sn_conv3_0/conv3d/bias
    discriminator/video_sn_conv3_0/conv3d/bias/Adam
    discriminator/video_sn_conv3_0/conv3d/bias/Adam_1
    discriminator/video_sn_conv3_0/conv3d/kernel
    discriminator/video_sn_conv3_0/conv3d/kernel/Adam
    discriminator/video_sn_conv3_0/conv3d/kernel/Adam_1
    discriminator/video_sn_conv3_0/conv3d/u
    discriminator/video_sn_fc4/dense/bias
    discriminator/video_sn_fc4/dense/bias/Adam
    discriminator/video_sn_fc4/dense/bias/Adam_1
    discriminator/video_sn_fc4/dense/kernel
    discriminator/video_sn_fc4/dense/kernel/Adam
    discriminator/video_sn_fc4/dense/kernel/Adam_1
    discriminator/video_sn_fc4/dense/u
    discriminator_encoder/video_sn_conv0_0/conv3d/bias
    discriminator_encoder/video_sn_conv0_0/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv0_0/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv0_0/conv3d/kernel
    discriminator_encoder/video_sn_conv0_0/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv0_0/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv0_0/conv3d/u
    discriminator_encoder/video_sn_conv0_1/conv3d/bias
    discriminator_encoder/video_sn_conv0_1/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv0_1/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv0_1/conv3d/kernel
    discriminator_encoder/video_sn_conv0_1/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv0_1/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv0_1/conv3d/u
    discriminator_encoder/video_sn_conv1_0/conv3d/bias
    discriminator_encoder/video_sn_conv1_0/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv1_0/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv1_0/conv3d/kernel
    discriminator_encoder/video_sn_conv1_0/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv1_0/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv1_0/conv3d/u
    discriminator_encoder/video_sn_conv1_1/conv3d/bias
    discriminator_encoder/video_sn_conv1_1/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv1_1/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv1_1/conv3d/kernel
    discriminator_encoder/video_sn_conv1_1/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv1_1/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv1_1/conv3d/u
    discriminator_encoder/video_sn_conv2_0/conv3d/bias
    discriminator_encoder/video_sn_conv2_0/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv2_0/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv2_0/conv3d/kernel
    discriminator_encoder/video_sn_conv2_0/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv2_0/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv2_0/conv3d/u
    discriminator_encoder/video_sn_conv2_1/conv3d/bias
    discriminator_encoder/video_sn_conv2_1/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv2_1/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv2_1/conv3d/kernel
    discriminator_encoder/video_sn_conv2_1/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv2_1/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv2_1/conv3d/u
    discriminator_encoder/video_sn_conv3_0/conv3d/bias
    discriminator_encoder/video_sn_conv3_0/conv3d/bias/Adam
    discriminator_encoder/video_sn_conv3_0/conv3d/bias/Adam_1
    discriminator_encoder/video_sn_conv3_0/conv3d/kernel
    discriminator_encoder/video_sn_conv3_0/conv3d/kernel/Adam
    discriminator_encoder/video_sn_conv3_0/conv3d/kernel/Adam_1
    discriminator_encoder/video_sn_conv3_0/conv3d/u
    discriminator_encoder/video_sn_fc4/dense/bias
    discriminator_encoder/video_sn_fc4/dense/bias/Adam
    discriminator_encoder/video_sn_fc4/dense/bias/Adam_1
    discriminator_encoder/video_sn_fc4/dense/kernel
    discriminator_encoder/video_sn_fc4/dense/kernel/Adam
    discriminator_encoder/video_sn_fc4/dense/kernel/Adam_1
    discriminator_encoder/video_sn_fc4/dense/u
    generator/encoder/layer_1/conv2d/bias/Adam
    generator/encoder/layer_1/conv2d/bias/Adam_1
    generator/encoder/layer_1/conv2d/kernel/Adam
    generator/encoder/layer_1/conv2d/kernel/Adam_1
    generator/encoder/layer_2/InstanceNorm/beta/Adam
    generator/encoder/layer_2/InstanceNorm/beta/Adam_1
    generator/encoder/layer_2/InstanceNorm/gamma/Adam
    generator/encoder/layer_2/InstanceNorm/gamma/Adam_1
    generator/encoder/layer_2/conv2d/bias/Adam
    generator/encoder/layer_2/conv2d/bias/Adam_1
    generator/encoder/layer_2/conv2d/kernel/Adam
    generator/encoder/layer_2/conv2d/kernel/Adam_1
    generator/encoder/layer_3/InstanceNorm/beta/Adam
    generator/encoder/layer_3/InstanceNorm/beta/Adam_1
    generator/encoder/layer_3/InstanceNorm/gamma/Adam
    generator/encoder/layer_3/InstanceNorm/gamma/Adam_1
    generator/encoder/layer_3/conv2d/bias/Adam
    generator/encoder/layer_3/conv2d/bias/Adam_1
    generator/encoder/layer_3/conv2d/kernel/Adam
    generator/encoder/layer_3/conv2d/kernel/Adam_1
    generator/encoder/z_log_sigma_sq/dense/bias/Adam
    generator/encoder/z_log_sigma_sq/dense/bias/Adam_1
    generator/encoder/z_log_sigma_sq/dense/kernel/Adam
    generator/encoder/z_log_sigma_sq/dense/kernel/Adam_1
    generator/encoder/z_mu/dense/bias/Adam
    generator/encoder/z_mu/dense/bias/Adam_1
    generator/encoder/z_mu/dense/kernel/Adam
    generator/encoder/z_mu/dense/kernel/Adam_1
    generator/rnn/dna_cell/cdna_kernels/dense/bias/Adam
    generator/rnn/dna_cell/cdna_kernels/dense/bias/Adam_1
    generator/rnn/dna_cell/cdna_kernels/dense/kernel/Adam
    generator/rnn/dna_cell/cdna_kernels/dense/kernel/Adam_1
    generator/rnn/dna_cell/h0/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h0/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h0/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h0/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h0/conv_pool2d/bias/Adam
    generator/rnn/dna_cell/h0/conv_pool2d/bias/Adam_1
    generator/rnn/dna_cell/h0/conv_pool2d/kernel/Adam
    generator/rnn/dna_cell/h0/conv_pool2d/kernel/Adam_1
    generator/rnn/dna_cell/h1/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h1/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h1/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h1/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h1/conv_pool2d/bias/Adam
    generator/rnn/dna_cell/h1/conv_pool2d/bias/Adam_1
    generator/rnn/dna_cell/h1/conv_pool2d/kernel/Adam
    generator/rnn/dna_cell/h1/conv_pool2d/kernel/Adam_1
    generator/rnn/dna_cell/h2/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h2/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h2/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h2/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h2/conv_pool2d/bias/Adam
    generator/rnn/dna_cell/h2/conv_pool2d/bias/Adam_1
    generator/rnn/dna_cell/h2/conv_pool2d/kernel/Adam
    generator/rnn/dna_cell/h2/conv_pool2d/kernel/Adam_1
    generator/rnn/dna_cell/h3/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h3/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h3/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h3/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h3/upsample_conv2d/bias/Adam
    generator/rnn/dna_cell/h3/upsample_conv2d/bias/Adam_1
    generator/rnn/dna_cell/h3/upsample_conv2d/kernel/Adam
    generator/rnn/dna_cell/h3/upsample_conv2d/kernel/Adam_1
    generator/rnn/dna_cell/h4/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h4/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h4/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h4/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h4/upsample_conv2d/bias/Adam
    generator/rnn/dna_cell/h4/upsample_conv2d/bias/Adam_1
    generator/rnn/dna_cell/h4/upsample_conv2d/kernel/Adam
    generator/rnn/dna_cell/h4/upsample_conv2d/kernel/Adam_1
    generator/rnn/dna_cell/h5/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h5/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h5/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h5/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h5/upsample_conv2d/bias/Adam
    generator/rnn/dna_cell/h5/upsample_conv2d/bias/Adam_1
    generator/rnn/dna_cell/h5/upsample_conv2d/kernel/Adam
    generator/rnn/dna_cell/h5/upsample_conv2d/kernel/Adam_1
    generator/rnn/dna_cell/h6_masks/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h6_masks/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h6_masks/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h6_masks/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h6_masks/conv2d/bias/Adam
    generator/rnn/dna_cell/h6_masks/conv2d/bias/Adam_1
    generator/rnn/dna_cell/h6_masks/conv2d/kernel/Adam
    generator/rnn/dna_cell/h6_masks/conv2d/kernel/Adam_1
    generator/rnn/dna_cell/h6_scratch/InstanceNorm/beta/Adam
    generator/rnn/dna_cell/h6_scratch/InstanceNorm/beta/Adam_1
    generator/rnn/dna_cell/h6_scratch/InstanceNorm/gamma/Adam
    generator/rnn/dna_cell/h6_scratch/InstanceNorm/gamma/Adam_1
    generator/rnn/dna_cell/h6_scratch/conv2d/bias/Adam
    generator/rnn/dna_cell/h6_scratch/conv2d/bias/Adam_1
    generator/rnn/dna_cell/h6_scratch/conv2d/kernel/Adam
    generator/rnn/dna_cell/h6_scratch/conv2d/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam_1
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/state/beta/Adam
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/state/beta/Adam_1
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/state/gamma/Adam
    generator/rnn/dna_cell/lstm_h0/basic_conv2dlstm_cell/state/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam_1
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/state/beta/Adam
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/state/beta/Adam_1
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/state/gamma/Adam
    generator/rnn/dna_cell/lstm_h1/basic_conv2dlstm_cell/state/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam_1
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/state/beta/Adam
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/state/beta/Adam_1
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/state/gamma/Adam
    generator/rnn/dna_cell/lstm_h2/basic_conv2dlstm_cell/state/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam_1
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/state/beta/Adam
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/state/beta/Adam_1
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/state/gamma/Adam
    generator/rnn/dna_cell/lstm_h3/basic_conv2dlstm_cell/state/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/input_transform_forget_output/beta/Adam_1
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/input_transform_forget_output/gamma/Adam_1
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/state/beta/Adam
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/state/beta/Adam_1
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/state/gamma/Adam
    generator/rnn/dna_cell/lstm_h4/basic_conv2dlstm_cell/state/gamma/Adam_1
    generator/rnn/dna_cell/lstm_z/basic_lstm_cell/bias/Adam
    generator/rnn/dna_cell/lstm_z/basic_lstm_cell/bias/Adam_1
    generator/rnn/dna_cell/lstm_z/basic_lstm_cell/kernel/Adam
    generator/rnn/dna_cell/lstm_z/basic_lstm_cell/kernel/Adam_1
    generator/rnn/dna_cell/masks/conv2d/bias/Adam
    generator/rnn/dna_cell/masks/conv2d/bias/Adam_1
    generator/rnn/dna_cell/masks/conv2d/kernel/Adam
    generator/rnn/dna_cell/masks/conv2d/kernel/Adam_1
    generator/rnn/dna_cell/scratch_image/conv2d/bias/Adam
    generator/rnn/dna_cell/scratch_image/conv2d/bias/Adam_1
    generator/rnn/dna_cell/scratch_image/conv2d/kernel/Adam
    generator/rnn/dna_cell/scratch_image/conv2d/kernel/Adam_1
    optimize/beta1_power
    optimize/beta1_power_1
    optimize/beta2_power
    optimize/beta2_power_1
    vgg/block1_conv1/bias
    vgg/block1_conv1/kernel
    vgg/block1_conv2/bias
    vgg/block1_conv2/kernel
    vgg/block2_conv1/bias
    vgg/block2_conv1/kernel
    vgg/block2_conv2/bias
    vgg/block2_conv2/kernel
    vgg/block3_conv1/bias
    vgg/block3_conv1/kernel
    vgg/block3_conv2/bias
    vgg/block3_conv2/kernel
    vgg/block3_conv3/bias
    vgg/block3_conv3/kernel
    vgg/block4_conv1/bias
    vgg/block4_conv1/kernel
    vgg/block4_conv2/bias
    vgg/block4_conv2/kernel
    vgg/block4_conv3/bias
    vgg/block4_conv3/kernel
    vgg/block5_conv1/bias
    vgg/block5_conv1/kernel
    vgg/block5_conv2/bias
    vgg/block5_conv2/kernel
    vgg/block5_conv3/bias
    vgg/block5_conv3/kernel
Traceback (most recent call last):
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match.
lhs shape= [3,3,32,1] rhs shape= [3,3,32,3]
	 [[{{node save/Assign_78}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "scripts/generate.py", line 202, in <module>
    main()
  File "scripts/generate.py", line 161, in main
    model.restore(sess, args.checkpoint)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/savp_model.py", line 855, in restore
    super(SAVPVideoPredictionModel, self).restore(sess, checkpoints, restore_to_checkpoint_mapping)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/base_model.py", line 247, in restore
    sess.run(restore_op)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match.
lhs shape= [3,3,32,1] rhs shape= [3,3,32,3]
	 [[node save/Assign_78 (defined at /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/utils/tf_utils.py:542) ]]

Caused by op 'save/Assign_78', defined at:
  File "scripts/generate.py", line 202, in <module>
    main()
  File "scripts/generate.py", line 161, in main
    model.restore(sess, args.checkpoint)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/savp_model.py", line 855, in restore
    super(SAVPVideoPredictionModel, self).restore(sess, checkpoints, restore_to_checkpoint_mapping)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/models/base_model.py", line 244, in restore
    restore_to_checkpoint_mapping=restore_to_checkpoint_mapping)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/utils/tf_utils.py", line 542, in get_checkpoint_restore_saver
    restore_saver = tf.train.Saver(max_to_keep=0, var_list=restore_and_checkpoint_vars, filename=checkpoint)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in __init__
    self.build()
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 844, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 881, in _build
    build_save=build_save, build_restore=build_restore)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
    restore_sequentially, reshape)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 354, in _AddRestoreOps
    assign_ops.append(saveable.restore(saveable_tensors, shapes))
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 73, in restore
    self.op.get_shape().is_fully_defined())
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 223, in assign
    validate_shape=validate_shape)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 64, in assign
    use_locking=use_locking, name=name)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/Users/shreyaskolpe/Documents/GitHub/video_prediction/myenv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match.
lhs shape= [3,3,32,1] rhs shape= [3,3,32,3]
	 [[node save/Assign_78 (defined at /Users/shreyaskolpe/Documents/GitHub/video_prediction/video_prediction/utils/tf_utils.py:542) ]]
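Stripped of the repeated tracebacks, the failure is a single shape disagreement on `save/Assign_78`: the variable built in the current graph (lhs) has shape `[3,3,32,1]` while the tensor stored in the checkpoint (rhs) has shape `[3,3,32,3]`. A minimal sketch of that comparison, in plain Python (the helper name `diagnose_mismatch` is made up for illustration; only the two shapes come from the log above):

```python
# Hypothetical helper mimicking the Saver's shape check, to pinpoint
# which axis of save/Assign_78 disagrees between graph and checkpoint.

def diagnose_mismatch(graph_shape, ckpt_shape):
    """Return the axes at which the in-graph variable and checkpoint tensor disagree."""
    if len(graph_shape) != len(ckpt_shape):
        # Ranks differ entirely; every axis is suspect.
        return list(range(max(len(graph_shape), len(ckpt_shape))))
    return [i for i, (g, c) in enumerate(zip(graph_shape, ckpt_shape)) if g != c]

graph_var = [3, 3, 32, 1]  # lhs: conv kernel built in the current graph
ckpt_var = [3, 3, 32, 3]   # rhs: same kernel in pretrained_models/kth/ours_savp

print(diagnose_mismatch(graph_var, ckpt_var))  # [3] -> only the last axis differs
```

Since only the channel-sized last axis differs (1 vs 3), one plausible reading is that the local KTH TFRecords were written with 1-channel (grayscale) frames while the pretrained ours_savp checkpoint was trained on 3-channel frames, so the graph is built with the wrong image depth; regenerating the data with 3-channel frames would be the first thing to try, though that depends on how the data was preprocessed. In TF 1.x, `tf.train.list_variables('pretrained_models/kth/ours_savp')` lists every checkpoint tensor with its shape and can help confirm which variable `Assign_78` targets.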