VariationalAutoencoderRunner.py - Fixed gaussian_sample_size causes incompatible shapes #25

@daviddao

Description

I fixed the scaling issue in VariationalAutoencoderRunner.py (see #23). However, running the default example now produces the following:

Epoch: 0001 cost= 1114.439753835
Epoch: 0002 cost= 662.529461080
Epoch: 0003 cost= 594.752329830
Epoch: 0004 cost= 569.599913920
Epoch: 0005 cost= 556.361018750
Epoch: 0006 cost= 545.052694460
Epoch: 0007 cost= 537.334268253
Epoch: 0008 cost= 530.251896875
Epoch: 0009 cost= 523.817275994
Epoch: 0010 cost= 519.874919247
Epoch: 0011 cost= 514.975155966
Epoch: 0012 cost= 510.715168395
Epoch: 0013 cost= 506.326094318
Epoch: 0014 cost= 502.172605824
Epoch: 0015 cost= 498.612383310
Epoch: 0016 cost= 495.592024787
Epoch: 0017 cost= 493.580289986
Epoch: 0018 cost= 490.370449006
Epoch: 0019 cost= 489.957028977
Epoch: 0020 cost= 486.818214844
W tensorflow/core/common_runtime/executor.cc:1102] 0x27f47b0 Compute status: Invalid argument: Incompatible shapes: [10000,200] vs. [128,200]
         [[Node: Mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sqrt, random_normal)]]
W tensorflow/core/common_runtime/executor.cc:1102] 0x542b0b0 Compute status: Invalid argument: Incompatible shapes: [10000,200] vs. [128,200]
         [[Node: Mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sqrt, random_normal)]]
         [[Node: range_1/_29 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_226_range_1", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
W tensorflow/core/common_runtime/executor.cc:1102] 0x542b0b0 Compute status: Invalid argument: Incompatible shapes: [10000,200] vs. [128,200]
         [[Node: Mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Sqrt, random_normal)]]
         [[Node: add_1/_27 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_225_add_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Traceback (most recent call last):
  File "VariationalAutoencoderRunner.py", line 53, in <module>
    print "Total cost: " + str(autoencoder.calc_total_cost(X_test))

It seems that hard-coding gaussian_sample_size causes an error every time we evaluate a batch whose size differs from gaussian_sample_size; here the sampled noise keeps the training batch shape [128, 200] while the full test set produces a [10000, 200] tensor, so the Mul node fails.
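One way to avoid the mismatch would be to derive the noise shape from the current mini-batch instead of a fixed gaussian_sample_size. Below is a minimal sketch (not the repo's actual code) written against the TF 1.x-style API of that era; the names z_mean, z_log_sigma_sq, n_input, and n_hidden are assumptions used only for illustration:

```python
import tensorflow as tf

# Hypothetical sketch of the reparameterization step. Instead of
#   eps = tf.random_normal((gaussian_sample_size, n_hidden))
# the noise shape is taken from z_mean, so it always matches the
# batch currently being fed (128 during training, 10000 for X_test).

n_input = 784
n_hidden = 200

x = tf.placeholder(tf.float32, [None, n_input])

w_mean = tf.Variable(tf.truncated_normal([n_input, n_hidden], stddev=0.01))
b_mean = tf.Variable(tf.zeros([n_hidden]))
w_log_sigma = tf.Variable(tf.truncated_normal([n_input, n_hidden], stddev=0.01))
b_log_sigma = tf.Variable(tf.zeros([n_hidden]))

z_mean = tf.add(tf.matmul(x, w_mean), b_mean)
z_log_sigma_sq = tf.add(tf.matmul(x, w_log_sigma), b_log_sigma)

# Noise shaped like z_mean: [current_batch_size, n_hidden]
eps = tf.random_normal(tf.shape(z_mean), mean=0.0, stddev=1.0, dtype=tf.float32)
z = z_mean + tf.sqrt(tf.exp(z_log_sigma_sq)) * eps
```

With the noise tied to tf.shape(z_mean), calc_total_cost(X_test) should no longer trip over the [10000, 200] vs. [128, 200] shapes, since the Sqrt and random_normal operands of the Mul node always agree.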
