Dear @tkipf,
I really appreciate your work and would like to adapt your code to my own use case. In particular, I would like to generate new graphs by sampling from the learned latent space, as is usually done with images in Variational Autoencoder models: new data (images) can be generated by sampling from a latent space that is constrained to be normally distributed.
However, it is not clear to me whether this can be done with your implementation.
As far as I understand, the reconstruction of the original adjacency matrix is performed by an inner product of the embedded input z_mean. This implies that, in order to generate new graphs, I cannot sample from a standard normal distribution, since there would be no trained layers to apply to the sample. Did I understand correctly?
Is there another way to train your model so that one can sample from a normal distribution after training?
Thanks in advance for your help.
Best,
Dear Haorannlp,
As far as I understand, the way to use this model is the following:
emb = sess.run(model.z_mean, feed_dict=feed_dict)
adj_rec = np.dot(emb, emb.T)
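For reference, the inner-product decoder in the GAE model also passes the dot product through a sigmoid to obtain edge probabilities. A minimal NumPy sketch (the `emb` array of shape (num_nodes, hidden_dim) and the 0.5 threshold here are illustrative assumptions, not values from the repo):

```python
import numpy as np

def decode(emb, threshold=0.5):
    """Inner-product decoder: sigmoid(Z Z^T), thresholded to a binary adjacency matrix."""
    logits = emb @ emb.T                     # pairwise inner products of node embeddings
    probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid -> edge probabilities in (0, 1)
    return (probs > threshold).astype(int)   # binarize to a symmetric adjacency matrix

# Toy embeddings: nodes 0 and 1 point the same way, node 2 points away
emb = np.array([[2.0, 0.0],
                [2.0, 0.0],
                [-2.0, 1.0]])
adj = decode(emb)
```

Because `emb @ emb.T` is symmetric, the decoded adjacency is symmetric as well; nodes with similar embeddings (large positive inner product) are predicted to be connected.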
Now, the problem is that after training the model I would like to generate new instances without using any graph as input to the model. To do this, I need to produce "emb" and perform the dot product. However, I cannot figure out how to obtain "emb" just by sampling from a standard normal distribution. In fact, from the code I see that:
is the part that should be normally distributed after training. Did I understand correctly?
If the answer is yes and I sample from a normal distribution, I will obtain "z" and not "z_mean". Is that right?
Sampling from a standard normal distribution: self.z = tf.random_normal([1, FLAGS.hidden2]). You don't need the encoder to generate new graphs; the decoder alone is enough.
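To make the suggestion concrete, here is a small NumPy sketch of generating a graph without the encoder: sample one latent vector per node from N(0, I) and decode with the parameter-free inner-product decoder. The sizes (`num_nodes`, `hidden2`) and the 0.5 threshold are illustrative assumptions; `hidden2` plays the role of FLAGS.hidden2 in the repo:

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, hidden2 = 10, 16                  # hypothetical sizes

# One latent vector per node, drawn from a standard normal distribution
z = rng.standard_normal((num_nodes, hidden2))

# Inner-product decoder: sigmoid(Z Z^T) gives edge probabilities
probs = 1.0 / (1.0 + np.exp(-(z @ z.T)))
adj_new = (probs > 0.5).astype(int)          # binarized sampled adjacency matrix
```

Note that since the decoder has no trained weights, any z can be decoded this way; how realistic the sampled graphs look depends on how closely the learned posterior over z matches the standard normal prior after training.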