Generation of new graph from trained GVAE #53

Open
BerardinoB opened this issue Dec 27, 2019 · 3 comments
@BerardinoB

Dear @tkipf,
I really appreciate your work and would like to adapt your code to my own use case. In particular, I would like to generate new graphs by sampling from the learned latent space, as is usually done with images in variational autoencoder models: new data (images) can be generated by sampling from a latent space that is constrained to be normally distributed.
However, from your implementation it is not clear to me whether this can be done.
As far as I understand, the reconstruction of the original adjacency matrix is performed by an inner product of the embedded input z_mean. This implies that, in order to generate new graphs, I cannot simply sample from a standard normal distribution, since there would be no trained layers to decode the sample. Have I understood this correctly?
Is there another way to train your model so that I can sample from a normal distribution after training?

Thanks in advance for your precious help.
Best regards,

@haorannlp

You can sample from a standard normal distribution directly after training to generate new graphs.

@BerardinoB
Author

Dear @haorannlp,
as far as I understand, the way to use this model is the following:

  1. emb = sess.run(model.z_mean, feed_dict=feed_dict)
  2. adj_rec = np.dot(emb, emb.T)

Now, the problem is that, after training the model, I would like to generate new instances without using any graph as input to the model. To do this, I need to produce the "emb" and perform the dot product. However, I cannot figure out how to obtain this "emb" just by sampling from a standard normal distribution. In fact, from the code I see that:

self.z = self.z_mean + tf.random_normal([self.n_samples, FLAGS.hidden2]) * tf.exp(self.z_log_std)

is the quantity that should be normally distributed after training. Have I understood this correctly?
If the answer is yes and I sample from a normal distribution, I will obtain "z" and not "z_mean". Is that right?
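For what it's worth, the quoted line is the standard reparameterization trick, which can be sketched in plain NumPy (sizes here are illustrative choices of mine, not the repository defaults):

```python
import numpy as np

# Reparameterization from the quoted line:
# z = z_mean + eps * exp(z_log_std), with eps ~ N(0, I).
rng = np.random.default_rng(0)
n_samples, hidden2 = 4, 3  # illustrative sizes only
z_mean = rng.normal(size=(n_samples, hidden2))
z_log_std = rng.normal(size=(n_samples, hidden2))
eps = rng.normal(size=(n_samples, hidden2))   # the tf.random_normal term
z = z_mean + eps * np.exp(z_log_std)          # sampled latent z, not z_mean
```

So yes: what the model samples during training is "z", obtained by perturbing "z_mean" with scaled noise.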

@haorannlp

Sampling from a standard normal distribution: self.z = tf.random_normal([1, FLAGS.hidden2]). You don't need the encoder to generate new graphs; the decoder alone is enough.
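To make this concrete, here is a minimal decoder-only sketch in NumPy (my own sketch, not code from the repository): sample z from a standard normal, apply the inner-product decoder with a sigmoid, and threshold the resulting edge probabilities. The function name generate_graph and the 0.5 threshold are my own choices.

```python
import numpy as np

def generate_graph(n_nodes, hidden2, threshold=0.5, seed=0):
    """Sample z ~ N(0, I) and decode it with an inner-product decoder."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_nodes, hidden2))   # standard-normal latent codes
    logits = z @ z.T                          # inner-product decoder
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid -> edge probabilities
    adj = (probs > threshold).astype(int)     # hard adjacency matrix
    np.fill_diagonal(adj, 0)                  # drop self-loops
    return adj

adj = generate_graph(n_nodes=10, hidden2=16)
```

Since the decoder has no trained weights of its own, whether the generated graphs look realistic depends entirely on how close the learned posterior is to the standard-normal prior.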
