Rewrite the example of VAE using Chainer distributions #5356

Merged · 19 commits · Nov 21, 2018

Changes from 1 commit · +8 −13
change the implementation of `Prior` by inheriting `chainer.Link` rather than `chainer.Chain`

ganow committed Nov 12, 2018
commit 39f8620760142d3873790a112332a3b72c3b50f2
@@ -85,19 +85,14 @@ def forward(self, z, inference=False):
         return D.Bernoulli(logit=h, binary_check=self.binary_check)
 
 
-class Prior(chainer.Chain):
+class Prior(chainer.Link):
 
-    def __init__(self, n_latent, dtype=np.float32, device=-1):
+    def __init__(self, n_latent):
         super(Prior, self).__init__()
 
-        loc = np.zeros(n_latent, dtype=dtype)
-        scale = np.ones(n_latent, dtype=dtype)
-        if device != -1:
-            loc = cuda.to_gpu(loc, device=device)
-            scale = cuda.to_gpu(scale, device=device)
-
-        self.loc = chainer.Variable(loc)
-        self.scale = chainer.Variable(scale)
+        with self.init_scope():
+            self.loc = chainer.Parameter(0, n_latent)
+            self.scale = chainer.Parameter(1, n_latent)
This conversation was marked as resolved by toslunar

@toslunar (Member) commented on Nov 13, 2018:

-        with self.init_scope():
-            self.loc = chainer.Parameter(0, n_latent)
-            self.scale = chainer.Parameter(1, n_latent)
+        self.loc = np.zeros(n_latent, np.float32)
+        self.scale = np.ones(n_latent, np.float32)

(Persistent values are not variables.)

@ganow (Author, Contributor) replied on Nov 13, 2018 (comment minimized).
@toslunar (Member) commented on Nov 16, 2018:

The lines

        self.register_persistent('loc')
        self.register_persistent('scale')

are needed.

@ganow (Author, Contributor) commented on Nov 16, 2018:

I misunderstood your recommendation. I fixed it, thank you. 61565f4


     def forward(self):
         return D.Normal(self.loc, scale=self.scale)
@@ -111,5 +106,5 @@ def make_decoder(n_in, n_latent, n_h, binary_check=False):
     return Decoder(n_in, n_latent, n_h, binary_check=binary_check)
 
 
-def make_prior(n_latent, dtype=np.float32, device=-1):
-    return Prior(n_latent, dtype=dtype, device=device)
+def make_prior(n_latent):
+    return Prior(n_latent)
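
Taken together with toslunar's review comments above (plain NumPy arrays registered as persistent values instead of chainer.Parameter), the refactored prior ends up roughly as sketched below. This is a minimal sketch of the intended end state, not necessarily the exact code of the merged commit:

    import numpy as np

    import chainer
    import chainer.distributions as D


    class Prior(chainer.Link):
        """Fixed standard-normal prior p(z) = N(0, I)."""

        def __init__(self, n_latent):
            super(Prior, self).__init__()

            # Plain arrays, not chainer.Parameter: the prior is fixed, so
            # the optimizer must not update loc and scale.
            self.loc = np.zeros(n_latent, np.float32)
            self.scale = np.ones(n_latent, np.float32)

            # Registering them as persistent values lets Link.to_gpu() and
            # the serializers move/save them with the rest of the model.
            self.register_persistent('loc')
            self.register_persistent('scale')

        def forward(self):
            return D.Normal(self.loc, scale=self.scale)


    def make_prior(n_latent):
        return Prior(n_latent)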
@@ -53,7 +53,7 @@ def main():
     encoder = net.make_encoder(784, args.dim_z, args.dim_h)
     decoder = net.make_decoder(784, args.dim_z, args.dim_h,
                                binary_check=args.binary)
-    prior = net.make_prior(args.dim_z, device=args.gpu)
+    prior = net.make_prior(args.dim_z)
     avg_elbo_loss = net.AvgELBOLoss(encoder, decoder, prior,
                                     beta=args.beta, k=args.k)
This conversation was marked as resolved by ganow

@toslunar (Member) commented on Nov 13, 2018:

Suggested change:

-                                    beta=args.beta, k=args.k)
+                                    beta=args.beta, k=args.k)
+    if args.gpu >= 0:
+        avg_elbo_loss.to_gpu(args.gpu)
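
With that suggestion applied, the device handling in train_vae.py looks roughly like the fragment below (a sketch of the relevant part of main(), assuming the argument names already used in the script). Because Link.to_gpu() also transfers registered persistent arrays, moving the whole loss link covers the prior, so make_prior() no longer needs a device argument:

    # Inside main(), after argument parsing (args.dim_z, args.dim_h,
    # args.binary, args.beta, args.k, args.gpu as in the existing script).
    encoder = net.make_encoder(784, args.dim_z, args.dim_h)
    decoder = net.make_decoder(784, args.dim_z, args.dim_h,
                               binary_check=args.binary)
    prior = net.make_prior(args.dim_z)
    avg_elbo_loss = net.AvgELBOLoss(encoder, decoder, prior,
                                    beta=args.beta, k=args.k)
    if args.gpu >= 0:
        # Moves parameters and persistent values (including the prior's
        # loc/scale) of all child links to the selected GPU, assuming
        # AvgELBOLoss registers encoder, decoder and prior in init_scope().
        avg_elbo_loss.to_gpu(args.gpu)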
