inversion doesn't look like the face of img source #2

Closed · molo32 opened this issue Feb 28, 2021 · 4 comments

molo32 commented Feb 28, 2021

The inversion doesn't look like the face in the source image. How can I make it look more like the source image?

omertov (Owner) commented Feb 28, 2021

Hi @molo32,
Can you provide further details? Have you performed the required face alignment?
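
For reference, the alignment step looks roughly like the sketch below. It assumes dlib is installed, the 68-landmark model file has been downloaded, and the repository's utils.alignment.align_face helper is available; the exact notebook code may differ slightly.

# Rough sketch of the face alignment step (assumptions noted above).
import dlib
from utils.alignment import align_face  # helper shipped with this repository

def run_alignment(image_path):
    # Detect facial landmarks, then crop and align the face FFHQ-style.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    aligned_image = align_face(filepath=image_path, predictor=predictor)
    print("Aligned image has shape: {}".format(aligned_image.size))
    return aligned_image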

molo32 (Author) commented Feb 28, 2021

import time

import numpy as np
import torch
from PIL import Image

# resize_dims, experiment_type, EXPERIMENT_ARGS, net, run_alignment and
# tensor2im are all defined earlier in the inference notebook.
image_path = "/content/8.jpg"
original_image = Image.open(image_path).convert("RGB")  # unaligned original, kept for reference
input_image = run_alignment(image_path)  # dlib-based face alignment

def run_on_batch(inputs, net):
    # Encode the batch and reconstruct it through the generator in one pass.
    images, latents = net(inputs.to("cuda").float(), randomize_noise=False, return_latents=True)
    if experiment_type == 'cars_encode':
        images = images[:, :, 32:224, :]
    return images, latents

def display_alongside_source_image(result_image, source_image):
    # Put the source and the reconstruction side by side for comparison.
    res = np.concatenate([np.array(source_image.resize(resize_dims)),
                          np.array(result_image.resize(resize_dims))], axis=1)
    return Image.fromarray(res)

input_image = input_image.resize(resize_dims)  # resize() returns a new image, so keep the result
img_transforms = EXPERIMENT_ARGS['transform']
transformed_image = img_transforms(input_image)

with torch.no_grad():
    tic = time.time()
    images, latents = run_on_batch(transformed_image.unsqueeze(0), net)
    result_image, latent = images[0], latents[0]
    toc = time.time()
    print('Inference took {:.4f} seconds.'.format(toc - tic))

# Display inversion:
display_alongside_source_image(tensor2im(result_image), input_image)

[attached image: the inversion displayed alongside the source]

omertov (Owner) commented Mar 1, 2021

It looks like you ran our encoder correctly.

Generally speaking, our pretrained e4e encoder is specifically designed to balance the tradeoffs that exist in StyleGAN's latent space (see our paper for further details and examples).
By doing so, we lose some reconstruction accuracy in exchange for more editable latent codes (which can be better used by existing latent-space manipulation techniques, StyleFlow for example) compared to other inversion methods.

If exact reconstruction is what you seek, direct optimization will always yield the best results; alternatively, you can control the tradeoff yourself according to your needs.
For example, you can train the encoder to favor reconstruction over editability by not using the latent codes discriminator or by tuning the progressive training parameters.
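
As a rough illustration of the direct-optimization route, here is a generic sketch rather than code from this repository: G stands for any pretrained StyleGAN generator that maps a [1, 18, 512] W+ code to an image, target is the aligned input as a [1, 3, H, W] tensor in [-1, 1], and the perceptual loss comes from the lpips package.

# Generic sketch of direct latent optimization for a single image.
# G and target are assumed to exist; they are not part of this repository's API.
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

percept = lpips.LPIPS(net='vgg').cuda()

# Start from the encoder's inversion (the latents computed above),
# or from the generator's mean latent.
w = latents[0].unsqueeze(0).clone().detach().requires_grad_(True)
optimizer = torch.optim.Adam([w], lr=0.01)

for step in range(500):
    img = G(w)  # synthesize an image from the current latent code
    loss = percept(img, target).mean() + 0.1 * F.mse_loss(img, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# w now reconstructs the target more faithfully, typically at the cost of editability.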

woctezuma commented Mar 1, 2021

Relevant: rolux/stylegan2encoder#2 (comment) (posted in January 2020)

It took me a while to appreciate the fact that encoder output can have high visual quality, but bad semantics.

That is the kind of idea that you find in the paper: a good inversion is the result of a trade-off between i) perception (visual quality in terms of a realistic output), ii) distortion (visual quality in terms of an output close to the input), and iii) edit-ability (semantics).

If you look at the projected face of Angelina Jolie, you can see that it looks like a human face (perception), it somewhat resembles Angelina Jolie (distortion), and it should hopefully change according to plan if you try to edit it (edit-ability).

Closely related, if you want to get an idea of what to expect from projections as implemented:

  • in the original StyleGAN2 paper (W or W(1,*)),
  • in its forks (W+ or W(18,*)), which predate encoder4editing,

then you can check the results shown in the README of my repository: https://github.com/woctezuma/stylegan2-projecting-images

Basically, the more constrained the projection, the higher the distortion, but the better the output behaves.
With encoder4editing, one has access to a smart way to constrain the projection. Plus, the projection is fast.
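
To make the W versus W+ distinction concrete, here is a minimal, purely illustrative sketch; the shapes assume a 1024x1024 StyleGAN2 generator with 18 style inputs, and the variable names are mine rather than from any particular codebase.

import torch

num_layers, latent_dim = 18, 512  # a 1024x1024 StyleGAN2 has 18 style inputs

# W projection: one 512-d code shared by every layer
# (more constrained, higher distortion, better-behaved edits).
w = torch.randn(1, latent_dim, requires_grad=True)
w_broadcast = w.unsqueeze(1).repeat(1, num_layers, 1)  # shape [1, 18, 512]

# W+ projection: an independent code per layer (more expressive, lower
# distortion, but the result can drift off the well-behaved part of the space).
w_plus = torch.randn(1, num_layers, latent_dim, requires_grad=True)

Roughly speaking, encoder4editing predicts W+ codes while encouraging them to stay close to W, which is the "smart way to constrain the projection" mentioned above.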

omertov closed this as completed Apr 6, 2021