
How to transform my own sketch to latent z ? #12

Open
hahaCui opened this issue Feb 23, 2022 · 5 comments

hahaCui commented Feb 23, 2022

Hi! Glad to see your work!
But I have a question, as follows.

Consider practical usage:
step 1: I make a cat sketch image by hand.
step 2: transform the sketch image to latent_z.
step 3: feed latent_z to the netG network to get a cat image.

I'm wondering how to realize step 2. Do you mean that I need netG, the photo2Sketch network, and the pix2latent method? Or only netG and the pix2latent method?
If I just use netG to get z, it will still generate a cat sketch, not a cat image, won't it?

Thanks!

@PeterWang512
Owner

I think you are asking about two things in our paper.
(1) Our method takes in one or a few cat sketches and updates the entire generator to synthesize endless cats with similar shapes and poses.
(2) We show an application that transforms a real cat image into latent_z. Note that this projection is done on the original cat network. We can then feed the latent_z into the newly created generator to achieve an image manipulation effect (i.e., changing the shape and pose of the real cat into the one depicted by the sketch).
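The project-then-swap workflow in (2) can be sketched roughly as follows. This is an illustrative toy example, not the paper's actual code: `ToyGenerator` stands in for the pre-trained cat generator, and plain MSE replaces the perceptual losses that real projection methods such as pix2latent typically use.

```python
import torch
import torch.nn as nn

# Toy stand-in for a generator; in the real workflow this would be
# the original pre-trained cat model (names here are illustrative).
class ToyGenerator(nn.Module):
    def __init__(self, z_dim=8, img_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, img_dim))

    def forward(self, z):
        return self.net(z)

def project_to_latent(generator, target, z_dim=8, steps=200, lr=0.05):
    """Optimize z so that generator(z) reconstructs the target image."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach()

torch.manual_seed(0)
G_original = ToyGenerator()   # original cat model (used for projection)
G_sketch = ToyGenerator()     # sketch-customized model (used for synthesis)

# Pretend "real cat photo": an image the original generator can produce.
target = G_original(torch.randn(1, 8)).detach()

z = project_to_latent(G_original, target)  # project photo -> latent_z
edited = G_sketch(z)                       # feed z to the new generator
```

The key point matching the reply above: the optimization runs against the original generator, and only afterwards is the recovered `z` fed into the newly created one.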


hahaCui commented Feb 24, 2022

Thanks for your reply! Now I get it.

If so, suppose I use hundreds of cat sketches, each with a different pose and shape from the others. I'm afraid there may be two problems:
(1) Just as in your reply to @qingqingisrunning, if I use all the sketches to train the sketch GAN, the inference result may be strange;
(2) But if I train one model for each cat sketch, it consumes a lot of time and computation!

Looking forward to your future research!

@PeterWang512
Owner

Yes, these are great points. Unfortunately, our method is currently not capable of fast model creation. Speeding up the model creation process would be a great direction for future work.


Zeeshan75 commented Aug 9, 2022

Hi @PeterWang512, thanks for sharing your work. It's interesting!

I have gone through your recent GAN warping repo, where I found the saved latent spaces for cats; you were doing warping and edits using those .npz latent-space files.

Can you give me suggestions or an approach for the questions below:

  1. How can we generate the latent space from an input cat image using your pre-trained cat model?
  2. How can we transform the above latent space (or train the model) to generate a cartoon (toonified) cat image?

Thanks in advance.

@jingnian-yxq

I tried pix2latent to get latent_z, but the provided model is for size 512 and my images are size 256. It reports "RuntimeError: mat1 dim 1 must match mat2 dim 0". What should I change to make it work with 256x256 images?
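One likely cause (an assumption; I have not checked the pix2latent internals) is that a fully connected layer in the 512-resolution checkpoint receives a feature vector of the wrong size when fed 256x256 inputs. A simple workaround to try is upsampling the images to the resolution the checkpoint expects before projection:

```python
import torch
import torch.nn.functional as F

# Hypothetical 256x256 input batch; upsample to the 512 resolution
# that the provided checkpoint expects before running projection.
img_256 = torch.randn(1, 3, 256, 256)
img_512 = F.interpolate(img_256, size=(512, 512),
                        mode="bilinear", align_corners=False)
print(img_512.shape)  # torch.Size([1, 3, 512, 512])
```

Alternatively, a 256-resolution checkpoint (if one exists for this model) would avoid the mismatch entirely.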
