DeepNude's algorithm and general image-generation theory and practice research, including pix2pix, CycleGAN, DCGAN, and VAE models (TensorFlow2 implementation).


This repository contains the pix2pixHD algorithm (proposed by NVIDIA) used by DeepNude and, more importantly, the general image generation theory and practice behind DeepNude.


This resource includes the TensorFlow2 implementation of image generation models such as pix2pix, CycleGAN, DCGAN, and VAE.

What is DeepNude?

DeepNude uses a slightly modified version of the pix2pixHD GAN architecture (quoted from deepnude_official). pix2pixHD is a general-purpose Image2Image technology proposed by NVIDIA. DeepNude is clearly a misuse of artificial intelligence technology, but the Image2Image technology it relies on is very useful to researchers and developers working in other fields such as fashion, film, and visual effects.


Content of this resource

What is DeepNude?
Fake Image Generation and Image-to-Image Demo
DeepNude Algorithm
Image Generation Theoretical Research
Image Generation Practice Research
Future

Fake Image Generation Demo

This section provides a fake image generation demo that you can use as you wish. The images are fake faces generated by StyleGAN and carry no copyright issues. Note: each time you refresh the page a new fake image is generated, so remember to save the ones you want.


Image-to-Image Demo

This section provides an Image-to-Image demo: black-and-white stick figures converted into colorful cats, shoes, and handbags. DeepNude software mainly uses Image-to-Image technology, which in theory can convert the image you input into any image you want. You can experience Image-to-Image technology in your browser by clicking the Image-to-Image Demo link below.


Try Image-to-Image Demo

An example of using this demo is as follows:

In the box on the left, draw a cat as you imagine it, then click the process button to output a model-generated cat.


🔞 DeepNude Algorithm

DeepNude is pornographic software and must not be used by minors. If you are not interested in DeepNude itself, you can skip this section and go straight to the general Image-to-Image theory and practice in the following chapters.

DeepNude_software_itself — DeepNude software usage process and an evaluation of its advantages and disadvantages.
Official DeepNude Algorithm (based on PyTorch)

Image Generation Theoretical Research

This section describes DeepNude-related AI/deep learning theory (especially computer vision) research. If you enjoy reading and applying the latest papers, this section is for you.


Click here to systematically understand GAN

1. Pix2Pix


Image-to-Image Translation with Conditional Adversarial Networks, proposed by researchers at UC Berkeley, is a general-purpose solution that uses conditional adversarial networks for image-to-image translation problems.

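The key idea of the pix2pix objective is to combine an adversarial loss with an L1 reconstruction loss against the paired ground truth. A minimal NumPy sketch of that generator objective follows (not the repository's TensorFlow2 code; the function name and toy tensors are hypothetical, and the lambda = 100 weight is the value used in the paper):

```python
import numpy as np

def pix2pix_generator_loss(disc_fake_logits, fake_images, target_images, lam=100.0):
    """Pix2Pix generator objective: adversarial loss + lam * L1 loss.

    disc_fake_logits: discriminator logits for the generated images.
    lam: weight of the L1 term (the pix2pix paper uses lambda = 100).
    """
    # Non-saturating GAN loss: the generator wants D(fake) -> 1.
    # Sigmoid cross-entropy with target label 1, written out explicitly.
    probs = 1.0 / (1.0 + np.exp(-disc_fake_logits))
    gan_loss = -np.mean(np.log(probs + 1e-12))
    # L1 loss pulls outputs toward the paired ground-truth image.
    l1_loss = np.mean(np.abs(target_images - fake_images))
    return gan_loss + lam * l1_loss

# Toy example: a perfect generator output gives zero L1 loss, and a
# fully fooled discriminator (large logits) gives near-zero GAN loss.
target = np.ones((2, 4, 4, 3))
fake = np.ones((2, 4, 4, 3))
logits = np.full((2, 1), 10.0)
loss = pix2pix_generator_loss(logits, fake, target)
```

The L1 term is what keeps pix2pix outputs aligned with the input: the adversarial loss alone only enforces realism, not correspondence.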

2. Pix2PixHD

DeepNude mainly uses this Image-to-Image (Pix2PixHD) technology.


Pix2PixHD generates high-resolution images from semantic maps. A semantic map is a color image in which different color blocks represent different kinds of objects, such as pedestrians, cars, traffic signs, and buildings. Pix2PixHD takes a semantic map as input and produces a high-resolution, realistic image. Most earlier techniques could only produce rough, low-resolution images that did not look real. This research produces images at 2k-by-1k resolution, which is very close to full-HD photos.
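Semantic maps are usually fed to such a generator as one-hot label tensors rather than raw colors. A small NumPy sketch of that preprocessing step (the class labels here are hypothetical, and this is an illustration of the input format rather than the Pix2PixHD code itself):

```python
import numpy as np

def one_hot_label_map(label_map, num_classes):
    """Convert an integer semantic label map of shape (H, W) into a
    one-hot tensor of shape (H, W, num_classes), a common generator
    input format for semantic-map-to-image models."""
    h, w = label_map.shape
    one_hot = np.zeros((h, w, num_classes), dtype=np.float32)
    # Advanced indexing: set the channel given by each pixel's label to 1.
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return one_hot

# A 2x2 map with two hypothetical classes: 0 = road, 1 = car.
labels = np.array([[0, 1],
                   [1, 0]])
encoded = one_hot_label_map(labels, num_classes=2)
```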


3. CycleGAN


CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domain. This opens up the possibility to do a lot of interesting tasks like photo-enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset.
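The cycle consistency idea can be written down in a few lines: after translating A to B and back to A, the reconstruction should match the original. A minimal NumPy sketch of that loss term (the function name and toy tensors are hypothetical; the lambda = 10 weight is the value used in the CycleGAN paper):

```python
import numpy as np

def cycle_consistency_loss(real_a, reconstructed_a, real_b, reconstructed_b, lam=10.0):
    """CycleGAN cycle-consistency term: after mapping A -> B -> A (and
    B -> A -> B), the reconstruction should match the original input,
    measured with an L1 distance. lam is the loss weight."""
    loss_a = np.mean(np.abs(real_a - reconstructed_a))  # || F(G(a)) - a ||_1
    loss_b = np.mean(np.abs(real_b - reconstructed_b))  # || G(F(b)) - b ||_1
    return lam * (loss_a + loss_b)

a = np.zeros((1, 8, 8, 3))
b = np.ones((1, 8, 8, 3))
# A hypothetical perfect cycle reconstructs both inputs exactly,
# so the loss is zero.
loss = cycle_consistency_loss(a, a.copy(), b, b.copy())
```

This term is what removes the need for paired data: it constrains the two generators jointly even though no pixel-level correspondence between the domains is ever given.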


4. StyleGAN


Source A + Source B (Style) = ?

StyleGAN can not only generate fake images (Source A and Source B), it can also combine the content of Source A with the style of Source B at different levels of granularity, as described in the table below.


Style level (from Source B) | From Source A | Inherited from Source B
Coarse | All colors (eyes, hair, lighting) and fine facial features | High-level facial features such as pose, general hairstyle, face shape, and glasses
Middle | Pose, general face shape, and glasses | Middle-level facial features such as hairstyle and open/closed eyes
Fine | Main facial content | Fine facial features such as the color scheme and micro-structure
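Style mixing in StyleGAN amounts to feeding different latent codes to different synthesis layers: early (coarse) layers control pose and face shape, later (fine) layers control colors and micro-structure. A NumPy sketch of the mixing rule (the layer count and dimensions follow the 1024x1024 StyleGAN configuration; the function is a hypothetical illustration, not the generator itself):

```python
import numpy as np

NUM_LAYERS = 18    # StyleGAN at 1024x1024 uses 18 synthesis layers
LATENT_DIM = 512   # dimensionality of the intermediate latent w

def mix_styles(w_a, w_b, crossover):
    """Style mixing: layers below `crossover` take Source B's latent
    code (coarse styles: pose, face shape), the remaining layers take
    Source A's (fine styles: colors, micro-structure)."""
    mixed = w_a.copy()
    mixed[:crossover] = w_b[:crossover]
    return mixed

rng = np.random.default_rng(0)
w_a = rng.normal(size=(NUM_LAYERS, LATENT_DIM))
w_b = rng.normal(size=(NUM_LAYERS, LATENT_DIM))
mixed = mix_styles(w_a, w_b, crossover=4)   # coarse styles from Source B
```

Moving the crossover point later in the network shifts the result from inheriting Source B's pose and shape toward inheriting only its color scheme.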

5. Image Inpainting


In the interface shown in the Image_Inpainting(NVIDIA_2018).mp4 video, you only need to roughly paint over the unwanted content in an image; even if the masked shape is very irregular, NVIDIA's model can "restore" the image, filling the blank with very realistic content. It is one-click photo editing with "no Photoshop traces." The work comes from a team led by Guilin Liu at NVIDIA, who published a deep learning method that can edit images or reconstruct corrupted images, even when the images have holes or missing pixels. This was the state-of-the-art approach in 2018.


Image Generation Practice Research

These models are implemented with the latest TensorFlow2 APIs.

This section explains DeepNude-related AI/deep learning (especially computer vision) code practice. If you like to experiment, enjoy it.


1. Pix2Pix

Use the Pix2Pix model (conditional adversarial networks) to convert black-and-white stick figures into color images, flat house sketches into three-dimensional houses, and aerial images into maps.

Click Start Experience 1

2. Pix2PixHD

Under development. For now, you can use the official implementation.

3. CycleGAN

The CycleGAN neural network model is used to implement four functions: photo style transfer, photo effect enhancement, seasonal landscape change, and object transformation.


Click Start Experience 3


4. DCGAN

DCGAN is used for random-vector-to-image generation tasks, such as face generation.
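The "random vector to image" pipeline starts from a latent vector z and repeatedly doubles the spatial resolution until an RGB image comes out. A NumPy sketch that walks through the tensor shapes of a typical DCGAN generator (the 100-d latent and the 4x4x512 projection are the common DCGAN configuration; the function itself is a hypothetical shape illustration, not a trained network):

```python
import numpy as np

LATENT_DIM = 100   # DCGAN samples z from a 100-d normal distribution

def generator_shape_progression(batch_size):
    """Shape walk-through of a typical DCGAN generator: project the
    latent vector to a small feature map, then double the spatial size
    with each (transposed-convolution) layer until reaching 64x64x3."""
    z = np.random.normal(size=(batch_size, LATENT_DIM))
    shapes = [z.shape]
    h, w, c = 4, 4, 512                 # projection + reshape target
    shapes.append((batch_size, h, w, c))
    while h < 64:                       # each layer: stride-2 upsampling
        h, w, c = h * 2, w * 2, c // 2  # halve channels as size doubles
        if h == 64:
            c = 3                       # final layer outputs RGB
        shapes.append((batch_size, h, w, c))
    return shapes

shapes = generator_shape_progression(2)
```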

Click Start Experience 4

5. Variational Autoencoder (VAE)

VAE is used for random-vector-to-image generation tasks, such as face generation.
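What makes a VAE trainable as a generator is the reparameterization trick: the latent sample is written as a deterministic function of the encoder outputs plus external noise, so gradients flow through the mean and log-variance. A NumPy sketch of that trick and the matching KL term of the VAE loss (the toy tensors are hypothetical):

```python
import numpy as np

def reparameterize(mean, log_var, rng):
    """VAE reparameterization trick: sample z = mean + sigma * eps with
    eps ~ N(0, I), so gradients can flow through mean and log_var."""
    eps = rng.normal(size=mean.shape)
    return mean + np.exp(0.5 * log_var) * eps

def kl_divergence(mean, log_var):
    """KL(q(z|x) || N(0, I)) term of the VAE loss, per batch element."""
    return -0.5 * np.sum(1.0 + log_var - mean**2 - np.exp(log_var), axis=-1)

rng = np.random.default_rng(0)
mean = np.zeros((2, 16))
log_var = np.zeros((2, 16))          # log_var = 0 means unit variance
z = reparameterize(mean, log_var, rng)
kl = kl_divergence(mean, log_var)    # zero when q already equals the prior
```

At generation time the encoder is discarded entirely: you sample z from N(0, I) and run only the decoder, which is why a trained VAE can turn random numbers into faces.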

Click Start Experience 5

6. Neural style transfer

Use VGG19 to implement image style transfer effects, such as turning photos into oil paintings or comics.
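In VGG19-based style transfer, "style" is measured by the Gram matrix of a feature map: channel-by-channel correlations that ignore spatial layout. A NumPy sketch of the Gram matrix and the resulting style loss (the toy feature maps are hypothetical stand-ins for VGG19 activations):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (H, W, C) feature map: channel-by-channel
    correlations, which capture texture/style independent of where
    things appear in the image."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(features_a, features_b):
    """Mean squared difference of Gram matrices, the core of the
    VGG19-based style loss (summed over several layers in practice)."""
    return np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2)

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 8, 4))
# Identical feature maps have identical Gram matrices, so zero style loss.
loss_same = style_loss(f, f)
```

The content loss, by contrast, compares the raw feature maps directly; optimizing a weighted sum of the two losses over the input pixels produces the "photo to oil painting" effect.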

Click Start Experience 6


If you are a PaddlePaddle user, you can refer to the Paddle versions of the above models in the image generation model library PaddleGAN.


