20 changes: 11 additions & 9 deletions README.md
@@ -1,11 +1,11 @@
# DeepCreamPy
*Decensoring Hentai with Deep Neural Networks.*

-## DeepCreamPyV2--a major upgrade over DeepCreamPyV1--is under construction.
+## DeepCreamPy 2.0--a major upgrade over DeepCreamPy 1.3--is under construction.

-## Please bear with me. Many, many things will be broken.*
+## DeepCreamPy 2.0 can be used by running the code yourself.

-## All available binaries are outdated. Wait for the next release.
+## No binary is available for DeepCreamPy 2.0 yet. Wait for the next release.

[![GitHub release](https://img.shields.io/github/release/deeppomf/DeepCreamPy.svg)](https://github.com/deeppomf/DeepCreamPy/releases/latest)
[![GitHub downloads](https://img.shields.io/github/downloads/deeppomf/DeepCreamPy/latest/total.svg)](https://github.com/deeppomf/DeepCreamPy/releases/latest)
@@ -17,7 +17,7 @@

A deep learning-based tool to automatically replace censored artwork in hentai with plausible reconstructions.

-Before can be DeepCreamPy used, the user must color censored regions in their hentai green in an image editing program like GIMP or Photoshop. DeepCreamPy takes the green colored images as input, and a neural network autommatically fills in the censored regions.
+Before DeepCreamPy can be used, the user must color censored regions in their hentai green in an image editing program like GIMP or Photoshop. DeepCreamPy takes the green-colored images as input, and a neural network automatically fills in the censored regions.
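The green-mask workflow described above can be sketched in a few lines of NumPy and Pillow. This is an illustrative reconstruction, not the project's actual implementation; `find_mask` and `MASK_COLOR` are hypothetical names, assuming pure green (0, 255, 0) as the default mask color:

```python
import numpy as np
from PIL import Image

MASK_COLOR = (0, 255, 0)  # pure green, the assumed default mask color

def find_mask(image_path):
    """Return a boolean array that is True wherever the image is pure green."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"))
    return np.all(rgb == MASK_COLOR, axis=-1)

# Tiny demo: a 2x2 image with exactly one green pixel
img = Image.fromarray(np.array(
    [[[0, 255, 0], [10, 20, 30]],
     [[255, 255, 255], [0, 0, 0]]], dtype=np.uint8))
img.save("tiny.png")
print(find_mask("tiny.png").sum())  # -> 1
```

Because PNG is lossless, the saved green pixels survive the round trip exactly; a lossy format like JPEG would smear the mask color, which is one reason the tool expects PNG input.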

DeepCreamPy has a pre-built binary for Windows 64-bit available [here](https://github.com/deeppomf/DeepCreamPy/releases/latest). DeepCreamPy's code works on Windows, Mac, and Linux.

@@ -30,8 +30,7 @@ Please before you open a new issue check [closed issues](https://github.com/deep
## Features
- Decensoring images of ANY size
- Decensoring of ANY shaped censor (e.g. black lines, pink hearts, etc.)
-- Higher quality decensors
-- Support for mosaic decensors
+- Decensoring of mosaic decensors

## Limitations
Decensoring works on color hentai images with minor to moderate censorship of the penis or vagina. If a vagina or penis is completely censored out, decensoring will be ineffective.
@@ -46,7 +45,7 @@ It does NOT work with:

## Table of Contents
Setup:
-* [Running latest Windows 64-bit release](docs/INSTALLATION_BINARY.md)
+* [UNDER CONSTRUCTION: Running latest Windows 64-bit release](docs/INSTALLATION_BINARY.md)
* [Running code yourself](docs/INSTALLATION.md)

Usage:
@@ -60,14 +59,17 @@ Miscellaneous:
## To do
- Resolve all Tensorflow compatibility problems
- Finish the user interface
- Add support for black and white images
- Add error log

Follow me on Twitter [@deeppomf](https://twitter.com/deeppomf) (NSFW Tweets) for project updates.

-Contributions are welcome! Special thanks to ccppoo, IAmTheRedSpy, 0xb8, deniszh, Smethan, mrmajik45, harjitmoe, itsVale, StartleStars, and SoftArmpit!
+## Contributions
+Contributions are closed for the near future.
+
+Special thanks to ccppoo, IAmTheRedSpy, 0xb8, deniszh, Smethan, mrmajik45, harjitmoe, itsVale, StartleStars, and SoftArmpit for their contributions!

## License
Please see the [EULA](EULA.txt) for license details.

## Acknowledgements
Example mermaid image by Shurajo & AVALANCHE Game Studio under [CC BY 3.0 License](https://creativecommons.org/licenses/by/3.0/). The example image is modified from the original, which can be found [here](https://opengameart.org/content/mermaid).
38 changes: 38 additions & 0 deletions config.py
@@ -0,0 +1,38 @@
import argparse

def str2floatarr(v):
    if type(v) == str:
        try:
            return [float(x) for x in v.split(',')]
        except ValueError:
            raise argparse.ArgumentTypeError('Floats separated by commas expected.')
    else:
        raise argparse.ArgumentTypeError('Floats separated by commas expected.')

def str2bool(v):
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    else:
        raise argparse.ArgumentTypeError('Boolean value expected.')

def get_args():
    parser = argparse.ArgumentParser(description='')

    #Input/output folder settings
    parser.add_argument('--decensor_input_path', dest='decensor_input_path', default='./decensor_input/', help='path to input images with censored regions colored green, to be decensored by decensor.py')
    parser.add_argument('--decensor_input_original_path', dest='decensor_input_original_path', default='./decensor_input_original/', help='path to unmodified input images, to be decensored by decensor.py')
    parser.add_argument('--decensor_output_path', dest='decensor_output_path', default='./decensor_output/', help='path for output images generated by decensor.py')

    #Decensor settings
    parser.add_argument('--mask_color_red', dest='mask_color_red', default=0, type=int, help='red channel of mask color used in decensoring')
    parser.add_argument('--mask_color_green', dest='mask_color_green', default=255, type=int, help='green channel of mask color used in decensoring')
    parser.add_argument('--mask_color_blue', dest='mask_color_blue', default=0, type=int, help='blue channel of mask color used in decensoring')
    parser.add_argument('--is_mosaic', dest='is_mosaic', default='False', type=str2bool, help='true if the image has mosaic censoring, false otherwise')

    args = parser.parse_args()
    return args

if __name__ == '__main__':
    get_args()
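The custom `str2bool` converter in config.py exists because argparse's built-in `type=bool` treats every non-empty string, including "False", as truthy. A minimal self-contained demonstration of the difference (the `--naive`/`--safe` flag names are hypothetical, chosen just for this sketch):

```python
import argparse

def str2bool(v):
    # Same idea as config.py's converter: map common true/false spellings.
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    raise argparse.ArgumentTypeError('Boolean value expected.')

parser = argparse.ArgumentParser()
parser.add_argument('--naive', type=bool, default=False)     # pitfall
parser.add_argument('--safe', type=str2bool, default=False)  # converter

args = parser.parse_args(['--naive', 'False', '--safe', 'False'])
print(args.naive)  # -> True  (bool('False') is True: any non-empty string)
print(args.safe)   # -> False (converter parses the spelling)
```

This is also why `--is_mosaic` above uses `default='False'` with `type=str2bool` rather than a plain boolean default.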
251 changes: 251 additions & 0 deletions decensor.py
@@ -0,0 +1,251 @@
#!/usr/bin/env python3

try:
    import numpy as np
    from PIL import Image

    import os
    from copy import deepcopy

    import config
    import file
    from model import InpaintNN
    from libs.utils import *
except ImportError as e:
    print("Error when importing libraries: ", e)
    print("Some Python libraries are missing. You can install all requirements by running 'pip install -r requirements.txt' in the command line.")
    exit(1)

class Decensor:

    def __init__(self):
        self.args = config.get_args()
        self.is_mosaic = self.args.is_mosaic

        self.mask_color = [self.args.mask_color_red/255.0, self.args.mask_color_green/255.0, self.args.mask_color_blue/255.0]

        if not os.path.exists(self.args.decensor_output_path):
            os.makedirs(self.args.decensor_output_path)

        self.load_model()

    def get_mask(self, colored):
        mask = np.ones(colored.shape, np.uint8)
        i, j = np.where(np.all(colored[0] == self.mask_color, axis=-1))
        mask[0, i, j] = 0
        return mask

    def load_model(self):
        self.model = InpaintNN(bar_model_name="./models/bar/Train_775000.meta",
                               bar_checkpoint_name="./models/bar/",
                               mosaic_model_name="./models/mosaic/Train_290000.meta",
                               mosaic_checkpoint_name="./models/mosaic/",
                               is_mosaic=self.is_mosaic)

    def decensor_all_images_in_folder(self):
        #load the model once at the beginning and reuse it
        #self.load_model()
        color_dir = self.args.decensor_input_path
        file_names = os.listdir(color_dir)

        input_dir = self.args.decensor_input_path
        output_dir = self.args.decensor_output_path

        # Change False to True before release --> file.check_file(input_dir, output_dir, True)
        file_names, self.files_removed = file.check_file(input_dir, output_dir, False)

        #convert all images into np arrays and put them in a list
        for file_name in file_names:
            color_file_path = os.path.join(color_dir, file_name)
            color_bn, color_ext = os.path.splitext(file_name)
            if os.path.isfile(color_file_path) and color_ext.casefold() == ".png":
                print("--------------------------------------------------------------------------")
                print("Decensoring the image {}".format(color_file_path))
                try:
                    colored_img = Image.open(color_file_path)
                except Exception:
                    print("Cannot identify image file (" + str(color_file_path) + ")")
                    self.files_removed.append((color_file_path, 3))
                    # in case of an abnormal file format change (e.g. text.txt -> text.png)
                    continue

                #if we are doing a mosaic decensor
                if self.is_mosaic:
                    #get the original file that hasn't been colored
                    ori_dir = self.args.decensor_input_original_path
                    #since the original image might not be a png, test multiple file formats
                    valid_formats = {".png", ".jpg", ".jpeg"}
                    for test_file_name in os.listdir(ori_dir):
                        test_bn, test_ext = os.path.splitext(test_file_name)
                        if (test_bn == color_bn) and (test_ext.casefold() in valid_formats):
                            ori_file_path = os.path.join(ori_dir, test_file_name)
                            ori_img = Image.open(ori_file_path)
                            # colored_img.show()
                            self.decensor_image(ori_img, colored_img, file_name)
                            break
                    else: #for...else, i.e. the loop finished without encountering a break
                        print("Corresponding original, uncolored image not found for {}".format(color_file_path))
                        print("Check if it exists and is in the PNG or JPG format.")
                else:
                    self.decensor_image(colored_img, colored_img, file_name)
            else:
                print("--------------------------------------------------------------------------")
                print("Irregular file detected: " + str(color_file_path))
        print("--------------------------------------------------------------------------")
        if self.files_removed is not None:
            file.error_messages(None, self.files_removed)
        print("\nDecensoring complete!")

    #decensors one image at a time
    #TODO: decensor all cropped parts of the same image in a batch (needs an array of cropped images as input, plus additional changes)
    def decensor_image(self, ori, colored, file_name=None):
        width, height = ori.size
        #save the alpha channel if the image has one
        has_alpha = False
        if ori.mode == "RGBA":
            has_alpha = True
            alpha_channel = np.asarray(ori)[:, :, 3]
            alpha_channel = np.expand_dims(alpha_channel, axis=-1)
            ori = ori.convert('RGB')

        ori_array = image_to_array(ori)
        ori_array = np.expand_dims(ori_array, axis=0)

        if self.is_mosaic:
            #if mosaic decensor, the mask is taken from the colored image
            # mask = np.ones(ori_array.shape, np.uint8)
            colored = colored.convert('RGB')
            color_array = image_to_array(colored)
            color_array = np.expand_dims(color_array, axis=0)
            mask = self.get_mask(color_array)
            mask_reshaped = mask[0, :, :, :] * 255.0
            mask_img = Image.fromarray(mask_reshaped.astype('uint8'))
            # mask_img.show()
        else:
            mask = self.get_mask(ori_array)

        #the colored image is only used for finding the regions
        regions = find_regions(colored.convert('RGB'), [v * 255 for v in self.mask_color])
        print("Found {region_count} censored regions in this image!".format(region_count=len(regions)))

        if len(regions) == 0 and not self.is_mosaic:
            print("No green regions detected! Make sure you're using exactly the right color.")
            return

        output_img_array = ori_array[0].copy()

        for region_counter, region in enumerate(regions, 1):
            bounding_box = expand_bounding(ori, region, expand_factor=1.5)
            crop_img = ori.crop(bounding_box)
            # crop_img.show()
            #convert the mask back to an image
            mask_reshaped = mask[0, :, :, :] * 255.0
            mask_img = Image.fromarray(mask_reshaped.astype('uint8'))
            #resize the cropped image
            crop_img = crop_img.resize((256, 256))
            crop_img_array = image_to_array(crop_img)
            #crop and resize the mask image
            mask_img = mask_img.crop(bounding_box)
            mask_img = mask_img.resize((256, 256))
            # mask_img.show()
            #convert mask_img back to an array
            mask_array = image_to_array(mask_img)
            #the mask has been upscaled, so there will be values not equal to 0 or 1

            # mask_array[mask_array > 0] = 1
            # crop_img_array[..., :-1][mask_array == 0] = (0, 0, 0)

            if not self.is_mosaic:
                a, b = np.where(np.all(mask_array == 0, axis=-1))
                crop_img_array[a, b, :] = 0.
                temp = Image.fromarray((crop_img_array * 255.0).astype('uint8'))
                # temp.show()

            # if self.is_mosaic:
            #     a, b = np.where(np.all(mask_array == 0, axis=-1))
            #     coords = [coord for coord in zip(a, b) if ((coord[0] + coord[1]) % 2 == 0)]
            #     a, b = zip(*coords)
            #     mask_array[a, b] = 1
            #     mask_array = mask_array * 255.0
            #     img = Image.fromarray(mask_array.astype('uint8'))
            #     img.show()
            #     return

            crop_img_array = np.expand_dims(crop_img_array, axis=0)
            mask_array = np.expand_dims(mask_array, axis=0)

            #rescale pixel values from [0, 1] to [-1, 1] for the model
            crop_img_array = crop_img_array * 2.0 - 1

            # Run the prediction for this cropped image
            pred_img_array = self.model.predict(crop_img_array, crop_img_array, mask_array)

            pred_img_array = np.squeeze(pred_img_array, axis=0)
            pred_img_array = (255.0 * ((pred_img_array + 1.0) / 2.0)).astype(np.uint8)

            #scale the prediction image back to its original size
            bounding_width = bounding_box[2] - bounding_box[0]
            bounding_height = bounding_box[3] - bounding_box[1]
            #convert the np array back to an image
            pred_img = Image.fromarray(pred_img_array.astype('uint8'))
            # pred_img.show()
            pred_img = pred_img.resize((bounding_width, bounding_height), resample=Image.BICUBIC)
            # pred_img.show()

            pred_img_array = image_to_array(pred_img)
            pred_img_array = np.expand_dims(pred_img_array, axis=0)

            # copy the decensored regions into the output image
            for i in range(len(ori_array)):
                for col in range(bounding_width):
                    for row in range(bounding_height):
                        bounding_width_index = col + bounding_box[0]
                        bounding_height_index = row + bounding_box[1]
                        if (bounding_width_index, bounding_height_index) in region:
                            output_img_array[bounding_height_index][bounding_width_index] = pred_img_array[i, :, :, :][row][col]
            print("{region_counter} out of {region_count} regions decensored.".format(region_counter=region_counter, region_count=len(regions)))

        output_img_array = output_img_array * 255.0

        #restore the alpha channel if the image had one
        if has_alpha:
            output_img_array = np.concatenate((output_img_array, alpha_channel), axis=2)

        output_img = Image.fromarray(output_img_array.astype('uint8'))

        if file_name is not None:
            #save the decensored image
            #file_name, _ = os.path.splitext(file_name)
            save_path = os.path.join(self.args.decensor_output_path, file_name)
            output_img.save(save_path)
            print("Decensored image saved to {save_path}!".format(save_path=save_path))
            return
        else:
            print("Decensored image. Returning it.")
            return output_img

if __name__ == '__main__':
    decensor = Decensor()
    decensor.decensor_all_images_in_folder()
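The pixel-copy step at the end of `decensor_image` iterates over every pixel of the bounding box in pure Python, which is slow for large regions. A vectorized alternative using a NumPy boolean mask is sketched below; `paste_region` is a hypothetical helper, written under the assumption that `region` is a set of (x, y) tuples in full-image coordinates, as the loop above treats it:

```python
import numpy as np

def paste_region(output_img, pred_img, region, bounding_box):
    """Copy predicted pixels into the output image only where (x, y) is in region.

    output_img:   (H, W, 3) float array, the full image being assembled
    pred_img:     (bh, bw, 3) float array, prediction resized to the bounding box
    region:       set of (x, y) tuples in full-image coordinates
    bounding_box: (left, upper, right, lower), as used by PIL's crop()
    """
    left, upper, right, lower = bounding_box
    mask = np.zeros(output_img.shape[:2], dtype=bool)
    xs, ys = zip(*region)
    mask[list(ys), list(xs)] = True            # rows are y, columns are x
    box_mask = mask[upper:lower, left:right]   # restrict mask to the bounding box
    output_img[upper:lower, left:right][box_mask] = pred_img[box_mask]
    return output_img

# Tiny demo: paste a 2x2 white prediction into a 4x4 black image at one pixel
out = np.zeros((4, 4, 3))
pred = np.ones((2, 2, 3))
out = paste_region(out, pred, {(1, 1)}, (0, 0, 2, 2))
print(out[1, 1])  # -> [1. 1. 1.]
```

The boolean-mask assignment replaces the three nested loops with a single NumPy operation, so the per-pixel membership test `(x, y) in region` runs once during mask construction instead of once per bounding-box pixel.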
7 changes: 4 additions & 3 deletions docs/INSTALLATION.md
@@ -5,14 +5,15 @@ You can download the latest release [here](https://github.com/deeppomf/DeepCream
Binary only available for Windows 64-bit.

## Run Code Yourself
-If you want to run the code yourself, you can clone this repo and download the model from https://drive.google.com/open?id=1Nzh2KAUO_rYPu2vDM_YP1EgbWtjI2hRX. Unzip the file into the /models/ folder.
+If you want to run the code yourself, you can clone this repo and download the model from https://drive.google.com/open?id=1YbN0iPS-RDJaCyyaBRJon-tMsFgrUNSH. Unzip the file into the /models/ folder.

### Dependencies (for running the code yourself)
- Python 3.6.7
-- TensorFlow 1.12
+- TensorFlow 1.14
- Keras 2.2.4
- Pillow
- h5py
- Scipy
- OpenCV

No GPU required! Tested on Ubuntu 16.04 and Windows. TensorFlow on Windows is compatible with Python 3 and not Python 2. TensorFlow is not compatible with Python 3.7.
