This repo is part of a project in the Asia University Machine Learning Camp 2018.

This repo contains part of the code for the paper "Image splicing localization via semi-global network and fully connected conditional random fields", accepted by the ECCV Workshop on Objectionable Content and Misinformation 2018 [PDF].
A spliced image is created from two authentic images: part of a donor image is masked out, and the selected region is pasted onto a host image after some operations (translating and rescaling the donor region). Sometimes post-processing techniques (such as a Gaussian filter on the border of the selected region) are also applied to the spliced region so that it blends in with the host image.

As shown in the figure below, we address the problem of image splicing localization: given an input image, localize the spliced region that was cut from another image. We formulate this as a classification task but, critically, instead of classifying the spliced region from the local patch alone, we leverage features from the whole image and the local patch together to classify each patch. We call this structure the Semi-Global Network. Our approach exploits the observation that the spliced region relates not only to local features (spliced edges) but also to global features (semantic information, illumination, etc.) of the whole image. We show that our method outperforms other state-of-the-art methods on the Columbia dataset.
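To make the two-branch idea concrete, here is a minimal PyTorch sketch: a local branch encodes a patch, a global branch encodes the resized full image, and their features are concatenated for patch classification. The backbone, layer sizes, and feature dimensions are illustrative assumptions, not the exact network from the paper.

```python
# Minimal sketch of the semi-global idea (illustrative assumptions only;
# not the paper's exact architecture).
import torch
import torch.nn as nn

class SemiGlobalNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def branch():
            # Small CNN encoder; the real network is larger.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
        self.local_branch = branch()   # encodes the patch (spliced edges)
        self.global_branch = branch()  # encodes the resized full image
        self.fc = nn.Linear(64 + 64, num_classes)

    def forward(self, patch, full_image):
        f_local = self.local_branch(patch).flatten(1)
        f_global = self.global_branch(full_image).flatten(1)
        # The classifier sees local and global evidence jointly.
        return self.fc(torch.cat([f_local, f_global], dim=1))
```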
- Python 3.6
- PyTorch 0.4
You first need to install the requirements with the following command:

```
pip install -r requirements.txt
```
We use the Columbia dataset for training and testing. We split all the spliced images (in subfolder `4cam_splc`) into three folds: training (65%), validation (15%), and testing (25%). To speed up training, we first build the patch dataset offline:

```
python tools/make_dataset_columbia /path/to/dataset
```

This script generates image patches and resized full images for training and testing (a sketch of this step follows the table below), giving the following dataset:
| | training | validation | testing |
| --- | --- | --- | --- |
| patches | 14k | 3k | 5k |
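For reference, here is a hypothetical sketch of what offline patch generation can look like: slide a window over each spliced image, label each patch from the ground-truth mask, and save a resized copy of the full image for the global branch. The patch size, stride, labeling rule, and file layout are all assumptions, not the actual logic of `tools/make_dataset_columbia`.

```python
# Hypothetical patch-generation sketch; sizes, stride, and the labeling
# rule are assumptions, not the repo's actual script.
from pathlib import Path

import numpy as np
from PIL import Image

PATCH, STRIDE = 64, 32

def make_patches(image_path, mask_path, out_dir):
    img = np.array(Image.open(image_path).convert('RGB'))
    mask = np.array(Image.open(mask_path).convert('L'))
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    h, w = mask.shape
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            window = mask[y:y + PATCH, x:x + PATCH]
            # Call a patch "spliced" when most of it lies in the masked region.
            label = int(window.mean() > 0.5 * 255)
            Image.fromarray(img[y:y + PATCH, x:x + PATCH]).save(
                out_dir / f'{y}_{x}_{label}.png')
    # Keep a resized full image alongside the patches for the global branch.
    Image.open(image_path).resize((256, 256)).save(out_dir / 'full.png')
```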
You need to modify the parameters in the shell script `train_local.sh` to train the model; the full parameter list can be found in `hybird.py`:

```
python hybird.py \
    --epochs 60 \
    --lr 1e-4 \
    -c checkpoint/local \
    --arch sgn \
    --train-batch 64 \
    --data columbia64 \
    --base-dir /Users/oishii/Dataset/columbia/
```
We use TensorboardX to monitor the training process; install it by following its README, then run the watching command:

```
tensorboard --logdir ./checkpoint
```
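If you are wiring up your own training loop, logging works along these lines; the tag names and values below are placeholders, not the repo's actual tags.

```python
# Sketch of logging scalars with tensorboardX so the command above can
# display them; tags and values are placeholders.
from tensorboardX import SummaryWriter

writer = SummaryWriter('./checkpoint/local')
for epoch in range(60):
    # ...train and evaluate here, then log the validation metrics...
    writer.add_scalar('val/label_loss', 0.0, epoch)
    writer.add_scalar('val/segmentation_loss', 0.0, epoch)
    writer.add_scalar('val/label_accuracy', 0.0, epoch)
    writer.add_scalar('val/segmentation_accuracy', 0.0, epoch)
writer.close()
```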
Here we show some sample results of our method; from left to right are the output label, the output mask, and the ground-truth mask:
Here are the label loss, segmentation loss, label accuracy, and segmentation accuracy on the validation set:
This work is partially supported by Jeju National University and JDC (Jeju Free International City Development Center).