This project was completed for my Computer Science capstone.
I found the research paper for a CNN (referred to here as the original CNN) that inspired me to build an improved CNN that works on larger images and uses a smaller, more relevant part of the image during training.
The papers on PReLU and Separable Convolution layers informed the design of the new CNN.
Download the PyCharm IDE and open the project folder. Configure the project to use the latest version of Python 3.11.
PyCharm can then pull in the required dependencies for you automatically.
If you don't want to use PyCharm, or you are on a Linux distribution, I recommend using Miniconda to install the correct Python version and gather the dependencies.
Run the MakeCSV.py file: python make_csv.py
The MakeCSV.py file will make a folder called "images" in the same directory as MakeCSV.py. The "images" folder will contain three folders: "test", "train", and "val". Inside the "train" and "val" folders are two more folders, "fake_image" and "real_image". Put the real and fake images you have into their proper folders.
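For reference, the folder scaffold described above can be sketched in a few lines of Python. This is only an illustration of the layout, not the actual MakeCSV.py code, and the function name `make_image_dirs` is hypothetical:

```python
import os

def make_image_dirs(root: str = "images") -> None:
    """Sketch of the directory layout described above: a "test" folder,
    plus "train" and "val" folders that each contain the two class folders."""
    os.makedirs(os.path.join(root, "test"), exist_ok=True)
    for split in ("train", "val"):
        for cls in ("fake_image", "real_image"):
            os.makedirs(os.path.join(root, split, cls), exist_ok=True)
```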
Then run MakeCSV.py again: python make_csv.py
This will make a .csv file that contains the file name and class of each image. Run this script every time you add, remove, or edit images in the folders.
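A minimal sketch of how such a CSV could be generated follows. This is an assumption about what MakeCSV.py does, not its actual code; the function name `write_image_csv`, the column headers, and the accepted file extensions are all illustrative:

```python
import csv
from pathlib import Path

def write_image_csv(images_root: str, out_csv: str) -> int:
    """Walk the train/val/test folders and record each image's relative
    path and class label (taken from its parent folder's name).
    Returns the number of images recorded."""
    rows = []
    for split in ("train", "val", "test"):
        split_dir = Path(images_root) / split
        if not split_dir.is_dir():
            continue
        for img in sorted(split_dir.rglob("*")):
            if img.suffix.lower() in (".jpg", ".jpeg", ".png"):
                # the parent folder ("fake_image"/"real_image") is the class
                rows.append((str(img.relative_to(images_root)), img.parent.name))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(("filename", "class"))
        writer.writerows(rows)
    return len(rows)
```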
Running either one of the CNN files will start the training process. At the end, a .csv file containing the training and validation metrics for each epoch will be written out.
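One way to use that per-epoch CSV is to find the best epoch after training. The sketch below assumes a `val_accuracy` column; the function name `best_epoch` and the column names are assumptions, so check your CSV's actual header row:

```python
import csv

def best_epoch(history_csv: str, metric: str = "val_accuracy") -> int:
    """Return the 1-based epoch with the highest value of `metric`.
    Column names are assumptions -- check your CSV's header row."""
    with open(history_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    return max(range(len(rows)), key=lambda i: float(rows[i][metric])) + 1
```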
A model file in both the HDF5 and Keras formats will be written to the modelfolder directory upon completion.
You can use the model files in a variety of ways; the deep_fake_finder.py file is one example. As that file does, you will have to resize the images and put them in the proper format for the models to work.
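The preprocessing step can be sketched as follows. This is not the code from deep_fake_finder.py; the 224x224 input size, the [0, 1] scaling, and the function name `preprocess` are assumptions, so match them to your model's actual input shape (nearest-neighbour resizing is used here only for brevity):

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize an (H, W, 3) uint8 image to the model's expected input
    (nearest-neighbour for brevity), scale to [0, 1], and add a batch axis.
    The resulting array can be passed to a loaded Keras model's predict()."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    resized = image[rows[:, None], cols[None, :]]
    batch = resized.astype("float32") / 255.0
    return batch[None, ...]  # shape (1, size, size, 3)
```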
The heatmap files show which pixels of the image are being used, and at what scale, at a given step. The idea was that my CNN used a much smaller part of the image, focusing on features like the eyes, ears, and hands, on images that started out much larger than those the original CNN took in.
Link to original repository: https://github.com/BinaryGears/KerasDeepFakeDetection/
The original CNN is based on this paper: https://doi.org/10.1109/ACCESS.2023.3251417
The PReLU function paper is here: https://arxiv.org/pdf/1502.01852
The Separable Convolution layer paper is here: https://arxiv.org/pdf/1610.02357/1000
Visualkeras citation:
@misc{Gavrikov2020VisualKeras,
  author = {Gavrikov, Paul},
  title = {visualkeras},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/paulgavrikov/visualkeras}},
}