DeepFake Detection made easy
DeepSafe consists of 3 tools.
- WebApp - The webapp has two modes. The one live on GCP is limited to a single model each for image and video deepfake detection; the other includes multiple detectors the user can choose from a drop-down. The latter ships with rich features out of the box, designed so that users can add their own custom detectors and use all the existing ones with minimal changes.
- DeepSafe API - A Flask API that returns the probability of an image being a deepfake as JSON.
- Chrome Extension - This extension works right from the browser to help identify any video the user is watching on the internet. Currently, it redirects the user to the DeepSafe webapp with the current URL.
WebApp - Live here
This is a limited-access app; for full access, please contact me.
- Clone the repository:
git clone https://github.com/siddharthksah/DeepSafe
cd DeepSafe
Main supported Python version: 3.8
Other supported versions: 3.7 & 3.9
- Creating conda environment
conda create -n deepsafe python==3.8 -y
conda activate deepsafe
- Install dependencies:
pip install -r requirements.txt
- Download Model Weights
| Service | Google Drive | Mega Drive |
| --- | --- | --- |
| Link | Google Drive | Mega Drive |
The weights can also be downloaded with gdown if these drives are inaccessible in your area.
pip install gdown
# to upgrade
pip install --upgrade gdown
import gdown
# this takes a while because the folder is quite big, about 3.4 GB
url = "https://drive.google.com/drive/folders/1Gan21zLaPD0wHbNF3P3a7BzgKE91BOpq?usp=sharing"
gdown.download_folder(url, quiet=True, use_cookies=False)
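Once the download finishes, a quick sanity check can confirm the weights are in place. This is just a sketch; the directory name passed in is whatever folder gdown actually created:

```python
import os

def folder_size_gb(path):
    """Total size of all files under path, in gigabytes."""
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(path)
        for name in files
    ) / 1e9

# Hypothetical usage -- "weights" is an assumed directory name:
# print(f"weights: {folder_size_gb('weights'):.1f} GB")
```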
- Starting the application:
streamlit run main.py
#Base Image to use
FROM python:3.7.9-slim
#Expose port 8080
EXPOSE 8080
#Optional - install git to fetch packages directly from github
RUN apt-get update && apt-get install -y git
RUN apt-get install ffmpeg libsm6 libxext6 -y
#Copy Requirements.txt file into app directory
COPY requirements.txt app/requirements.txt
#install all requirements in requirements.txt
RUN pip install -r app/requirements.txt
#Copy all files in current directory into app directory
COPY . /app
#Change Working Directory to app directory
WORKDIR /app
#Run the application on port 8080
ENTRYPOINT ["streamlit", "run", "main.py", "--server.port=8080", "--server.address=0.0.0.0"]
Building the Docker Image
docker build -f Dockerfile -t app:latest .
Running the docker image and creating the container
docker run -p 8080:8080 app:latest
Note that the Dockerfile above serves the app on port 8080, so map that port rather than Streamlit's default 8501.
You might need to prefix the docker commands with sudo if your user does not have admin privileges.
After you have built your own Docker image, you can sign up for an account at https://hub.docker.com/. After verifying your email you are ready to upload your first Docker image.
- Log in on https://hub.docker.com/
- Click on Create Repository.
- Choose a name (e.g. verse_gapminder) and a description for your repository and click Create.
- Log into the Docker Hub from the command line
docker login --username=yourhubusername
using the username you registered with (recent Docker versions no longer accept the --email flag). Enter your password when prompted. If everything worked you will get a message similar to
WARNING: login credentials saved in /home/username/.docker/config.json
Login Succeeded
Check the image ID using
docker images
and what you will see will be similar to
REPOSITORY TAG IMAGE ID CREATED SIZE
verse_gapminder_gsl latest 023ab91c6291 3 minutes ago 1.975 GB
verse_gapminder latest bb38976d03cf 13 minutes ago 1.955 GB
rocker/verse latest 0168d115f220 3 days ago 1.954 GB
and tag your image
docker tag bb38976d03cf yourhubusername/verse_gapminder:firsttry
The number must match the image ID, and :firsttry is the tag. In general, a good tag is one that helps you understand what this container should be used with, or what it represents. If the container holds the analysis for a paper, consider using that paper's DOI or journal-issued serial number; if it is meant for a particular version of a code or data version-control repo, that is a good choice too.
Push your image to the repository you created
docker push yourhubusername/verse_gapminder
Command to build the application. Remember to change the project name and application name:
gcloud builds submit --tag gcr.io/<ProjectName>/<AppName> --project=<ProjectName>
Command to deploy the application
gcloud run deploy --image gcr.io/<ProjectName>/<AppName> --platform managed --project=<ProjectName> --allow-unauthenticated
To deploy the app on Google Cloud, you need a Google Cloud account and the gcloud CLI installed on your system.
Initiate GCloud
gcloud init
Set the project, billing, service account, region, and zone. For example, to set the region and zone to Mumbai, India:
gcloud config set compute/region asia-south1
gcloud config set compute/zone asia-south1-b
Enable the Container Registry and Cloud Run APIs by running the following command in the gcloud terminal:
gcloud services enable run.googleapis.com containerregistry.googleapis.com
Push the local image to the GCP Cloud Container Registry. The following command allows the local Docker engine to be used by the gcloud tool.
Quick note: make sure to select the right Google account if you are logging in from the browser and have multiple Google accounts.
gcloud auth configure-docker
The following step tags the local image as required by GCP.
docker tag st_demo:v1.0 gcr.io/<GCP PROJECT ID>/st_demo:v1.0
Push the local image to the GCP registry:
docker push gcr.io/<GCP PROJECT ID>/st_demo:v1.0
Finally, deploy on serverless Cloud Run. Run the following single-line command to deploy/host the app.
gcloud run deploy <service name> --image <gcp image name> --platform managed --allow-unauthenticated --region <your region> --memory 2Gi --timeout=3600
The arguments are:
- <service name>: user-supplied service name.
- <gcp image name>: the image pushed to GCP.
- <your region>: the region set during gcloud init.
- --platform managed: GCP-specific parameter; consult the GCP manual for further details.
- --allow-unauthenticated: GCP-specific parameter; consult the GCP manual for further details.
- --memory: memory to be allocated for the container deployment.
- --timeout: GCP-specific parameter; for a Streamlit deployment this should be set to a high value to avoid a timeout/connection error.
DeepSafe Chrome Extension is a Chrome browser extension that provides one-click access to DeepFake detection via the DeepSafe WebApp.
Get a copy of the browser extension. Either clone the repository:
git clone https://github.com/siddharthksah/DeepSafe-API
or download the zip file.
Install the unpacked extension in Chrome.
Click on the browser extension icon (the dog emoji 🐶). It should open DeepSafe WebApp in a new tab.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Distributed under the MIT License.
Follow the steps below to add the extension to your Google Chrome browser:
- Open the Extension Manager: Kebab menu (⋮) -> More Tools -> Extensions
- If developer mode is not turned on, turn it on by clicking the toggle in the top right corner
- Download the extension file from here
- Extract the downloaded .zip file and note the extracted path
- Now click on the Load unpacked button in the top left and select the extracted folder
A RESTful Flask API. DeepSafe-API combines the powerful features of the DeepSafe WebApp into an API. The DeepSafe WebApp is an open-source platform that integrates state-of-the-art DeepFake detection methods and provides a convenient interface for users to compare their custom detectors against SOTA, while also improving DeepFake literacy among the general public.
The code includes both the client and the server side. The image is saved locally before inference, but you can delete the save location or even process the image on the fly.
The output is JSON containing the deepfake probability, where a probability closer to 1 means the model thinks the image is a deepfake.
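As an illustration, the response can be parsed and turned into a label with a simple threshold. The 0.5 cutoff below is an arbitrary choice for this sketch, not something DeepSafe prescribes; the key name matches the server code in this repo:

```python
import json

def classify(response_text, threshold=0.5):
    """Parse the API's JSON response and apply a decision threshold."""
    data = json.loads(response_text)
    prob = float(data["Probability of DeepFake"])
    return "deepfake" if prob >= threshold else "real"
```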
- Clone the repository:
git clone https://github.com/siddharthksah/DeepSafe
cd DeepSafe/DeepSafe-API/v1
Main supported version : 3.8
Other supported versions : 3.7 & 3.9
- Creating conda environment
conda create -n deepsafe-api python==3.8 -y
conda activate deepsafe-api
- Install dependencies:
pip install -r requirements.txt
from flask import Flask, request, Response
import jsonpickle
import numpy as np
import cv2, os
from PIL import Image
import warnings
warnings.filterwarnings("ignore")

from predictor import predictor_CNN

# Initialize the Flask application
app = Flask(__name__)

# route http posts to this method
@app.route('/api_v1/', methods=['POST'])
def test():
    # convert the raw image bytes to a uint8 array
    nparr = np.frombuffer(request.data, np.uint8)
    # decode image (OpenCV decodes to BGR)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    # swap channels from BGR to RGB for PIL
    img = Image.fromarray(img)
    b, g, r = img.split()
    img = Image.merge("RGB", (r, g, b))
    if not os.path.exists('tempDir'):
        os.makedirs('tempDir')
    img.save('./tempDir/image.jpg', 'JPEG')
    # run prediction
    probab = predictor_CNN()
    response = {'Probability of DeepFake': probab}
    # encode response using jsonpickle
    response_pickled = jsonpickle.encode(response)
    return Response(response=response_pickled, status=200, mimetype="application/json")

# start flask app
if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5001)
# client: send an image to the DeepSafe API and print the JSON response
import requests
import json
import cv2
addr = 'http://localhost:5001'
test_url = addr + '/api_v1/'
# prepare headers for http request
content_type = 'image/jpeg'
headers = {'content-type': content_type}
img = cv2.imread('tempDir/image.jpg')
# encode image as jpeg
_, img_encoded = cv2.imencode('.jpg', img)
# send http request with image and receive response
response = requests.post(test_url, data=img_encoded.tobytes(), headers=headers)
print(json.loads(response.text))
DeepSafe acts as a platform onto which newer detection models can be incorporated. Any enhancement or contribution is welcome.