<div align="center"> <h1> the Deepfake Offensive Toolkit </h1>


<a href="https://colab.research.google.com/github/sensity-ai/dot/blob/main/notebooks/colab_demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=20></a>

</div>

dot (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. dot is built for penetration testing against systems such as identity verification and video conferencing, for use by security analysts, Red Team members, and biometrics researchers.

If you want to learn more about how dot is used for penetration tests with deepfakes in industry, read these articles by The Verge and Biometric Update.

dot is developed for research and demonstration purposes. As an end user, you have the responsibility to obey all applicable laws when using this program. Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

<p align="center"> <img src="./assets/dot_intro.gif" width="500"/> </p>

How it works

In a nutshell, dot works like this:

    flowchart LR;
        A(your webcam feed) --> B(suite of realtime deepfakes);
        B(suite of realtime deepfakes) --> C(virtual camera injection);
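
Conceptually, these three stages form a capture/transform/inject loop. The following is a minimal sketch of that loop, not dot's actual implementation: apply_deepfake is a hypothetical placeholder for the face-swap model, and OpenCV plus pyvirtualcam are assumed as the capture and injection backends.

    import cv2
    import pyvirtualcam

    def apply_deepfake(frame):
        # Hypothetical stand-in for dot's real-time face-swap step.
        return frame

    cap = cv2.VideoCapture(0)              # your webcam feed
    ok, frame = cap.read()
    assert ok, "could not read from the webcam"
    height, width = frame.shape[:2]

    with pyvirtualcam.Camera(width=width, height=height, fps=30) as cam:
        while ok:
            swapped = apply_deepfake(frame)    # suite of realtime deepfakes
            # pyvirtualcam expects RGB frames; OpenCV delivers BGR.
            cam.send(cv2.cvtColor(swapped, cv2.COLOR_BGR2RGB))  # virtual camera injection
            cam.sleep_until_next_frame()
            ok, frame = cap.read()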

None of the deepfakes supported by dot require additional training: they work in real time, on the fly, on any photo chosen as the target of the face impersonation. Supported methods: SimSwap, SimSwapHQ, FOMM, and FaceSwap CV2 (see the CLI usage examples below).

Running dot

Graphical interface

GUI Installation

Download and run the dot executable for your OS:

GUI Usage

Usage example:

  1. Specify the source image in the source field.
  2. Specify the camera id number in the target field. In most cases, 0 is the correct camera id.
  3. Specify the config file in the config_file field. Select a default configuration from the dropdown list or use a custom file.
  4. (Optional) Check the use_gpu field to run on the GPU.
  5. Click the RUN button to start the deepfake.

For more information about each field, open the Help/Usage menu.

Watch the following demo video for a better understanding of the interface:

<p align="center"> <img src="./assets/gui_dot_demo.gif" width="500" height="406"/> </p>

Command Line

CLI Installation

Install Pre-requisites
Create Conda Environment

These instructions assume that you have Miniconda installed on your machine. If you don't, refer to this link for installation instructions.

With GPU Support
conda env create -f envs/environment-gpu.yaml
conda activate dot

Install the torch and torchvision dependencies based on the CUDA version installed on your machine:
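
For example, on a machine with CUDA 11.8 the install typically looks like the line below; the wheel index URL must match your CUDA version, so check pytorch.org for the exact command:

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118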

To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.cuda.is_available())". If the output is True, the dependencies are installed with CUDA support.

With MPS Support (Apple Silicon)
conda env create -f envs/environment-apple-m2.yaml
conda activate dot

To check that torch and torchvision are installed correctly, run the following command: python -c "import torch; print(torch.backends.mps.is_available())". If the output is True, the dependencies are installed with Metal programming framework support.

With CPU Support (slow, not recommended)
conda env create -f envs/environment-cpu.yaml
conda activate dot
Install dot
pip install -e .
Download Models

CLI Usage

Run dot --help to get a full list of available options.

  1. Simswap

    dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
    
  2. SimSwapHQ

    dot -c ./configs/simswaphq.yaml --target 0 --source "./data" --use_gpu
    
  3. FOMM

    dot -c ./configs/fomm.yaml --target 0 --source "./data" --use_gpu
    
  4. FaceSwap CV2

    dot -c ./configs/faceswap_cv2.yaml --target 0 --source "./data" --use_gpu
    
    

Note: To enable face superresolution, use the flag --gpen_type gpen_256 or --gpen_type gpen_512. To use dot on CPU (not recommended), do not pass the --use_gpu flag.
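
For instance, combining SimSwap with 256px face superresolution on GPU would look like this (assembled from the flags above):

    dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu --gpen_type gpen_256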

Controlling dot with CLI

Disclaimer: the following demonstration uses the SimSwap technique.

Running dot via any of the above methods generates a real-time deepfake on the input video feed, using the source images from the data/ folder.

<p align="center"> <img src="./assets/dot_run.gif" width="500"/> </p>

While dot is running, a list of available control options appears in the terminal window, as shown above. You can toggle through the source images and select a different one by pressing the associated control key.

Watch the following demo video for a better understanding of the control options:

<p align="center"> <img src="./assets/dot_demo.gif" width="480"/> </p>

Docker

Setting up docker

Connect docker to the webcam

Ubuntu

  1. Build the container

    docker build -t dot -f Dockerfile .
    
  2. Run the container

    xhost +
    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY \
-v "$(pwd)":/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
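
Once inside the container shell, dot can be launched just as in the CLI usage section, for example (assuming the repository is mounted at /dot as in the command above):

    cd /dot
    dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu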
    

Windows

  1. Follow the instructions here under Windows to set up the webcam with docker.

  2. Build the container

    docker build -t dot -f Dockerfile .
    
  3. Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=192.168.99.1:0 \
-v "$(pwd)":/dot \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    dot
    

macOS

  1. Follow the instructions here to set up the webcam with docker.

  2. Build the container

    docker build -t dot -f Dockerfile .
    
  3. Run the container

    docker run -ti --gpus all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e PYTHONUNBUFFERED=1 \
    -e DISPLAY=$IP:0 \
-v "$(pwd)":/dot \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --runtime nvidia \
    --entrypoint /bin/bash \
    -p 8080:8080 \
    --device=/dev/video0:/dev/video0 \
    dot
    

Virtual Camera Injection

Instructions vary depending on your operating system.

Windows

Ubuntu

sudo apt update
sudo apt install v4l-utils v4l2loopback-dkms v4l2loopback-utils
sudo modprobe v4l2loopback devices=1 card_label="OBS Cam" exclusive_caps=1
v4l2-ctl --list-devices
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt install obs-studio

Open OBS Studio and check whether Tools --> v4l2sink exists. If it doesn't, follow these instructions:

mkdir -p ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
ln -s /usr/lib/obs-plugins/v4l2sink.so ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/

Use the virtual camera with OBS Studio: open Tools --> v4l2sink, select the v4l2loopback device created above, and start the sink.

macOS

Run dot with an Android emulator

If you are performing a test against a mobile app, virtual cameras are much harder to inject. An alternative is to run the app in a mobile emulator and still rely on virtual camera injection.

Speed

With GPU

Tested on an AMD Ryzen 5 2600 Six-Core Processor with one NVIDIA GeForce RTX 2070.

| Method | FPS |
| --- | --- |
| Simswap | 13.0 |
| Simswap + gpen 256 | 7.0 |
| SimswapHQ | 11.0 |
| FOMM | 31.0 |

With Apple Silicon

Tested on a MacBook Air (M2, 2022) with 16GB of RAM.

| Method | FPS |
| --- | --- |
| Simswap | 3.2 |
| Simswap + gpen 256 | 1.8 |
| SimswapHQ | 2.7 |
| FOMM | 2.0 |

License

This is not a commercial Sensity product, and it is distributed freely with no warranties.

The software is distributed under the BSD 3-Clause license. dot utilizes several open-source libraries; if you use dot, make sure you agree with their licenses too. In particular, this codebase is built on top of the following research projects:

Contributing

If you have ideas for improving dot, feel free to open relevant Issues and PRs. Please read CONTRIBUTING.md before contributing to the repository.

Maintainers

Contributors

<a href="https://github.com/sensity-ai/dot/graphs/contributors"> <img src="https://contrib.rocks/image?repo=sensity-ai/dot" /> </a>

Run dot on pre-recorded image and video files

FAQ

Make sure that you are running dot on a GPU by passing the --use_gpu flag; CPU is not recommended. If you still find it too slow, you may be running it on an older GPU model with less than 8GB of memory.

You can use dot on a pre-recorded video file with these scripts, or try it directly on Colab.