Home

Awesome

VSGAN-tensorrt-docker

Repository for running super resolution and video frame interpolation models, with a focus on speeding them up with TensorRT. This repository aims to contain the fastest inference code you can find, or at least that is what I am trying to achieve. Not all models can use TensorRT for various reasons, but I add support whenever it works. Further model architectures are planned to be added later on.

Table of contents

<!--ts--> <!--te-->

Currently working networks:

Onnx files can be found here.

Also used:

<div id='usage'/>

Usage

The docker image requires a recent Nvidia driver (560+); you can check your installed driver version as shown below. After that, follow these steps:
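A quick way to check the driver version on the host (assuming the Nvidia driver and nvidia-smi are already installed):

# prints the installed driver version, e.g. 560.xx
nvidia-smi --query-gpu=driver_version --format=csv,noheader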

WARNING FOR WINDOWS USERS: Docker Desktop 4.17.1 is broken. I confirmed that 4.25.0 should work. Older tested versions are 4.16.3 and 4.17.0. I recommend using 4.25.0. 4.17.1 results in Docker not starting, which is mentioned in this issue.

ANOTHER WARNING FOR PEOPLE WITHOUT AVX512: Instead of styler00dollar/vsgan_tensorrt:latest, which I build with my 7950x and thus with full AVX512 support, use styler00dollar/vsgan_tensorrt:latest_no_avx512 in compose.yaml to avoid Illegal instruction (core dumped), which is mentioned in this issue.

AND AS A FINAL NOTE, Error opening input file pipe: IS NOT A REAL ERROR MESSAGE. It means that invalid data got piped into ffmpeg, which can for example be error messages that were piped instead of video. To see the actual error messages and what got piped, run vspipe -c y4m inference.py - on its own.

Quickstart:

# if you have Windows, install Docker Desktop https://www.docker.com/products/docker-desktop/
# if you encounter issues, install one of the following versions:
# 4.16.3: https://desktop.docker.com/win/main/amd64/96739/Docker%20Desktop%20Installer.exe
# 4.17.0: https://desktop.docker.com/win/main/amd64/99724/Docker%20Desktop%20Installer.exe

# if you have Arch, install the following dependencies
yay -S docker nvidia-docker nvidia-container-toolkit docker-compose docker-buildx

# run the docker with docker-compose
# git clone the repo and cd into it; you need to be inside the vsgan folder before running the next step
# the folder should contain compose.yaml; folder mounts can be adjusted in that file
docker-compose run --rm vsgan_tensorrt

There are multiple containers to choose from. If you don't want the default, edit compose.yaml and set a different image tag (image: styler00dollar/vsgan_tensorrt:x) before running docker-compose run --rm vsgan_tensorrt.
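A minimal sketch of the relevant part of compose.yaml (the actual file in the repo contains more settings, such as GPU options; only the image: line needs to change, and the :minimal tag below is just an example):

services:
  vsgan_tensorrt:
    image: styler00dollar/vsgan_tensorrt:minimal  # set the tag you want here
    volumes:
      - /path/to/your/videos:/workspace/tensorrt  # adjust folder mounts as needed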

| docker image | compressed download | extracted container | short description |
| --- | --- | --- | --- |
| styler00dollar/vsgan_tensorrt:latest | 9gb | 17gb | default latest with trt10.4 |
| styler00dollar/vsgan_tensorrt:latest_no_avx512 | 9gb | 17gb | trt10.3 without avx512 (needs update, some plugins not included) |
| styler00dollar/vsgan_tensorrt:trt9.3 | 8gb | 15gb | trt9.3, use bfdb96a with this docker |
| styler00dollar/vsgan_tensorrt:trt9.3_no_avx512 | 8gb | 15gb | trt9.3 without avx512, use bfdb96a with this docker |
| styler00dollar/vsgan_tensorrt:minimal | 5gb | 10gb | trt10.3 + ffmpeg + mlrt + ffms2 + lsmash + bestsource |

Piping usage:

vspipe -c y4m inference.py - | ffmpeg -i pipe: example.mkv -y

If Docker does not want to start, try this before using it:

sudo systemctl start docker

Linux docker autostart:

sudo systemctl enable --now docker

The following is for people who want to run things from scratch. Manual ways of downloading the docker image:

# Download prebuilt image from dockerhub (recommended)
docker pull styler00dollar/vsgan_tensorrt:latest

# if you have `unauthorized: authentication required` problems, download the docker with
git clone https://github.com/NotGlop/docker-drag
cd docker-drag
python docker_pull.py styler00dollar/vsgan_tensorrt:latest
docker load -i styler00dollar_vsgan_tensorrt.tar

Manually building docker image from scratch:

# Build docker manually (only required if you want to build from scratch)
# This step is not needed if you already downloaded the docker image and is only required if you
# want to build it from scratch. Keep in mind that env variables are set differently on Windows and
# this command will only work on Linux. Run it inside the repository directory.
DOCKER_BUILDKIT=1 sudo docker build -t styler00dollar/vsgan_tensorrt:latest .
# If you want to rebuild from scratch or have errors, try to build without cache
DOCKER_BUILDKIT=1 sudo docker build --no-cache -t styler00dollar/vsgan_tensorrt:latest .

Manually run docker:

# git clone the repo and cd into it; you need to be inside the vsgan folder before running the following step
# the folder path before ":" will be mounted at the path that follows it
# contents of the vsgan folder should appear inside /workspace/tensorrt

sudo docker run --privileged --gpus all -it --rm -v /home/vsgan_path/:/workspace/tensorrt styler00dollar/vsgan_tensorrt:latest

# Windows is mostly similar, but the path needs to be changed slightly
# example for C://path, both of the following forms work:
docker run --privileged --gpus all -it --rm -v /mnt/c/path:/workspace/tensorrt styler00dollar/vsgan_tensorrt:latest
docker run --privileged --gpus all -it --rm -v //c/path:/workspace/tensorrt styler00dollar/vsgan_tensorrt:latest
<div id='usage-example'/>

Usage example

A small, minimal example of how to configure inference. If you only want to process one video, edit the video path in inference.py

video_path = "test.mkv"

and then edit inference_config.py.

Small example for upscaling with TensorRT:

import sys
import os

sys.path.append("/workspace/tensorrt/")
import vapoursynth as vs

core = vs.core
vs_api_below4 = vs.__api_version__.api_major < 4
core.num_threads = 8

core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")


def inference_clip(video_path="", clip=None):
    clip = core.bs.VideoSource(source=video_path)

    clip = vs.core.resize.Bicubic(clip, format=vs.RGBH, matrix_in_s="709")  # RGBS means fp32, RGBH means fp16
    clip = core.trt.Model(
        clip,
        engine_path="/workspace/tensorrt/2x_AnimeJaNai_V2_Compact_36k_op18_fp16_clamp.engine",  # read readme on how to build engine
        num_streams=2,
    )
    clip = vs.core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")  # you can also use YUV420P10 for example

    return clip

Small example for rife interpolation with TensorRT without scene change detection:

import sys

sys.path.append("/workspace/tensorrt/")  # make src/ importable before importing from it
import vapoursynth as vs
from src.rife_trt import rife_trt

core = vs.core
core.num_threads = 4

core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")


def inference_clip(video_path):
    clip = core.bs.VideoSource(source=video_path)

    clip = core.resize.Bicubic(
        clip, format=vs.RGBS, matrix_in_s="709"
    )  # RGBS means fp32, RGBH means fp16

    # interpolation
    clip = rife_trt(
        clip,
        multi=2,
        scale=1.0,
        device_id=0,
        num_streams=2,
        engine_path="/workspace/tensorrt/rife414_ensembleTrue_op18_fp16_clamp_sim.engine",  # read readme on how to build engine
    )

    clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
    return clip

More examples in custom_scripts/.

Then use the commands above to render. For example:

vspipe -c y4m inference.py - | ffmpeg -i pipe: example.mkv

The video will be rendered without audio and other attachments. You can add those manually to the ffmpeg command.
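For example, the audio from the source file can be copied into the output by giving ffmpeg a second input and mapping the streams (a sketch; adjust the file names and stream mappings to your files):

vspipe -c y4m inference.py - | ffmpeg -i pipe: -i test.mkv -map 0:v -map 1:a -c:a copy example.mkv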

To process videos in batch and copy their properties like audio and subtitles to another file, you need to use main.py. Edit the file paths and the file extension:

input_dir = "/workspace/tensorrt/input/"
output_dir = "/workspace/tensorrt/output/"
files = glob.glob(input_dir + "/**/*.webm", recursive=True)

and configure inference_config.py as desired. Afterwards just run

python main.py
<div id='individual-examples'/>

Individual examples

More parameter documentation can be found in the plugin repositories.

# video sources (pick one)
core.std.LoadPlugin(path="/usr/lib/x86_64-linux-gnu/libffms2.so")
clip = core.ffms2.Source(source=video_path)
clip = core.lsmas.LWLibavSource(source=video_path)
clip = core.bs.VideoSource(source=video_path)  # recommended

# descale
clip = core.descale.Debilinear(clip, 1280, 720)

# convert to RGB (optionally resizing at the same time)
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
clip = core.resize.Bicubic(clip, width=1280, height=720, format=vs.RGBS, matrix_in_s="709")

# clamp values to the 0-1 range
clip = core.akarin.Expr(clip, "x 0 1 clamp")
clip = clip.std.Expr("x 0 max 1 min")
clip = core.std.Limiter(clip, max=1, planes=[0, 1, 2])

# metrics and scene change detection (offs1 is the clip shifted by one frame, see the deduplication section)
clip = core.vmaf.Metric(clip, offs1, feature=2)
clip = core.misc.SCDetect(clip=clip, threshold=0.100)

from src.scene_detect import scene_detect
clip = scene_detect(clip, fp16=True, thresh=0.85, model=12)

# TensorRT inference with tiling (example: cugan)
core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")
clip = core.trt.Model(
    clip,
    engine_path="/workspace/tensorrt/cugan.engine",
    tilesize=[854, 480],
    overlap=[0, 0],
    num_streams=4,
)

# TensorRT inference with an extra noise level input (example: dpir)
core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")
strength = 10.0
noise_level = clip.std.BlankClip(format=vs.GRAYS, color=strength / 100)
clip = core.trt.Model(
    [clip, noise_level],
    engine_path="dpir.engine",
    tilesize=[1280, 720],
    num_streams=2,
)

# rife interpolation with TensorRT
from src.rife_trt import rife_trt
clip = rife_trt(clip, multi=2, scale=1.0, device_id=0, num_streams=2, engine_path="/workspace/tensorrt/rife46.engine")

# sharpening
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libawarpsharp2.so")
clip = core.warp.AWarpSharp2(clip, thresh=128, blur=2, type=0, depth=[16, 8, 8], chroma=0, opt=True, planes=[0, 1, 2], cplace="mpeg1")
clip = core.cas.CAS(clip, sharpness=0.5)

# color fix against a reference clip
import vs_colorfix
clip = vs_colorfix.average(clip, ref, radius=10, planes=[0, 1, 2], fast=False)

# temporal fix (the plugins below are loaded for it)
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libmvtools.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libfillborders.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libmotionmask.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libtemporalmedian.so")
from vs_temporalfix import vs_temporalfix
clip = vs_temporalfix(clip, strength=400, tr=6, exclude="[10 20]", debug=False)

# line darkening
from src.utils import FastLineDarkenMOD
clip = FastLineDarkenMOD(clip)
<div id='vs-mlrt'/>

vs-mlrt (C++ TRT)

You need to convert onnx models into TensorRT engines, and you need to do that on the same system where you want to run inference. Download onnx models from here or from my Github page. Inside the docker, run one of the following commands:

Good default choice:

trtexec --bf16 --fp16 --onnx=model.onnx --minShapes=input:1x3x8x8 --optShapes=input:1x3x720x1280 --maxShapes=input:1x3x1080x1920 --saveEngine=model.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5

If you have enough VRAM to fit the model multiple times, add --infStreams.

trtexec --bf16 --fp16 --onnx=model.onnx --minShapes=input:1x3x8x8 --optShapes=input:1x3x720x1280 --maxShapes=input:1x3x1080x1920 --saveEngine=model.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5 --infStreams=4

DPIR (color) needs 4 channels.

trtexec --bf16 --fp16 --onnx=model.onnx --minShapes=input:1x4x8x8 --optShapes=input:1x4x720x1280 --maxShapes=input:1x4x1080x1920 --saveEngine=model.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5

Rife v1 needs 8 channels.

trtexec --bf16 --fp16 --onnx=model.onnx --minShapes=input:1x8x64x64 --optShapes=input:1x8x720x1280 --maxShapes=input:1x8x1080x1920 --saveEngine=model.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5

Rife v2 needs 7 channels. Set the same shape everywhere to avoid build errors.

trtexec --bf16 --fp16 --onnx=model.onnx --minShapes=input:1x7x1080x1920 --optShapes=input:1x7x1080x1920 --maxShapes=input:1x7x1080x1920 --saveEngine=model.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5

My Shuffle Span has a static shape and needs dynamic conv to be in fp32 for lower precision to work.

trtexec --bf16 --fp16 --onnx=sudo_shuffle_span_op20_10.5m_1080p_onnxslim.onnx --saveEngine=sudo_shuffle_span_op20_10.5m_1080p_onnxslim.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT --skipInference --useCudaGraph --noDataTransfers --builderOptimizationLevel=5 --infStreams=4 --layerPrecisions=/dynamic/Conv:fp32 --precisionConstraints=obey

Put that engine path into inference_config.py.
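For example, the engine is then referenced by the core.trt.Model call in inference_config.py, as in the upscaling example earlier (a sketch; the engine file name is just a placeholder):

clip = core.trt.Model(
    clip,
    engine_path="/workspace/tensorrt/model.engine",  # engine built with trtexec above
    num_streams=2,
)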

Warnings:

<div id='deduplicated'/>

Deduplicated inference

Calculate similarity between frames with HomeOfVapourSynthEvolution/VapourSynth-VMAF and skip similar frames in interpolation tasks. The resulting frame properties in the clip are then used to decide which frames to skip.

import sys

sys.path.append("/workspace/tensorrt/")
import vapoursynth as vs
from src.rife_trt import rife_trt

core = vs.core

core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")


# calculate metrics
def metrics_func(clip):
    offs1 = core.std.BlankClip(clip, length=1) + clip[:-1]
    offs1 = core.std.CopyFrameProps(offs1, clip)
    return core.vmaf.Metric(clip, offs1, 2)

def inference_clip(video_path):
    interp_scale = 2
    clip = core.bs.VideoSource(source=video_path)

    # ssim
    clip_metric = vs.core.resize.Bicubic(
        clip, width=224, height=224, format=vs.YUV420P8, matrix_s="709"  # resize before ssim for speedup
    )
    clip_metric = metrics_func(clip_metric)
    clip_orig = core.std.Interleave([clip] * interp_scale)

    # interpolation
    clip = rife_trt(
        clip,
        multi=interp_scale,
        scale=1.0,
        device_id=0,
        num_streams=2,
        engine_path="/workspace/tensorrt/rife414_ensembleTrue_op18_fp16_clamp_sim.engine",
    )

    # skip frames based on calculated metrics
    # in this case if ssim > 0.999, then copy frame
    clip = core.akarin.Select([clip, clip_orig], clip_metric, "x.float_ssim 0.999 >")

    return clip

There are multiple metrics that can be used, but be aware that you may need to adjust the threshold value in vfi_inference.py, since they work differently. SSIM has a maximum of 1, while PSNR has a maximum of infinity. I recommend leaving the defaults unless you know what you are doing.

# 0 = PSNR, 1 = PSNR-HVS, 2 = SSIM, 3 = MS-SSIM, 4 = CIEDE2000
return core.vmaf.Metric(clip, offs1, 2)
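If you switch the metric, the threshold in the selection expression has to be on that metric's scale. A hedged sketch for PSNR (feature 0), assuming the score ends up in a frame property named float_psnr (the property name is hypothetical, check the VapourSynth-VMAF documentation for the exact name):

# copy the original frame if the PSNR between neighbouring frames is above ~50 dB (hypothetical property name)
clip = core.akarin.Select([clip, clip_orig], clip_metric, "x.float_psnr 50 >")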
<div id='shot-boundry'/>

Shot Boundary Detection

Detection is implemented in various ways. To use traditional scene change detection you can do:

clip_sc = core.misc.SCDetect(
  clip=clip,
  threshold=0.100
)

Afterwards you can call clip = core.akarin.Select([clip, clip_orig], clip_sc, "x._SceneChangeNext 1 0 ?") to apply it.

Or use models like this. Adjust thresh to a value between 0 and 1; a higher value requires more confidence before a frame is treated as a scene change.

clip_sc = scene_detect(
    clip,
    fp16=True,
    thresh=0.5,
    model=3,
)

Warning: Keep in mind that different models may require a different thresh to be good.

The "rife" models mean that optical flow is used as an additional input to the classification model. That should increase stability without a major speed decrease. Models that are not linked will be converted later.

Available onnx files:

Other models I trained but are not available due to various reasons:

Interesting observations:

Comparison to traditional methods:

I decided to only do scene change inference with ONNX Runtime (ORT) with the TensorRT backend to keep the code small and optimized.

Example usage:

from src.scene_detect import scene_detect
from src.rife_trt import rife_trt

core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")


clip_sc = scene_detect(
    clip,
    fp16=True,
    thresh=0.5,
    model=3,
)

clip = rife_trt(
    clip,
    multi=2,
    scale=1.0,
    device_id=0,
    num_streams=2,
    engine_path="/workspace/tensorrt/rife414_ensembleTrue_op18_fp16_clamp_sim.engine",
)

clip_orig = core.std.Interleave([clip_orig] * 2)  # 2 means interpolation factor here
clip = core.akarin.Select([clip, clip_orig], clip_sc, "x._SceneChangeNext 1 0 ?")
<div id='multi-gpu'/>

multi-gpu

Thanks to tepete, who figured it out, there is also a way to do inference on multiple GPUs.

stream0 = core.std.SelectEvery(core.trt.Model(clip, engine_path="models/engines/model.engine", num_streams=2, device_id=0), cycle=3, offsets=0)
stream1 = core.std.SelectEvery(core.trt.Model(clip, engine_path="models/engines/model.engine", num_streams=2, device_id=1), cycle=3, offsets=1)
stream2 = core.std.SelectEvery(core.trt.Model(clip, engine_path="models/engines/model.engine", num_streams=2, device_id=2), cycle=3, offsets=2)
clip = core.std.Interleave([stream0, stream1, stream2])
<div id='ddfi'/>

ddfi

To quickly explain what ddfi is: the repository Mr-Z-2697/ddfi-rife deduplicates frames and interpolates between the remaining ones. Duplicated frames normally create a stuttering visual effect; to mitigate that, a higher interpolation factor is used on scenes that contain duplicated frames.

Visual examples from that repository:

https://user-images.githubusercontent.com/74594146/142829178-ff08b96f-9ca7-45ab-82f0-4e95be045f2d.mp4

Example usage is in custom_scripts/ddfi_rife_dedup_scene_change/. As a quick summary, you need two processing passes: one pass to calculate metrics and another that does interpolation combined with VFRToCFR. You use deduped_vfi.py similarly to how you used main.py.
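After editing the file paths inside the script, it is started the same way as main.py (a sketch, assuming deduped_vfi.py is configured like main.py):

python deduped_vfi.py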

<div id='vfr'/>

VFR

Warning: Using variable frame rate video input will result in desync errors. To check whether a video is VFR, run

ffmpeg -i video_Name.mp4 -vf vfrdet -f null -

and look at the final line. If the VFR value is not zero, the video has a variable frame rate. Example:

[Parsed_vfrdet_0 @ 0x56518fa3f380] VFR:0.400005 (15185/22777) min: 1801 max: 3604)

To work around this issue, specify fpsnum and fpsden in inference_config.py

clip = core.ffms2.Source(source='input.mkv', fpsnum = 24000, fpsden = 1001, cache=False)

or convert everything to constant framerate with ffmpeg.

ffmpeg -i video_input.mkv -fps_mode cfr -crf 10 -c:a copy video_out.mkv

or use my vfr_to_cfr.py to process a folder.

<div id='benchmarks'/>

Benchmarks

Warnings:

| Compact (2x) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| rx470 vs+ncnn (np+no tile+tta off) | 2.7 | 1.6 | 0.6 |
| 1070ti vs+ncnn (np+no tile+tta off) | 4.2 | 2 | 0.9 |
| 1070ti (ONNX-TRT+FrameEval) | 12 | 6.1 | 2.8 |
| 1070ti (C++ TRT+FrameEval+num_streams=6) | 14 | 6.7 | 3 |
| 3060ti (ONNX-TRT+FrameEval) | ? | 7.1 | 3.2 |
| 3060ti (C++ TRT+FrameEval+num_streams=5) | ? | 15.97 | 7.83 |
| 3060ti VSGAN 2x | ? | 3.6 | 1.77 |
| 3060ti ncnn (Windows binary) 2x | ? | 4.2 | 1.2 |
| 3060ti Joey 2x | ? | 0.87 | 0.36 |
| 3070 (ONNX-TRT+FrameEval) | 20 | 7.55 | 3.36 |
| 3090¹ (ONNX-TRT+FrameEval) | ? | ? | 6.7 |
| 3090² (vs+TensorRT8.4+C++ TRT+vs_threads=20+num_streams=20+opset15) | 105 | 47 | 21 |
| 2x3090² (vs+TensorRT8.4+C++ TRT+num_streams=22+opset15) | 133 | 55 | 23 |
| V100 (Colab) (vs+CUDA) | 8.4 | 3.8 | 1.6 |
| V100 (Colab) (vs+TensorRT8+ONNX-TRT+FrameEval) | 8.3 | 3.8 | 1.7 |
| V100 (Colab High RAM) (vs+CUDA+FrameEval) | 29 | 13 | 6 |
| V100 (Colab High RAM) (vs+TensorRT7+ONNX-TRT+FrameEval) | 21 | 12 | 5.5 |
| V100 (Colab High RAM) (vs+TensorRT8.2GA+ONNX-TRT+FrameEval) | 21 | 12 | 5.5 |
| V100 (Colab High RAM) (vs+TensorRT8.4+C++ TRT+num-streams=15) | ? | ? | 6.6 |
| A100 (Colab) (vs+CUDA+FrameEval) | 40 | 19 | 8.5 |
| A100 (Colab) (vs+TensorRT8.2GA+ONNX-TRT+FrameEval) | 44 | 21 | 9.5 |
| A100 (Colab) (vs+TensorRT8.2GA+C++ TRT+ffmpeg+FrameEval+num_streams=50) | 52.72 | 24.37 | 11.84 |
| A100 (Colab) (vs+TensorRT8.2GA) (C++ TRT+x264 (--opencl)+FrameEval+num_streams=50) | 57.16 | 26.25 | 12.42 |
| A100 (Colab) (vs+onnx+FrameEval) | 26 | 12 | 4.9 |
| A100 (Colab) (vs+quantized onnx+FrameEval) | 26 | 12 | 5.7 |
| A100 (Colab) (jpg+CUDA) | 28.2 (9 Threads) | 28.2 (7 Threads) | 9.96 (4 Threads) |
| 4090 (TRT9.3+num_streams=3+(fp16+bf16)+RGBH+op18) | ? | ? / 92.3* | ? / 41.5* |
| 6700xt (vs_threads=4+mlrt ncnn) | ? / 7.7* | ? / 3.25* | ? / 1.45* |

| Compact (4x) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 1070ti TensorRT8 docker (ONNX-TensorRT+FrameEval) | 11 | 5.6 | X |
| 3060ti TensorRT8 docker (ONNX-TensorRT+FrameEval) | ? | 6.1 | 2.7 |
| 3060ti TensorRT8 docker 2x (C++ TRT+FrameEval+num_streams=5) | ? | 11 | 5.24 |
| 3060ti VSGAN 4x | ? | 3 | 1.3 |
| 3060ti ncnn (Windows binary) 4x | ? | 0.85 | 0.53 |
| 3060ti Joey 4x | ? | 0.25 | 0.11 |
| A100 (Colab) (vs+CUDA+FrameEval) | 12 | 5.6 | 2.9 |
| A100 (Colab) (jpg+CUDA) | ? | ? | 3 (4 Threads) |
| 4090³ (TensorRT8.4GA+10 vs threads+fp16) | ? | ? / 56* (5 streams) | ? / 19.4* (2 streams) |

| UltraCompact (2x) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 4090 (TRT9.1+num_threads=4+num_streams=2+(fp16+bf16)+RGBH+op18) | ? | ? / 113.7* | ? / 52.7* |
| 6700xt (vs_threads=4+mlrt ncnn) | ? / 14.5* | ? / 6.1* | ? / 2.76* |

| cugan (2x) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 1070ti (vs+TensorRT8.4+ffmpeg+C++ TRT+num_streams=2+no tiling+opset13) | 6 | 2.7 | OOM |
| V100 (Colab) (vs+CUDA+ffmpeg+FrameEval) | 7 | 3.1 | ? |
| V100 (Colab High RAM) (vs+CUDA+ffmpeg+FrameEval) | 21 | 9.7 | 4 |
| V100 (Colab High RAM) (vs+TensorRT8.4+ffmpeg+C++ TRT+num_streams=3+no tiling+opset13) | 30 | 14 | 6 |
| A100 (Colab High RAM) (vs+TensorRT8.4+x264 (--opencl)+C++ TRT+vs threads=8+num_streams=8+no tiling+opset13) | 53.8 | 24.4 | 10.9 |
| 3090² (vs+TensorRT8.4+ffmpeg+C++ TRT+vs_threads=8+num_streams=5+no tiling+opset13) | 79 | 35 | 15 |
| 2x3090² (vs+TensorRT8.4+ffmpeg+C++ TRT+vs_threads=12+num_streams=5+no tiling+opset13) | 131 | 53 | 23 |
| 4090 (TRT9.1+num_threads=4+num_streams=2+(fp16+bf16)+RGBH+op18) | ? | ? / 51* | ? / 22.7* |
| 6700xt (vs_threads=4+mlrt ncnn) | ? / 3.3* | ? / 1.3* | OOM (512px tiling ? / 0.39*) |

| ESRGAN 4x (64mb) (23b+64nf) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 1070ti TensorRT8 docker (Torch-TensorRT+ffmpeg+FrameEval) | 0.5 | 0.2 | >0.1 |
| 3060ti TensorRT8 docker (Torch-TensorRT+ffmpeg+FrameEval) | ? | 0.7 | 0.29 |
| 3060ti Cupscale (Pytorch) | ? | 0.13 | 0.044 |
| 3060ti Cupscale (ncnn) | ? | 0.1 | 0.04 |
| 3060ti Joey | ? | 0.095 | 0.043 |
| V100 (Colab) (Torch-TensorRT8.2GA+ffmpeg+FrameEval) | 1.8 | 0.8 | ? |
| V100 (Colab High VRAM) (C++ TensorRT8.2GA+x264 (--opencl)+FrameEval+no tiling) | 2.46 | OOM (OpenCL) | OOM (OpenCL) |
| V100 (Colab High VRAM) (C++ TensorRT8.2GA+x264+FrameEval+no tiling) | 2.49 | 1.14 | 0.47 |
| A100 (Colab) (Torch-TensorRT8.2GA+ffmpeg+FrameEval) | 5.6 | 2.6 | 1.1 |
| 3090² (C++ TRT+vs_threads=20+num_threads=2+no tiling+opset14) | 3.4 | 1.5 | 0.7 |
| 2x3090² (C++ TRT+vs_threads=20+num_threads=2+no tiling+opset14) | 7.0 | 3.2 | 1.5 |
| 4090 (TRT9.1+num_threads=4+num_streams=2+(fp16+bf16)+RGBS+op14) | ? | ? / 2.6* | ? / 1.2* |

Note: The official RealESRGAN-6b anime model uses 6 blocks and the ESRGAN architecture.

| RealESRGAN (4x) (6b+64nf) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 3060ti (vs+TensorRT8+ffmpeg+C++ TRT+num_streams=2) | ? | 1.7 | 0.75 |
| V100 (Colab High RAM) (vs+TensorRT8.2GA+x264 (--opencl)+C++ TRT+num_streams=1+no tiling) | 6.82 | 3.15 | OOM (OpenCL) |
| V100 (Colab High RAM) (vs+TensorRT8.2GA+x264+C++ TRT+num_streams=1+no tiling) | ? | ? | 1.39 |
| A100 (vs+TensorRT8.2GA+x264 (--opencl)+C++ TRT+num_streams=3+no tiling) | 14.65 | 6.74 | 2.76 |
| 3090² (C++ TRT+vs_threads=20+num_threads=2+no tiling+opset14) | 11 | 4.8 | 2.3 |
| 2x3090² (C++ TRT+vs_threads=10+num_threads=2+no tiling+opset14) | 22 | 9.5 | 4.2 |
| 4090 (TRT9.1+num_threads=4+num_streams=2+(fp16+bf16)+RGBH+op18) | ? | ? / 8.8* | ? / 3.9* |

Rife v2 refers to a custom implementation made by WolframRhodium. I would recommend avoiding int8 for 1080p, since the warping looks a bit broken. int8 seems usable at 720p and looks closer to bf16/fp16. TRT10.0-10.2 is slower than 9.3 and thus not recommended. TRT10.3 fixed GridSample and is thus recommended again. Windows seems slower than Linux by quite a margin. Not all setups show major improvement above 3 streams. There mostly seems to be no difference between optimization level 3 and 5.

| Rife4+vs (ensemble False) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| Rife 4.6 | --- | --- | --- |
| 4090 rife4.6 (Win11 vs-ncnn+num_streams=3+RGBS) | ? | ? | ? / 134.3* |
| 4090 rife4.6 (Arch KDE vs-rife+TRT10 (level 5)+num_streams=3+RGBH) | ? | ? / 827.1* | ? / 357.9* |
| 4090 rife4.6 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 294.5* |
| 4090 rife4.6 (Win11 VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op18) | ? | ? | ? / 372.7* |
| 4090 rife4.6 (Manjaro Gnome VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op18) | ? | ? / 1083.3* | ? / 469.9* |
| 4090 rife4.6 v2 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 442.4* |
| 4090 rife4.6 v2 (Win11 mlrt+TRT9.2 (level 3)+num_streams=8+RGBH) | ? | ? | ? / 480.2* |
| 4090 rife4.6 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=3+RGBH+op16 (fp16 converted mlrt onnx)) | ? | ? / 1228.4* | ? / 511* |
| 4090 rife4.6 v2 (Pop!_OS VSGAN+TRT10.3 (level 5)+num_streams=3+RGBH+op16 (fp16 converted mlrt onnx)) | ? | ? / 1364* | ? / 554.2* |
| Steam Deck rife4.6 (ncnn+RGBS) | ? | ? / 19.2* | ? / 8.8* |
| Rife 4.15 | --- | --- | --- |
| 4090 rife4.15 (Win11 vs-ncnn+num_streams=3+RGBS) | ? | ? | ? / 115.2* |
| 4090 rife4.15 (Arch KDE vs-rife+TRT10 (level 5)+num_streams=3+RGBH) | ? | ? / 506.3* | ? / 204.2* |
| 4090 rife4.15 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 237.7* |
| 4090 rife4.15 (Win11 VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op19) | ? | ? | ? / 205* |
| 4090 rife4.15 (Arch Gnome VSGAN (level 5)+TRT9.3+num_streams=3+(fp16+bf16)+RGBH+op19) | ? | ? | ? / 245.5* |
| 4090 rife4.15 v2 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 276.8* |
| 4090 rife4.15 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op20) | ? | ? / 930.9* | ? / 360.1* |
| 4090 rife4.15 v2 (Pop!_OS VSGAN+TRT10.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op20) | ? | ? / 954.8* | ? / 359.4* |
| Rife 4.15 (int8) | --- | --- | --- |
| 4090 rife4.15 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=3+(int8+fp16+bf16)+RGBH+op20) | ? | ? / 995.3* | ? / 424* |
| 4090 rife4.15 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=8+(int8+fp16+bf16)+RGBH+op20) | ? | ? / 1117.6* | ? / 444.5* |

| Rife4+vs (ensemble True) | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| Rife 4.6 | --- | --- | --- |
| 4090 rife4.6 (Win11 vs-ncnn+num_streams=3+RGBS) | ? | ? | ? / 89.5* |
| 4090 rife4.6 (Arch KDE vs-rife+TRT10 (level 5)+num_streams=3+RGBH) | ? | ? / 649.6* | ? / 237.7* |
| 4090 rife4.6 (Win11 mlrt+TRT9.3 (level 3)+num_streams=3) | ? | ? | ? / 226.7* |
| 4090 rife4.6 (Win11 VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op18) | ? | ? | ? / 228.7* |
| 4090 rife4.6 (Manjaro Gnome VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op18) | ? | ? / 671.4* | ? / 303.8* |
| 4090 rife4.6 v2 (Win11 mlrt+TRT9.3 (level 3)+num_streams=3) | ? | ? | ? / 251.8* |
| 4090 rife4.6 v2 (Arch KDE VSGAN (level 5)+TRT9.3+num_streams=3+RGBH+op16 (fp16 converted mlrt onnx)) | ? | ? / 843.8* | ? / 346.2* |
| Rife 4.15 | --- | --- | --- |
| 4090 rife4.15 (Win11 vs-ncnn+num_streams=3+RGBS) | ? | ? | ? / 67* |
| 4090 rife4.15 (Arch KDE vs-rife+TRT10 (level 5)+num_streams=3+RGBH) | ? | ? / 339.6* | ? / 142.2* |
| 4090 rife4.15 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 133.4* |
| 4090 rife4.15 (Win11 VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op19) | ? | ? | ? / 139.8* |
| 4090 rife4.15 (Manjaro Gnome VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op19) | ? | ? / 348.5* | ? / 149.6* |
| 4090 rife4.15 v2 (Win11 mlrt+TRT9.2 (level 3)+num_streams=3+RGBH) | ? | ? | ? / 147.3* |
| 4090 rife4.15 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=3+(fp16+bf16)+RGBH+op20) | ? | ? / 463.1* | ? / 181.3* |
| Rife 4.15 (int8) | --- | --- | --- |
| 4090 rife4.15 v2 (Arch KDE VSGAN+TRT9.3 (level 5)+num_streams=3+(int8+fp16+bf16)+RGBH+op20) | ? | ? / 557.5* | ? / 210.6* |

| GMFSS_union | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 4090 (num_threads=8, num_streams=3, RGBH, TRT8.6, matmul_precision=medium) | ? | ? / 44.6* | ? / 15.5* |

| GMFSS_fortuna_union | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 4090 (num_threads=8, num_streams=2, RGBH, TRT8.6.1, matmul_precision=medium) | ? | ? / 50.4* | ? / 16.9* |
| 4090 (num_threads=8, num_streams=2, RGBH, TRT8.6.1, matmul_precision=medium, @torch.compile(mode="default", fullgraph=True)) | ? | ? / 50.6* | ? / 17* |

| DPIR | 480p | 720p | 1080p |
| --- | --- | --- | --- |
| 4090 (TRT9.1+num_threads=4+num_streams=2+(fp16+bf16)+RGBH+op18) | ? | ? / 54* | ? / 24.4* |
<div id='license'/>

License

This repository uses code from other repositories, but the code I wrote myself is licensed under BSD3.