
RIFE for Nuke

Introduction

This project brings RIFE (Real-Time Intermediate Flow Estimation for Video Frame Interpolation) to The Foundry's Nuke.

RIFE is a powerful frame interpolation neural network, capable of high-quality retimes and optical flow estimation.

This implementation allows RIFE to be used natively inside Nuke without any external dependencies or complex installations. It wraps the network in an easy-to-use Gizmo with controls similar to those in OFlow or Kronos.

Features

Examples

https://github.com/rafaelperez/RIFE-for-Nuke/assets/1684365/6b35a9ea-dee3-414f-9d99-6491ea3c0ff1

https://github.com/rafaelperez/RIFE-for-Nuke/assets/1684365/266f4733-4ed6-4806-accb-ae351d2318da

https://github.com/rafaelperez/RIFE-for-Nuke/assets/1684365/bac1cfd1-4877-438d-bbc8-26cda375dceb

https://github.com/rafaelperez/RIFE-for-Nuke/assets/1684365/6607f72c-1f1e-450d-b15d-d57c2d978bbe

Special thanks to:

Compatibility

Nuke 13.2v8+, tested on Linux and Windows.

⚠️ Nuke 13.2v7 on Linux unfortunately has a bug in the Inference node that throws the error MLPlanarIop -> Unknown Error.

Installation

  1. Download and unzip the latest release from here.
  2. Copy the extracted Cattery folder to .nuke or your plugins path.
  3. In the toolbar, choose Cattery > Update or simply restart Nuke.

RIFE will then be accessible under the toolbar at Cattery > Optical Flow > RIFE.
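After installing, you can sanity-check the expected file location from any Python shell. This is a minimal sketch that assumes the default ~/.nuke plugin path; adjust the path if you installed to a custom plugin directory.

```python
# Sketch: check that the Cattery folder landed in the default plugin path.
# ~/.nuke is an assumption; adjust if you use a custom NUKE_PATH entry.
import os

nuke_dir = os.path.expanduser("~/.nuke")
rife_cat = os.path.join(nuke_dir, "Cattery", "RIFE", "RIFE.cat")
print("RIFE.cat found:", os.path.isfile(rife_cat))
```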

[Image: Cattery menu in the Nuke toolbar]

⚠️ Extra Steps for Nuke 13

  1. Add the path for RIFE to your init.py:

import nuke
nuke.pluginAddPath('./Cattery/RIFE')

  2. Add a menu item to the toolbar in your menu.py:

import nuke
toolbar = nuke.menu("Nodes")
toolbar.addCommand('Cattery/Optical Flow/RIFE', 'nuke.createNode("RIFE")', icon="RIFE.png")

Options

[Image: RIFE node properties in Nuke]

Model

RIFE.cat uses the latest model from Practical RIFE, version v4.14 (2024.01.08).

The principal model IFNet has been modified for compatibility with TorchScript, allowing the model to be compiled into a .cat file (./nuke/Cattery/RIFE/RIFE.cat).

The .cat file can then be turned into a native Nuke Inference node through the CatFileCreator.

For more detailed information about the training data and technical specifics, please consult the original repository.

Compiling the Model

To retrain or modify the model for use with Nuke's CatFileCreator, you'll need to convert it into the TorchScript format (.pt). Below are the primary methods to achieve this:
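The conversion itself is handled by nuke_rife.py in this repository; conceptually, it boils down to scripting the network with TorchScript and saving it as a .pt file. Below is a minimal sketch using a stand-in module (TinyNet is hypothetical, not RIFE's IFNet):

```python
# Sketch: how a PyTorch module is compiled to a TorchScript .pt file.
# TinyNet is a stand-in; the real entry point is nuke_rife.py, which
# scripts the modified IFNet.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 0.5 + 0.5

model = TinyNet().eval()
scripted = torch.jit.script(model)   # fails here if the model is not TorchScript-compatible
scripted.save("tiny_net.pt")         # this .pt is what CatFileCreator ingests
```

CatFileCreator then loads the .pt inside Nuke and writes out the .cat file consumed by the Inference node.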

Cloud-Based Compilation (Recommended for Nuke 14+)

Google Colaboratory offers a free, cloud-based development environment ideal for experimentation or quick modifications. It's important to note that Colaboratory uses Python 3.10, which is incompatible with the PyTorch version (1.6.0) required by Nuke 13.

For those targeting Nuke 14 or 15, Colaboratory is a convenient choice.
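A quick interpreter check can tell you which route applies. The version cutoff below is an assumption based on PyTorch's official support matrix (PyTorch 1.6.0 supports Python up to 3.8):

```python
# Sketch: pick a compilation target based on the running Python version.
# PyTorch 1.6.0 (Nuke 13) officially supports Python 3.6-3.8;
# PyTorch 1.12 (Nuke 14+) works with newer interpreters such as 3.10.
import sys

major, minor = sys.version_info[:2]
if (major, minor) > (3, 8):
    print(f"Python {major}.{minor}: suitable for Nuke 14+ (PyTorch 1.12)")
else:
    print(f"Python {major}.{minor}: suitable for Nuke 13 (PyTorch 1.6.0)")
```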

The following Google Colab notebook provides a basic setup for compiling the TorchScript RIFE.pt model directly on Google's servers:

https://colab.research.google.com/drive/10TDRhwYiC9-pmNzi97BjVHFj9-br_GZ6

Local Compilation (Required for Nuke 13)

Compiling the model locally gives you full control over the versions of Python, PyTorch, and CUDA you use. Setting up older versions, however, can be challenging.

For Nuke 13, which requires PyTorch 1.6.0, using Docker is highly recommended, since there are no official PyTorch 1.6.0 packages built against CUDA 11.

Fortunately, Nvidia offers Docker images tailored for various GPUs. The Docker image version 20.07 is specifically suited for PyTorch 1.6.0 + CUDA 11 requirements.

Access to these images requires registration on Nvidia's NGC Catalog.

Once Docker is installed on your system, execute the following command to initiate a terminal within the required environment. You can then clone the repository and run python nuke_rife.py to compile the model.

docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:20.07-py3

For projects targeting Nuke 14+, which requires PyTorch 1.12, the Docker image version 22.05 is recommended:

docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:22.05-py3

For more information on selecting the appropriate Python, PyTorch, and CUDA combination, refer to Nvidia's Framework Containers Support Matrix.

License and Acknowledgments

RIFE.cat is licensed under the MIT License, and is derived from https://github.com/megvii-research/ECCV2022-RIFE.

While the MIT License permits commercial use of RIFE, the dataset used for its training may be under a non-commercial license.

This license does not cover the underlying pre-trained model, associated training data, and dependencies, which may be subject to further usage restrictions.

Consult https://github.com/megvii-research/ECCV2022-RIFE and https://github.com/hzwer/Practical-RIFE for more information on associated licensing terms.

Users are solely responsible for ensuring that the underlying model, training data, and dependencies align with their intended usage of RIFE.cat.

Citation

@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}