VALI

VALI is a video analytics and processing project for Python. It is a set of C++ libraries and Python bindings that provide full HW acceleration for video processing tasks such as decoding, encoding, transcoding, and GPU-accelerated color space and pixel format conversions.

VALI also supports DLPack and can share memory with any module that supports it (e.g., decoded surfaces can be shared with PyTorch).
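
As an illustration, here is a minimal sketch of decoding one frame and handing it to torch through DLPack. The class, method, and attribute names used below (PyDecoder, Surface.Make, DecodeSingleSurface, Format/Width/Height) are assumptions made for this sketch and may not match the current API exactly; see the documentation linked below for the actual interface.

import python_vali as vali
import torch

# Hypothetical decoder setup; constructor arguments are assumptions.
gpu_id = 0
decoder = vali.PyDecoder("input.mp4", {}, gpu_id)

# Allocate a GPU surface matching the decoder output and decode one frame into it.
surface = vali.Surface.Make(decoder.Format, decoder.Width, decoder.Height, gpu_id)
success, details = decoder.DecodeSingleSurface(surface)

if success:
    # DLPack lets torch wrap the decoded surface without copying it off the GPU.
    tensor = torch.from_dlpack(surface)
    print(tensor.shape, tensor.device)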

Documentation

https://romanarzumanyan.github.io/VALI

Prerequisites

VALI works on Linux (tested on Ubuntu 22.04) and Windows.

Licensing

VALI sources are available under the Apache 2.0 license.

Wheels contain FFmpeg libraries downloaded from https://github.com/BtbN/FFmpeg-Builds/releases, which are licensed under the LGPLv2.1.

This software uses code of FFmpeg (http://ffmpeg.org) licensed under the LGPLv2.1 (http://www.gnu.org/licenses/old-licenses/lgpl-2.1.html); its source can be downloaded from https://github.com/BtbN/FFmpeg-Builds.

Install from PyPI (Linux only for now)

python3 -m pip install python_vali

The information below is relevant only if you want to build VALI on your local machine.

Linux

Ubuntu 22.04 is recommended.

Install dependencies

apt install -y \
          wget \
          build-essential \
          git

Install CUDA Toolkit

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda
# Ensure nvcc is on your $PATH (most commonly already done by the CUDA installation)
export PATH=/usr/local/cuda/bin:$PATH

Install locally with pip

# Update git submodules
git submodule update --init --recursive
pip3 install .

To check whether VALI is correctly installed, run the following Python script:

import python_vali as vali

If using Docker via the NVIDIA Container Runtime, please make sure to enable the video driver capability (see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#driver-capabilities) via the NVIDIA_DRIVER_CAPABILITIES environment variable in the container or the --gpus command line parameter, e.g.:

docker run -it --rm --gpus 'all,"capabilities=compute,utility,video"' nvidia/cuda:12.1.0-base-ubuntu22.04

Windows

# Update git submodules
git submodule update --init --recursive
pip install .

To check whether VALI is correctly installed, run the following Python script:

import os

# Make the CUDA DLLs visible to the Python process before importing VALI.
cuda_path = os.environ["CUDA_PATH"]
os.add_dll_directory(os.path.join(cuda_path, "bin"))
import python_vali as vali

Docker

For convenience, we provide Docker images, located in the docker directory, that you can use to easily install all dependencies (docker and nvidia-docker are required).

DOCKER_BUILDKIT=1 sudo docker build \
  --tag vali-gpu \
  --file docker/Dockerfile \
  --build-arg PIP_INSTALL_EXTRAS=torch .

docker run -it --rm --gpus=all vali-gpu

PIP_INSTALL_EXTRAS can be any subset of the extras listed under project.optional-dependencies in pyproject.toml.
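
The same extras can also be used for a local (non-Docker) install via pip's standard extras syntax. The line below is only an example; it assumes the torch extra referenced by the Docker command above is defined in pyproject.toml, and the actual set of available extras may differ.

pip3 install ".[torch]"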

Offline documentation

Documentation for VALI can be generated from this repository:

pip install . # install VALI
pip install sphinx  # install documentation tool sphinx
cd docs
make html

You can then open _build/html/index.html with your browser.

Community Support

Please use the project's Discussions page for community support.
