TIDL - TI Deep Learning Product

TIDL is a comprehensive software product for accelerating Deep Neural Networks (DNNs) on TI's embedded devices. It supports heterogeneous execution of DNNs across Arm Cortex-A based MPUs, TI's latest-generation C7x DSP, and TI's DNN accelerator (MMA). TIDL is released as part of TI's Software Development Kit (SDK), along with additional computer vision functions and optimized libraries, including OpenCV. TIDL is available on a variety of embedded devices from Texas Instruments.

TIDL is a fundamental software component of TI's Edge AI solution. TI's Edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries. DNN-based product development requires two main streams of expertise:

TI's Edge AI solution provides the right set of tools for both of these categories:

The figure below illustrates the workflow of DNN development and deployment on TI devices:

TI EdgeAI Work Flow

EdgeAI TIDL Tools


Introduction

TIDL provides multiple deployment options through the industry-standard inference engines listed below. These inference engines are referred to as Open Source Runtimes (OSRT) in this document.

** AM68PA has a Cortex-A72 as its MPU; refer to the device TRM to determine which Cortex-A MPU your device contains.

This heterogeneous execution enables:

  1. Using OSRT as the top-level inference API for user applications
  2. Offloading subgraphs to the C7x/MMA for accelerated execution with TIDL
  3. Running optimized code on the Arm core for layers that are not supported by TIDL

The Edge AI TIDL Tools provided in this repository support model compilation and model inference. The diagram below illustrates the TFLite-based workflow as an example; the ONNX Runtime and TVM/Neo-AI Runtime workflows are similar.

<p align="center"> <img width = 500 src="./docs/images/tflrt_work_flow.png"> </p>
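The compile-then-infer flow in the diagram can be sketched in Python. The delegate library name (`tidl_model_import_tflite.so`) and option keys below follow the examples shipped with this repository, but they may differ between SDK versions — treat them as assumptions rather than a definitive API.

```python
def tidl_compile_options(artifacts_dir, tidl_tools_path):
    """Options passed to the TIDL import (compilation) delegate.

    Option names are taken from the repository's examples; check the
    version you have checked out before relying on them.
    """
    return {
        "artifacts_folder": artifacts_dir,   # where compiled model artifacts are written
        "tidl_tools_path": tidl_tools_path,  # value of the TIDL_TOOLS_PATH env var
        "accuracy_level": 1,                 # run calibration during quantization
    }


def compile_model(model_path, options):
    """Run the model once through the TIDL import delegate to produce artifacts."""
    # Imported locally: tflite_runtime is only available in the setup environment.
    import tflite_runtime.interpreter as tflite

    delegate = tflite.load_delegate("tidl_model_import_tflite.so", options)
    interpreter = tflite.Interpreter(
        model_path=model_path, experimental_delegates=[delegate])
    interpreter.allocate_tensors()
    # Invoking inference here performs calibration and writes the artifacts.
    interpreter.invoke()
```

For inference on the target, the same pattern applies with the runtime delegate instead of the import delegate, pointing `artifacts_folder` at the compiled artifacts.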

The table below summarizes the operations supported by this repository on an X86_PC and on TI's development board:

<div align="center">

| Operation | X86_PC | TI SOC | Python API | CPP API |
| :---: | :---: | :---: | :---: | :---: |
| Model Compilation | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
| Model Inference | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

</div>

What IS Supported

What IS NOT Supported

Supported Devices

<div align="center">

| Device Family (Product) | Environment Variable | Hardware Acceleration |
| :---: | :---: | :---: |
| AM62 | am62 | :x: |
| AM62A | am62a | :heavy_check_mark: |
| AM67A | am67a | :heavy_check_mark: |
| AM68PA | am68pa | :heavy_check_mark: |
| AM68A | am68a | :heavy_check_mark: |
| AM69A | am69a | :heavy_check_mark: |
| J721E (TDA4VM) | am68pa | :heavy_check_mark: |
| J721S2 (TDA4AL, TDA4VL) | am68a | :heavy_check_mark: |
| J722S | am67a | :heavy_check_mark: |
| J784S4 (TDA4AP, TDA4VP,<br /> TDA4AH, TDA4VH) | am69a | :heavy_check_mark: |

</div>
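The table above can be expressed as a lookup so scripts can derive the `SOC` environment variable from a product name. The names and values are copied from the table; the helper itself is illustrative and not part of this repository.

```python
# Device-to-SOC mapping, transcribed from the Supported Devices table.
DEVICE_TO_SOC = {
    "AM62": "am62",
    "AM62A": "am62a",
    "AM67A": "am67a",
    "AM68PA": "am68pa",
    "AM68A": "am68a",
    "AM69A": "am69a",
    "J721E (TDA4VM)": "am68pa",
    "J721S2 (TDA4AL, TDA4VL)": "am68a",
    "J722S": "am67a",
    "J784S4 (TDA4AP, TDA4VP, TDA4AH, TDA4VH)": "am69a",
}


def soc_for(device):
    """Return the SOC env-var value for a device family, e.g. for `export SOC=...`."""
    try:
        return DEVICE_TO_SOC[device]
    except KeyError:
        raise ValueError(f"Unknown device family: {device}")
```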

Setup

Note: Before continuing with the steps below, check out the tag compatible with the SDK version you are using on TI's evaluation board. Refer to the SDK Version compatibility table for the tag matching your SDK version.

<p align="center"> <kbd> <img src="./docs/images/git_tag.png" /> </kbd> </p>

Prerequisites for setup on X86_PC

<div align="center">

| OS | Python Version |
| :---: | :---: |
| Ubuntu 22.04 | 3.10 |

</div>

Setup on X86_PC

```shell
sudo apt-get install libyaml-cpp-dev libglib2.0-dev
```

Note: The `source` in the setup command below is important, as the script exports all required environment variables. Without it, users may encounter compilation/runtime issues.

```shell
git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
cd edgeai-tidl-tools
git checkout <TAG Compatible with your SDK version>
# Supported SOC name strings: am62, am62a, am67a, am68a, am68pa, am69a
export SOC=<Your SOC name>
source ./setup.sh
```

In any new terminal session, export the environment variables again before using the tools:

```shell
cd edgeai-tidl-tools
export SOC=<Your SOC name>
export TIDL_TOOLS_PATH=$(pwd)/tidl_tools
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TIDL_TOOLS_PATH
export ARM64_GCC_PATH=$(pwd)/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu
```
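Before running the examples, the environment exported above can be sanity-checked from Python. This helper is purely illustrative and not part of the repository; it only verifies that the variables named in the setup steps are set.

```python
import os


def check_env(required=("SOC", "TIDL_TOOLS_PATH", "LD_LIBRARY_PATH")):
    """Return the list of required environment variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]


if __name__ == "__main__":
    missing = check_env()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("Environment looks ready.")
```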

Validate and Benchmark out-of-box examples

Compile and Validate on X86_PC

```shell
mkdir build && cd build
cmake ../examples && make -j && cd ..
source ./scripts/run_python_examples.sh
python3 ./scripts/gen_test_report.py
```

On completion, the following artifacts are generated:

```
model-artifacts/
models/
output_images/
output_binaries/
test_report_pc_${soc}.csv
```
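The generated `test_report_pc_${soc}.csv` can be inspected with standard CSV tooling. The report's column layout is not documented here, so the sketch below only reads the header and counts the result rows — an illustrative helper, not part of the repository.

```python
import csv


def summarize_report(path):
    """Read a generated test report CSV and return its header and row count."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    return {"columns": header, "num_models": len(body)}
```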
| Image Classification | Object Detection | Semantic Segmentation |
| :---: | :---: | :---: |
| <img width="512" height="256" src="./docs/images/out_viz_cls.jpg"> | <img width="512" height="256" src="./docs/images/out_viz_od.jpg"> | <img width="512" height="256" src="./docs/images/out_viz_ss.jpg"> |

Benchmark on TI SOC

```shell
git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
cd edgeai-tidl-tools
git checkout <TAG Compatible with your SDK version>
export SOC=<Your SOC name>
export TIDL_TOOLS_PATH=$(pwd)
# Copy the models and the model artifacts compiled on the PC to the board:
# scp -r <pc>/edgeai-tidl-tools/model-artifacts/  <dev board>/edgeai-tidl-tools/
# scp -r <pc>/edgeai-tidl-tools/models/  <dev board>/edgeai-tidl-tools/
mkdir build && cd build
cmake ../examples && make -j && cd ..
python3 ./scripts/gen_test_report.py
```

Compile and Benchmark Custom Model

User Guide

License

Please see the license under which this repository is made available: [LICENSE](./LICENSE)