<table class="sphinxhide"> <tr width="100%"> <td align="center"><img src="https://raw.githubusercontent.com/Xilinx/Image-Collateral/main/xilinx-logo.png" width="30%"/><h1>Vitis-AI™ Tutorials</h1> <a href="https://www.xilinx.com/products/design-tools/vitis.html">See Vitis™ Development Environment on xilinx.com</br></a> <a href="https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html">See Vitis-AI™ Development Environment on xilinx.com</a> </td> </tr> </table> <table> <thead> <tr> <th width="35%" align="center"><h3><b>Tutorial Name</b></hr></th> <th width="15%" align="center"><h3><b>Latest Supported Vitis AI Version</b></hr></th> <th width="50%" align="center"><h3><b>Description</b></hr></th> </tr> </thead> <tbody> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.5/Tutorials/RESNET18/">Running ResNet18 CNN Through Vitis AI 3.5 Flow for ML</a> </td> <td align="center">3.5</td> <td>In this Deep Learning (DL) tutorial, you will take a public domain CNN like ResNet18, already trained on the ImageNet dataset, and run it through the Vitis AI 3.5 stack to run ML inference on FPGA devices. You will use Keras on Tensorflow 2.x. Supported boards are: ZCU104, ZCU102, VCK190, VEK280 and Alveo V70. </td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.5/Tutorials/PyTorch-ResNet18/">ResNet18 in PyTorch from Vitis AI Library</a> </td> <td align="center">3.5</td> <td>In this Deep Learning (DL) tutorial, you will take the ResNet18 CNN, from the Vitis AI 3.5 PyTorch Library, and use it to classify the different colors of the "car object" inside images by running the inference application on FPGA devices. Supported boards are: ZCU104, ZCU102, VCK190, VEK280 and Alveo V70. </td> </tr> <tr> <td> <a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.5/Tutorials/TF2-Vitis-AI-Optimizer/">TensorFlow2 Vitis AI Optimizer: Getting Started</a> </td> <td align="center">3.5</td> <td>Get started with the <a href="https://docs.xilinx.com/r/en-US/ug1414-vitis-ai/Vitis-AI-Optimizer">Vitis AI Optimizer (release 3.5)</a> in the TensorFlow2 (TF2) environment with Keras.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.0/Tutorials/Keras_GoogleNet_ResNet/">Deep Learning with Custom GoogleNet and ResNet in Keras and Xilinx Vitis AI</a></td> <td align="center">3.0</td> <td>Quantize in fixed point some custom CNNs and deploy them on the Xilinx ZCU102 board, using Keras and the Xilinx7Vitis AI tool chain based on TensorFlow (TF).</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.0/Tutorials/pytorch-subgraphs/">Partitioning Vitis AI SubGraphs on CPU/DPU</a></td> <td align="center">3.0</td> <td>Learn how to deploy a CNN on the Xilinx <a href="https://www.xilinx.com/products/boards-and-kits/vck190.html">VCK190</a> board using Vitis AI.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.0/Tutorials/Keras_FCN8_UNET_segmentation">FCN8 and UNET Semantic Segmentation with Keras and Xilinx Vitis AI</a></td> <td align="center">3.0</td> <td>Train the FCN8 and UNET Convolutional Neural Networks (CNNs) for Semantic Segmentation in Keras adopting a small custom dataset, quantize the floating point weights files to an 8-bit fixed point representation, and then deploy them on the Xilinx ZCU102 board using Vitis AI.</td> </tr> <tr> <td> <a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/3.0/Tutorials/18-mpsocdpu-pre-post-pl-acc/">Pre- and Post-processing Accelerators for Semantic 
Segmentation with Unet CNN on MPSoC DPU</a> </td> <td align="center">3.0</td> <td>A complete example of how using the <a href="https://github.com/Xilinx/Vitis-AI/tree/3.0/demo/Whole-App-Acceleration">WAA</a> flow targeting the MPSoC ZCU102 board. </td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.5/Tutorials/Kaggle_ImageNet/">Using the Kaggle ImageNet Subset for Training Neural Networks</a></td> <td align="center">2.5</td> <td>Demonstrates how to use the Kaggle ImageNet Subset for training neural networks for developers and enthusiasts with a non-edu domain who are unable to obtain the ImageNet dataset directly.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.5/Tutorials/RFModulation_Recognition/">RF Modulation Recognition with Vitis AI</a></td> <td align="center">2.5</td> <td>Discusses using Deep Neural Networks to perform automatic modulation recognition so that the receiver may be able to detect and demodulate the signal without this explicit knowledge of the modulation type and encoding method.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.0/Tutorials/Vitis-AI-Vivado-TRD/README.md">Leveraging the Vitis™ AI DPU in the Vivado® Workflow</a></td> <td align="center">2.0</td> <td>Build the Vitis AI Targeted Reference Design (TRD) using the Vivado flow and learn how to build a PetaLinux image from the ZCU102 BSP that is provided in the TRD archive.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.0/Tutorials/caffe_cats_vs_dogs/README.md">Quantization and Pruning of AlexNet CNN trained in Caffe with Cats-vs-Dogs dataset</a></td> <td align="center">2.0</td> <td>Train, prune, and quantize a modified version of the AlexNet convolutional neural network (CNN) with the Kaggle Dogs vs. Cats dataset in order to deploy it on the Xilinx® ZCU102 board.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.0/Tutorials/Vitis-AI-on-VCK5000-ES-Board/">Vitis AI on VCK5000 Card</a></td> <td align="center">2.0</td> <td>Start from card installation and go through a step-by-step workflow to run the first Vitis AI sample on a VCK5000 card.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.0/Tutorials/VCK190_CUSTOM_LAMBDA_OP/">VCK190 Custom Lambda Operator</a></td> <td align="center">2.0</td> <td>The general concept behind the custom operator flow is to make Vitis AI and the DPU more extensible—both for supporting custom layers as well as framework layers that are currently unsupported in the toolchain. The custom operator flow enables you to define layers which are unsupported, and ultimately deploy those layers either on the CPU or an accelerator.</td> </tr> <tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/2.0/Tutorials/kv260_lidar_cam_fusion/">LIDAR + Camera Fusion on KV260</a></td> <td align="center">2.0</td> <td>Shows you how to install Ubuntu on the KV260 then build ROS, bring in multiple sensors, and deploy FPGA-accelerated neural network to process the data before displaying the data using RViz. 
All of this is possible without ever using FPGA tools!</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Introduction/README.md">Introduction to Vitis AI</a></td> <td align="center">1.4</td> <td>This tutorial puts into practice the concepts of FPGA acceleration of Machine Learning and illustrates how to quickly get started deploying both pre-optimized and customized ML models on Xilinx devices.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/02-MNIST_classification_tf/README.md">MNIST Classification using Vitis AI and TensorFlow</a></td> <td align="center">1.4</td> <td>Learn the Vitis AI TensorFlow design process for creating a compiled ELF file that is ready for deployment on the Xilinx DPU accelerator from a simple network model built using Python. This tutorial uses the MNIST test dataset.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/03-using_densenetx/README.md">Using DenseNetX on the Xilinx DPU Accelerator</a></td> <td align="center">1.4</td> <td>Learn about the Vitis AI TensorFlow design process and how to go from a Python description of the network model to running a compiled model on the Xilinx DPU accelerator.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.3/Design_Tutorials/06-densenetx_DPUv3">Using DenseNetX on the Xilinx Alveo U50 Accelerator Card</a></td> <td align="center">1.3</td> <td>Implement a convolutional neural network (CNN) and run it on the DPUv3E accelerator IP.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/07-yolov4-tutorial/readme.md">Vitis AI YOLOv4</a></td> <td align="center">1.4</td> <td>Learn how to train, evaluate, convert, quantize, compile, and deploy YOLOv4 on Xilinx devices using Vitis AI.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/08-tf2_flow/README.md">TensorFlow2 and Vitis AI design flow</a></td> <td align="center">1.4</td> <td>Learn about the TF2 flow for Vitis AI. This tutorial walks you through the TF2 flow, including converting a dataset into TFRecords, optimizing with a plug-in, and compiling and executing the model on a Xilinx ZCU102 board or Xilinx Alveo U50 Data Center Accelerator card.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/09-mnist_pyt/README.md">PyTorch flow for Vitis AI</a></td> <td align="center">1.4</td> <td>Introduces the Vitis AI PyTorch design process and illustrates how to go from a Python description of the network model to running a compiled model on a Xilinx evaluation board.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/10-RF_modulation_recognition/README.md">RF Modulation Recognition with TensorFlow 2</a></td> <td align="center">1.4</td> <td>Machine learning applications are certainly not limited to image processing!
Learn how to apply machine learning with Vitis AI to the recognition of RF modulation from signal data.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/11-tf2_var_autoenc/README.md">Denoising Variational Autoencoder with TensorFlow2 and Vitis-AI</a></td> <td align="center">1.4</td> <td>The Xilinx DPU can accelerate the execution of many of the operations and layers commonly found in convolutional neural networks, but occasionally we need to execute models that have fully custom layers. One such layer is the sampling function of a convolutional variational autoencoder. The DPU can accelerate the convolutional encoder and decoder but not the statistical sampling layer, which must be executed in software on a CPU. This tutorial uses the variational autoencoder as an example of how to approach this situation.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/12-Alveo-U250-TF2-Classification/README.md">Alveo U250 TF2 Classification</a></td> <td align="center">1.4</td> <td>Demonstrates image classification using the Alveo U250 card with Vitis AI 1.4 and the TensorFlow 2.x framework.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/13-vdpu-pre-post-pl-acc/README.md">Pre- and Post-processing PL Accelerators for ML with Versal DPU</a></td> <td align="center">1.4</td> <td>A complete example of how to use the <a href="https://github.com/Xilinx/Vitis-AI/tree/master/demo/Whole-App-Acceleration">WAA</a> flow with Vitis 2020.2 targeting the VCK190 PP board.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/14-caffe-ssd-pascal/README.md">Caffe SSD</a></td> <td align="center">1.4</td> <td>The topics covered in this tutorial include training, quantizing, and compiling SSD using the PASCAL VOC 2007/2012 datasets, the Caffe framework, and Vitis AI tools. The model is then deployed on a Xilinx® ZCU102 target board and could also be deployed on other Xilinx development board targets (for example, Kria Starter Kit, ZCU104, and VCK190).</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/15-caffe-segmentation-cityscapes/README.md">ML Caffe Segmentation</a></td> <td align="center">1.4</td> <td>Describes how to train, quantize, compile, and deploy various segmentation networks using Vitis AI, including ENet, ESPNet, FPN, UNet, and a reduced-compute version of UNet that we'll call Unet-lite. The training dataset used for this tutorial is the Cityscapes dataset, and the Caffe framework is used for training the models.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/16-profiler_introduction/README.md">Introduction Tutorial to the Vitis AI Profiler</a></td> <td align="center">1.4</td> <td>Introduces the Vitis AI Profiler tool flow and illustrates how to profile an example from the Vitis AI runtime (VART).</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/17-PyTorch-CityScapes-Pruning/README.md">PyTorch CityScapes Pruning</a></td> <td align="center">1.4</td> <td>A tutorial on using the Vitis AI Optimizer to prune the Vitis AI Model Zoo FPN Resnet18 segmentation model and a publicly available UNet model against a reduced-class version of the Cityscapes dataset.
The tutorial aims to provide a starting point and a demonstration of the PyTorch pruning capabilities for segmentation models.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Feature_Tutorials/tf2_quant_fine_tune/README.md">Fine-Tuning TensorFlow2 quantized model</a></td> <td align="center">1.4</td> <td>Learn how to implement Vitis-AI quantization fine-tuning for TensorFlow 2.3.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Feature_Tutorials/Vitis-AI-based-Deployment-Flow-on-VCK190/README.md">Vitis AI based Deployment Flow on VCK190</a></td> <td align="center">1.4</td> <td>DPU integration with the VCK190 production platform.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Feature_Tutorials/04-tensorflow-ai-optimizer/README.md">TensorFlow AI Optimizer Example Using Low-level Coding Style</a></td> <td align="center">1.4</td> <td>Use the AI Optimizer for TensorFlow to prune an AlexNet CNN by 80% while maintaining the original accuracy.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.3/Feature_Tutorials/01-freezing_a_keras_model">Freezing a Keras Model for use with Vitis AI (UG1380)</a></td> <td align="center">1.3</td> <td>Freeze a Keras model by generating a binary protobuf (.pb) file.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.3/Feature_Tutorials/02-profiling-example">Profiling a CNN Using DNNDK or VART with Vitis AI (UG1487)</a></td> <td align="center">1.3</td> <td>Profile a CNN application running on the ZCU102 target board with Vitis AI.</td> </tr>
<tr> <td><a href="https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.3/Feature_Tutorials/03-edge-to-cloud">Moving Seamlessly between Edge and Cloud with Vitis AI (UG1488)</a></td> <td align="center">1.3</td> <td>Compile and run the same design and application code on either the Alveo U50 data center accelerator card or the Zynq UltraScale+™ MPSoC ZCU102 evaluation board.</td> </tr>
</tbody>
</table>

<p class="sphinxhide" align="center"><sub>Copyright © 2022–2023 Advanced Micro Devices, Inc.</sub></p>
<p class="sphinxhide" align="center"><sup><a href="https://www.amd.com/en/corporate/copyright">Terms and Conditions</a></sup></p>