
<div align="center"> <a href="http://camma.u-strasbg.fr/"> <img src="https://github.com/CAMMA-public/rendezvous/raw/main/files/CammaLogo.png" width="18%"> </a> </div> <br/>


Attention Tripnet: Exploiting attention mechanisms for the recognition of surgical action triplets in endoscopic videos

<i>C.I. Nwoye and N. Padoy</i>

This repo contains an ablation model of the Rendezvous network, known as Attention Tripnet. <br /> Read the paper on ArXiv or in the journal publication (Medical Image Analysis).

<br />

Introduction

Recognising an action as a triplet of subject, verb, and object provides truly fine-grained and comprehensive information on surgical activities. In natural vision, this is modeled as <subject, verb, object>, representing human-object interaction (HOI). In surgical computer vision, the information is presented as <instrument, verb, target>. Triplet recognition involves the simultaneous recognition of all three triplet components and correctly establishing the data association between them.

Considerable effort has been made to recognize surgical triplets directly from videos. The predominant approaches include the Tripnet and Rendezvous models, which leverage class activations and attention mechanisms.

<br /> <img src="files/attentiontripnet.png" width="98%" >

Fig 1: Architecture of Attention Tripnet.

The first effort at exploiting attention mechanisms in this direction led to the development of a Class Activation Guided Attention Mechanism (CAGAM) to better detect the verb and target components of the triplet, which are instrument-centric. CAGAM is a form of spatial attention that propagates attention from a known context feature to an unknown one, thereby enhancing the unknown context for relevant pattern discovery. Usually the known context feature is a class activation map (CAM). In this work, CAGAM explicitly uses tool type and location information to highlight discriminative features for verbs and targets respectively. Integrating CAGAM into the state-of-the-art Tripnet model results in a new model, known as Attention Tripnet, with improved performance.

<br /> <img src="files/cagam.png" width="98%" >

Fig 2: Overview of CAGAM.
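To make Fig 2 concrete, below is a minimal sketch of CAM-guided spatial attention in PyTorch. It is not the repository's CAGAM implementation: the class and layer names are invented, and for simplicity the known context (the CAM) is assumed to have already been projected to the same channel count as the unknown context feature.

```python
import torch
import torch.nn as nn

class CAMGuidedAttention(nn.Module):
    """Sketch of spatial attention propagated from a known context (a CAM)
    to an unknown context feature (e.g. verb or target features)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, unknown: torch.Tensor, cam: torch.Tensor) -> torch.Tensor:
        # unknown: (B, C, H, W) features to enhance; cam: (B, C, H, W) known context.
        b, c, h, w = unknown.shape
        q = self.query(cam).flatten(2).transpose(1, 2)         # (B, HW, C/2)
        k = self.key(unknown).flatten(2)                       # (B, C/2, HW)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # spatial affinity (B, HW, HW)
        v = self.value(unknown).flatten(2).transpose(1, 2)     # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return unknown + self.gamma * out  # residual enhancement of the unknown context
```

For instance, `CAMGuidedAttention(64)(verb_features, instrument_cam)` would return verb features re-weighted by attention derived from the instrument activation maps.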

<br />

Model Overview

The Attention-Tripnet model is composed of:

- a shared feature-extraction backbone (ResNet-18);
- an instrument branch that detects instruments via class activation maps (CAM);
- the CAGAM module, which uses the instrument CAMs to resolve the verb and target components;
- a 3D interaction space that associates the instrument, verb, and target predictions into triplets.
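A simplified skeleton of how these pieces could fit together is shown below. This is an illustrative sketch only: the module names and heads are assumptions, not the repo's actual implementation (the class counts follow CholecT50: 6 instruments, 10 verbs, 15 targets, 100 triplets).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class AttentionTripnetSketch(nn.Module):
    """Illustrative skeleton; names and heads do not match the repo's code."""
    def __init__(self, n_instruments=6, n_verbs=10, n_targets=15, n_triplets=100):
        super().__init__()
        base = resnet18()  # no pretrained weights for this sketch
        self.backbone = nn.Sequential(*list(base.children())[:-2])  # -> (B, 512, h, w)
        self.instrument_cam = nn.Conv2d(512, n_instruments, 1)  # CAM-style instrument head
        self.verb_head = nn.Conv2d(512, n_verbs, 1)
        self.target_head = nn.Conv2d(512, n_targets, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Stand-in for the 3D interaction space: a simple linear association head.
        self.association = nn.Linear(n_instruments + n_verbs + n_targets, n_triplets)

    def forward(self, x):
        f = self.backbone(x)
        cam_i = self.instrument_cam(f)
        # CAGAM would refine verb/target features using cam_i here (see sketch above).
        logits_i = self.pool(cam_i).flatten(1)
        logits_v = self.pool(self.verb_head(f)).flatten(1)
        logits_t = self.pool(self.target_head(f)).flatten(1)
        logits_ivt = self.association(torch.cat([logits_i, logits_v, logits_t], dim=1))
        return logits_i, logits_v, logits_t, logits_ivt
```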

<br />

We hope this repo will help researchers and engineers develop surgical action recognition systems. For algorithm development, we provide training data, baseline models, and evaluation methods to create a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.

<br />

Performance

Results Table

<table>
  <tr><th colspan="3">Components AP</th><th colspan="3">Association AP</th></tr>
  <tr><th>AP<sub>I</sub></th><th>AP<sub>V</sub></th><th>AP<sub>T</sub></th><th>AP<sub>IV</sub></th><th>AP<sub>IT</sub></th><th>AP<sub>IVT</sub></th></tr>
  <tr><td>92.0</td><td>60.2</td><td>38.5</td><td>31.1</td><td>29.8</td><td>23.4</td></tr>
</table>
<br />

Video Demo

The usefulness of CAGAM is demonstrated in the second phase of the video:

<a href="https://www.youtube.com/watch?v=d_yHdJtCa98&t=61s"><img src="https://github.com/CAMMA-public/rendezvous/raw/main/files/vid.png" width="20.2%" ></a>

Available on YouTube.

<br />

Installation

Requirements

The model depends on the following libraries:

  1. sklearn
  2. PIL
  3. Python >= 3.5
  4. ivtmetrics
  5. Developer's framework:
     - For TensorFlow v1: TF >= 1.10
     - For TensorFlow v2: TF >= 2.1
     - For PyTorch: PyTorch >= 1.10.1 and TorchVision >= 0.11
<br />

System Requirements:

The code has been tested on the Linux operating system. It runs on both CPU and GPU. Equivalents of basic OS commands such as unzip, cd, and wget will be needed to run it on Windows or macOS.

<br />

Quick Start

<br />

Dataset Zoo

<br />

Data Preparation

<br />

Evaluation Metrics

The ivtmetrics library computes AP for triplet recognition. It also supports evaluation of the recognition of the individual triplet components.

pip install ivtmetrics

or

conda install -c nwoye ivtmetrics

A usage guide can be found on pypi.org.
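As a quick illustration, batch-level AP computation with ivtmetrics looks roughly like this (a sketch following the library's documented usage; consult the pypi.org guide for authoritative details, since the exact return keys are assumptions here):

```python
import numpy as np
import ivtmetrics

metric = ivtmetrics.Recognition(num_class=100)  # 100 triplet classes in CholecT50

# Dummy batch: binary ground-truth labels and predicted probabilities per frame.
labels = np.random.randint(0, 2, size=(8, 100))
scores = np.random.rand(8, 100)

metric.update(labels, scores)        # accumulate a batch of predictions
results = metric.compute_AP("ivt")   # component options include "i", "v", "t", "iv", "it"
print(results["mAP"])                # mean AP over the triplet classes
```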

<br />

Running the Model

The code can be run in training mode (-t), testing mode (-e), or both (-t -e) if you want to evaluate at the end of training:

<br />

Training on CholecT45/CholecT50 Dataset

Simple training on CholecT50 dataset:

python run.py -t  --data_dir="/path/to/dataset" --dataset_variant=cholect50 --version=1

You can include additional options such as the number of epochs, batch size, cross-validation and evaluation fold, weight initialization, learning rates for all subtasks, etc.:

python3 run.py -t -e  --data_dir="/path/to/dataset" --dataset_variant=cholect45-crossval --kfold=1 --epochs=180 --batch=64 --version=2 -l 1e-2 1e-3 1e-4 --pretrain_dir='path/to/imagenet/weights'

All the flags can be seen in the run.py file. The experimental setup of the published model is described in the paper.

<br />

Testing

python3 run.py -e --data_dir="/path/to/dataset" --dataset_variant=cholect45-crossval --kfold 3 --batch 32 --version=1 --test_ckpt="/path/to/model-k3/weights"
<br />

Training on Custom Dataset

Adding a custom dataset is quite simple; what you need to do is:
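As a starting point, a custom loader would typically return each frame together with binary label vectors for the instrument, verb, target, and triplet components. Below is a hypothetical PyTorch sketch; the file layout and label format are assumptions, not the repo's actual interface:

```python
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class CustomTripletDataset(Dataset):
    """Hypothetical loader: one image per frame plus per-frame binary label vectors."""
    def __init__(self, img_dir, label_file, transform=None):
        self.img_dir = img_dir
        self.transform = transform
        # Assumed format: "filename,i_labels|v_labels|t_labels|ivt_labels" per line,
        # where each label group is a space-separated binary vector.
        with open(label_file) as f:
            self.records = [line.strip().split(",", 1) for line in f if line.strip()]

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        name, labels = self.records[idx]
        img = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        if self.transform:
            img = self.transform(img)
        i, v, t, ivt = (np.array(g.split(), dtype=np.float32)
                        for g in labels.split("|"))
        return img, (i, v, t, ivt)
```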

<br />

Model Zoo

PyTorch

| Network | Base | Resolution | Dataset | Data split | Link |
|---|---|---|---|---|---|
| Attention Tripnet | ResNet-18 | Low | CholecT50 | RDV | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | Challenge | [Download] |
| Attention Tripnet | ResNet-18 | High | CholecT50 | Challenge | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | crossval k1 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | crossval k2 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | crossval k3 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | crossval k4 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | crossval k5 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT45 | crossval k1 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT45 | crossval k2 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT45 | crossval k3 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT45 | crossval k4 | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT45 | crossval k5 | [Download] |

<br />

TensorFlow v1

| Network | Base | Resolution | Dataset | Data split | Link |
|---|---|---|---|---|---|
| Attention Tripnet | ResNet-18 | High | CholecT50 | RDV | [Download] |
| Attention Tripnet | ResNet-18 | High | CholecT50 | Challenge | [Download] |
<br />

TensorFlow v2

| Network | Base | Resolution | Dataset | Data split | Link |
|---|---|---|---|---|---|
| Attention Tripnet | ResNet-18 | High | CholecT50 | RDV | [Download] |
| Attention Tripnet | ResNet-18 | Low | CholecT50 | RDV | [Download] |
| Attention Tripnet | ResNet-18 | High | CholecT50 | Challenge | [Download] |

Model weights are released periodically because some trainings are still in progress.

<br /><br />

License

This code, the models, and the datasets are available for non-commercial scientific research purposes under the CC BY-NC-SA 4.0 LICENSE attached as the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party code is subject to its respective licenses.

<br />

Related Resources

<br />

Citation

If you find this repo useful in your project or research, please consider citing the relevant publications:

@article{nwoye2021rendezvous,
  title={Rendezvous: Attention Mechanisms for the Recognition of Surgical Action Triplets in Endoscopic Videos},
  author={Nwoye, Chinedu Innocent and Yu, Tong and Gonzalez, Cristians and Seeliger, Barbara and Mascagni, Pietro and Mutter, Didier and Marescaux, Jacques and Padoy, Nicolas},
  journal={Medical Image Analysis},
  volume={78},
  pages={102433},
  year={2022}
}
@article{nwoye2022data,
  title={Data Splits and Metrics for Benchmarking Methods on Surgical Action Triplet Datasets},
  author={Nwoye, Chinedu Innocent and Padoy, Nicolas},
  journal={arXiv preprint arXiv:2204.05235},
  year={2022}
}
@inproceedings{nwoye2020recognition,
   title={Recognition of instrument-tissue interactions in endoscopic videos via action triplets},
   author={Nwoye, Chinedu Innocent and Gonzalez, Cristians and Yu, Tong and Mascagni, Pietro and Mutter, Didier and Marescaux, Jacques and Padoy, Nicolas},
   booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
   pages={364--374},
   year={2020},
   organization={Springer}
}
@article{nwoye2022cholectriplet2021,
  title={CholecTriplet2021: a benchmark challenge for surgical action triplet recognition},
  author={Nwoye, Chinedu Innocent and Alapatt, Deepak and Vardazaryan, Armine ... Gonzalez, Cristians and Padoy, Nicolas},
  journal={arXiv preprint arXiv:2204.04746},
  year={2022}
}

This repo is maintained by CAMMA. Comments and suggestions on the models are welcome. Check this page for updates.