Trusted Multi-View Classification

This repository contains the code of our ICLR 2021 paper Trusted Multi-View Classification [Chinese introduction] [Chinese explanation] and the code of our IEEE TPAMI 2022 paper Trusted Multi-View Classification with Dynamic Evidential Fusion. We will gradually improve and extend the code. Here we provide a demo and detailed instructions for constructing trustworthy multi-view/multi-modal classification algorithms.

Quick Start

To convert your networks into a trusted multimodal classification model, we recommend following the steps below:
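As a minimal sketch of the core idea (the function names and shapes here are illustrative, not the repository's actual API): each view's network outputs non-negative evidence, the evidence is turned into a subjective opinion via a Dirichlet distribution, and opinions from different views are fused with the reduced Dempster's rule of combination described in the paper.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Turn per-class non-negative evidence (e.g. ReLU/softplus of logits)
    into a subjective opinion: per-class belief masses plus an uncertainty mass."""
    evidence = np.asarray(evidence, dtype=float)
    num_classes = evidence.size
    alpha = evidence + 1.0                  # Dirichlet parameters
    strength = alpha.sum()                  # Dirichlet strength S
    belief = evidence / strength            # b_k = e_k / S
    uncertainty = num_classes / strength    # u = K / S, so belief.sum() + u == 1
    return belief, uncertainty

def combine_opinions(b1, u1, b2, u2):
    """Fuse two views' opinions with the reduced Dempster's rule of combination."""
    # Conflict: belief mass the two views assign to *different* classes.
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 - conflict
    belief = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    uncertainty = (u1 * u2) / scale
    return belief, uncertainty

# Example: two views of the same 2-class sample.
b1, u1 = evidence_to_opinion([4.0, 1.0])
b2, u2 = evidence_to_opinion([3.0, 2.0])
b, u = combine_opinions(b1, u1, b2, u2)   # fused beliefs and uncertainty
```

The fused opinion still satisfies `b.sum() + u == 1` and can be mapped back to Dirichlet parameters for prediction; more than two views are handled by folding them in pairwise.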

This method is also suitable for other scenarios that require trusted integration, such as ensemble learning and multi-view learning.

Citation

If you find TMC helpful in your research, please cite our papers:

@inproceedings{han2021trusted,
  title={Trusted Multi-View Classification},
  author={Zongbo Han and Changqing Zhang and Huazhu Fu and Joey Tianyi Zhou},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=OOsR8BzCnl5}
}
@article{han2022trusted,
  title={Trusted Multi-View Classification with Dynamic Evidential Fusion},
  author={Han, Zongbo and Zhang, Changqing and Fu, Huazhu and Zhou, Joey Tianyi},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}

Acknowledgement

We thank the authors of EDL. Besides cross-entropy, EDL also provides other loss functions for quantifying classification uncertainty.
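As one concrete example (a minimal sketch, not the repository's implementation), the type II maximum-likelihood loss from the EDL paper replaces cross-entropy on softmax probabilities with an expectation under the Dirichlet distribution induced by the evidence:

```python
import math

def evidential_mle_loss(evidence, label):
    """Type II maximum-likelihood loss from EDL:
    L = sum_k y_k * (log(S) - log(alpha_k)),
    where alpha = evidence + 1 and S = sum(alpha)."""
    alpha = [e + 1.0 for e in evidence]
    strength = sum(alpha)
    return sum(y * (math.log(strength) - math.log(a))
               for y, a in zip(label, alpha))

# One 2-class sample with one-hot label [1, 0] and evidence [4, 1]:
loss = evidential_mle_loss([4.0, 1.0], [1, 0])  # log(7/5), about 0.336
```

More evidence for the correct class increases alpha_k relative to S and drives the loss toward zero, while flat evidence leaves the loss high, which is what lets the model express uncertainty.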

Questions?

Please report any bugs and we will address them as soon as possible. For any additional questions, feel free to email zongbo AT tju DOT edu DOT cn.

Related works

Many interesting works have been inspired by this paper; related follow-up works include: