Self-Weighted Contrastive Fusion for Deep Multi-View Clustering

Authors: Song Wu, Yan Zheng, Yazhou Ren, Jing He, Xiaorong Pu, Shudong Huang, Zhifeng Hao, Lifang He.

This repository contains the code and data of our paper "Self-Weighted Contrastive Fusion for Deep Multi-View Clustering", published in IEEE Transactions on Multimedia (TMM).


1. Workflow of SCMVC

<img src="https://github.com/SongwuJob/SCMVC/blob/main/figures/workflow.png" width="900" />

The framework of SCMVC. We propose a hierarchical network architecture that separates the consistency objective from the reconstruction objective. Specifically, the feature-learning autoencoders first project the raw data into a low-dimensional latent space $\mathbf{Z}$. Then, two feature MLPs learn view-consensus features $\mathbf{R}$ and global features $\mathbf{H}$, respectively. In particular, a novel self-weighting method adaptively strengthens useful views and weakens unreliable ones during feature fusion, implementing multi-view contrastive fusion. A rough code sketch of this hierarchy follows.
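As a rough illustration only (not the authors' implementation), the PyTorch sketch below wires per-view autoencoders to two MLP heads and fuses the per-view global features with adaptive weights. All layer sizes, module names (`ViewAutoencoder`, `SCMVCSketch`), and the cosine-agreement scoring used for the self-weighting here are assumptions made for this sketch; see `train.py` in this repository for the actual model and loss functions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    """Per-view autoencoder that projects raw features into a low-dimensional latent space Z."""
    def __init__(self, input_dim, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 500), nn.ReLU(), nn.Linear(500, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 500), nn.ReLU(), nn.Linear(500, input_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)  # latent code and reconstruction

class SCMVCSketch(nn.Module):
    """Hierarchical sketch: autoencoders give Z, two MLP heads give R (view-consensus)
    and H (global) features, and the per-view H are fused with adaptive self-weights."""
    def __init__(self, input_dims, latent_dim=128, feature_dim=64):
        super().__init__()
        self.autoencoders = nn.ModuleList([ViewAutoencoder(d, latent_dim) for d in input_dims])
        # Two separate heads decouple the consistency objective from reconstruction.
        self.consensus_mlp = nn.Sequential(nn.Linear(latent_dim, feature_dim), nn.ReLU(), nn.Linear(feature_dim, feature_dim))
        self.global_mlp = nn.Sequential(nn.Linear(latent_dim, feature_dim), nn.ReLU(), nn.Linear(feature_dim, feature_dim))

    def forward(self, views):
        Z, X_hat, R, H = [], [], [], []
        for x, ae in zip(views, self.autoencoders):
            z, x_hat = ae(x)
            Z.append(z)
            X_hat.append(x_hat)
            R.append(F.normalize(self.consensus_mlp(z), dim=1))  # view-consensus features
            H.append(F.normalize(self.global_mlp(z), dim=1))     # per-view global features

        # Illustrative self-weighting: score each view by how well its global features
        # agree with the unweighted mean, then softmax the scores into fusion weights.
        H_stack = torch.stack(H)                                  # (views, samples, dim)
        mean_H = F.normalize(H_stack.mean(dim=0), dim=1)
        scores = torch.stack([(h * mean_H).sum(dim=1).mean() for h in H])
        weights = torch.softmax(scores, dim=0)                    # one weight per view
        fused = F.normalize((weights.view(-1, 1, 1) * H_stack).sum(dim=0), dim=1)
        return Z, X_hat, R, H, fused, weights

# Example with three synthetic views (dimensions are arbitrary).
views = [torch.randn(50, d) for d in (784, 256, 59)]
model = SCMVCSketch(input_dims=[784, 256, 59])
Z, X_hat, R, H, fused, weights = model(views)
print(weights)  # adaptive weight per view
```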

2. Requirements

3. Datasets

4. Usage

Paper:

Self-Weighted Contrastive Fusion for Deep Multi-View Clustering: https://ieeexplore.ieee.org/document/10499831.

To test the trained model, run:

```bash
python test.py
```

To train a new model, run:

```bash
python train.py
```

The experiments are conducted on a Windows PC with an Intel(R) Core(TM) i5-9300H CPU @ 2.40 GHz, 16.0 GB RAM, and a TITAN X GPU (12 GB memory).

5. Experiment Results

We compare our proposed SCMVC with 10 state-of-the-art multi-view clustering methods:

<img src="https://github.com/SongwuJob/SCMVC/blob/main/figures/performance.png" width="900" /> <img src="https://github.com/SongwuJob/SCMVC/blob/main/figures/view_change.png" width="900" />

6. Acknowledgments

Our proposed SCMVC is inspired by MFLVC, GCFAggMVC, and SEM. We thank the authors of these valuable works.

7. Citation

If you use the code or datasets in this repository for your research, please cite the following papers.

```bibtex
@ARTICLE{10499831,
  author={Wu, Song and Zheng, Yan and Ren, Yazhou and He, Jing and Pu, Xiaorong and Huang, Shudong and Hao, Zhifeng and He, Lifang},
  journal={IEEE Transactions on Multimedia},
  title={Self-Weighted Contrastive Fusion for Deep Multi-View Clustering},
  year={2024},
  pages={1-13},
  doi={10.1109/TMM.2024.3387298}
}

@article{xu2024self,
  title={Self-weighted contrastive learning among multiple views for mitigating representation degeneration},
  author={Xu, Jie and Chen, Shuo and Ren, Yazhou and Shi, Xiaoshuang and Shen, Hengtao and Niu, Gang and Zhu, Xiaofeng},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}

@InProceedings{Xu_2022_CVPR,
  author    = {Xu, Jie and Tang, Huayi and Ren, Yazhou and Peng, Liang and Zhu, Xiaofeng and He, Lifang},
  title     = {Multi-Level Feature Learning for Contrastive Multi-View Clustering},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
  pages     = {16051-16060}
}
```

If you have any problems, please contact me at songwu.work@outlook.com.