<p align="center">
  <br>
  <img src="docs/images/3D-Speaker-logo.png" width="400"/>
  <br>
</p>

<div align="center">

<a href=""><img src="https://img.shields.io/badge/OS-Linux-orange.svg"></a>
<a href=""><img src="https://img.shields.io/badge/Python->=3.8-aff.svg"></a>
<a href=""><img src="https://img.shields.io/badge/Pytorch->=1.10-blue"></a>

</div>

<strong>3D-Speaker</strong> is an open-source toolkit for single- and multi-modal speaker verification, speaker recognition, and speaker diarization. All pretrained models are available on ModelScope. We also release 3D-Speaker-Dataset, a large-scale speech corpus, to facilitate research on speech representation disentanglement.

## Benchmark

EER results on the VoxCeleb, CNCeleb, and 3D-Speaker datasets for fully supervised speaker verification:

| Model | Params | VoxCeleb1-O | CNCeleb | 3D-Speaker |
|:------|:------:|:-----------:|:-------:|:----------:|
| Res2Net | 4.03 M | 1.56% | 7.96% | 8.03% |
| ResNet34 | 6.34 M | 1.05% | 6.92% | 7.29% |
| ECAPA-TDNN | 20.8 M | 0.86% | 8.01% | 8.87% |
| ERes2Net-base | 6.61 M | 0.84% | 6.69% | 7.21% |
| CAM++ | 7.2 M | 0.65% | 6.78% | 7.75% |
| ERes2NetV2 | 17.8 M | 0.61% | 6.14% | 6.52% |
| ERes2Net-large | 22.46 M | 0.52% | 6.17% | 6.34% |
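
For reference, EER is the operating point at which the false-acceptance and false-rejection rates are equal (lower is better). A minimal sketch of computing it from verification trial scores, separate from the toolkit's own scoring scripts:

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """Equal error rate: the point where false-acceptance rate == false-rejection rate.

    labels: 1 for same-speaker trials, 0 for different-speaker trials.
    scores: higher means more likely the same speaker.
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where FPR and FNR cross
    return (fpr[idx] + fnr[idx]) / 2
```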

DER results on public and internal multi-speaker datasets for speaker diarization, compared against pyannote.audio and DiariZen_WavLM:

| Test set | 3D-Speaker | pyannote.audio | DiariZen_WavLM |
|:---------|:----------:|:--------------:|:--------------:|
| Aishell-4 | 10.30% | 12.2% | 11.7% |
| Alimeeting | 19.73% | 24.4% | 17.6% |
| AMI_SDM | 21.76% | 22.4% | 15.4% |
| VoxConverse | 11.75% | 11.3% | 28.39% |
| Meeting-CN_ZH-1 | 18.91% | 22.37% | 32.66% |
| Meeting-CN_ZH-2 | 12.78% | 17.86% | 18.0% |
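
For reference, DER aggregates three error types over the total duration of speech (lower is better):

$$
\mathrm{DER} = \frac{T_{\mathrm{miss}} + T_{\mathrm{false\ alarm}} + T_{\mathrm{speaker\ confusion}}}{T_{\mathrm{total\ speech}}}
$$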

## Quickstart

### Install 3D-Speaker

```sh
git clone https://github.com/modelscope/3D-Speaker.git && cd 3D-Speaker
conda create -n 3D-Speaker python=3.8
conda activate 3D-Speaker
pip install -r requirements.txt
```
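
If installation succeeded, a quick sanity check of the environment can confirm the version requirements from the badges above; this script is illustrative and not part of the toolkit:

```python
# check_env.py -- illustrative sanity check for the 3D-Speaker environment.
import sys
import torch

assert sys.version_info >= (3, 8), "3D-Speaker requires Python >= 3.8"
torch_version = tuple(int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert torch_version >= (1, 10), "3D-Speaker requires PyTorch >= 1.10"
print(f"Python {sys.version.split()[0]}, PyTorch {torch.__version__}: OK")
```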

### Running experiments

```sh
# Speaker verification: ERes2NetV2 on the 3D-Speaker dataset
cd egs/3dspeaker/sv-eres2netv2/
bash run.sh
# Speaker verification: CAM++ on the 3D-Speaker dataset
cd egs/3dspeaker/sv-cam++/
bash run.sh
# Speaker verification: ECAPA-TDNN on the 3D-Speaker dataset
cd egs/3dspeaker/sv-ecapa/
bash run.sh
# Self-supervised speaker verification: SDPN on the VoxCeleb dataset
cd egs/voxceleb/sv-sdpn/
bash run.sh
# Audio-only and multimodal speaker diarization
cd egs/3dspeaker/speaker-diarization/
bash run_audio.sh
bash run_video.sh
# Language identification
cd egs/3dspeaker/language-idenitfication
bash run.sh
```
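
Each verification recipe trains a speaker encoder and scores trials by comparing fixed-dimensional embeddings, typically with cosine similarity. A minimal scoring sketch; the embedding files and the 0.6 threshold are illustrative, not the recipes' actual outputs:

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings; higher means more similar."""
    return float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Hypothetical embeddings exported by a recipe (e.g. via np.save).
enroll = np.load("enroll_embedding.npy")
test = np.load("test_embedding.npy")
score = cosine_score(enroll, test)
# The decision threshold is tuned on a development set; 0.6 is illustrative only.
print("same speaker" if score > 0.6 else "different speaker", round(score, 4))
```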

### Inference using pretrained models from ModelScope

All pretrained models are released on ModelScope.

```sh
# Install modelscope
pip install modelscope
# ERes2Net trained on 200k labeled speakers
model_id=iic/speech_eres2net_sv_zh-cn_16k-common
# ERes2NetV2 trained on 200k labeled speakers
model_id=iic/speech_eres2netv2_sv_zh-cn_16k-common
# CAM++ trained on 200k labeled speakers
model_id=iic/speech_campplus_sv_zh-cn_16k-common
# Run CAM++ or ERes2Net inference
python speakerlab/bin/infer_sv.py --model_id $model_id
# Run batch inference
python speakerlab/bin/infer_sv_batch.py --model_id $model_id --wavs $wav_list

# SDPN trained on VoxCeleb
model_id=iic/speech_sdpn_ecapa_tdnn_sv_en_voxceleb_16k
# Run SDPN inference
python speakerlab/bin/infer_sv_ssl.py --model_id $model_id

# Run diarization inference
python speakerlab/bin/infer_diarization.py --wav [wav_list OR wav_path] --out_dir $out_dir
# Enable overlap detection
python speakerlab/bin/infer_diarization.py --wav [wav_list OR wav_path] --out_dir $out_dir --include_overlap --hf_access_token $hf_access_token
```
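
The pretrained models can also be called through ModelScope's Python pipeline interface instead of the bundled scripts. A minimal sketch, assuming the `speaker-verification` task name from the ModelScope model cards; the wav paths are placeholders:

```python
from modelscope.pipelines import pipeline

# Load the CAM++ verification model from ModelScope (downloads on first use).
sv_pipeline = pipeline(
    task="speaker-verification",
    model="iic/speech_campplus_sv_zh-cn_16k-common",
)

# Compare two utterances (placeholder paths); the result typically contains a
# similarity score and an accept/reject decision.
result = sv_pipeline(["speaker1_a.wav", "speaker1_b.wav"])
print(result)
```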

## Overview of Content

## What's new :fire:

## Contact

If you have any comments or questions about 3D-Speaker, please contact us by:

## License

3D-Speaker is released under the Apache License 2.0.

## Acknowledgements

3D-Speaker contains third-party components and code modified from open-source repositories, including: Speechbrain, Wespeaker, D-TDNN, DINO, VICReg, TalkNet-ASD, Ultra-Light-Fast-Generic-Face-Detector-1MB, and pyannote.audio.

## Citations

If you find this repository useful, please consider giving it a star :star: and a citation :t-rex::

```bibtex
@inproceedings{chen20243d,
  title={3D-Speaker-Toolkit: An Open Source Toolkit for Multi-modal Speaker Verification and Diarization},
  author={Chen, Yafeng and Zheng, Siqi and Wang, Hui and Cheng, Luyao and others},
  booktitle={ICASSP},
  year={2025}
}
```