speechmetrics

This repository is a wrapper around several freely available implementations of objective metrics for estimating the quality of speech signals. It includes both relative metrics, which require a reference signal, and absolute metrics, which do not.

If you find speechmetrics useful, you are welcome to cite the original papers for the corresponding metrics, since this is just a wrapper around the implementations that were kindly provided by the original authors.

Please let me know if you know of a metric with an available Python implementation that could be included here!

Installation

As of our recent tests, installation goes smoothly on Ubuntu, but there may be some compiler errors for pypesq on macOS.

Note that mosnet appears to be incompatible with numpy >= 1.24, so pin numpy first:

pip install numpy==1.23.4
pip install git+https://github.com/aliutkus/speechmetrics#egg=speechmetrics

Usage

speechmetrics has been designed to be easily used in a modular way. All you need to do is specify the metrics you want to use, and it will load them.

The process is to:

  1. Load the metrics you want with the load function from the root of the package, which takes two arguments:

    • metrics: str or list of str. The available metrics that match this argument will be automatically loaded; matching is relative to the structure of the speechmetrics package. For instance:
      • 'absolute' will match all absolute metrics
      • 'absolute.srmr' or 'srmr' will only match SRMR
      • '' will match all
    • window: float or None. The length in seconds of the windows on which to compute the actual scores. If None, the whole signal is considered.
      my_metrics = speechmetrics.load('relative', window=5)
  2. Just call the object returned by load with your estimated file (and your reference, in the case of relative metrics):
    scores = my_metrics(path_to_estimate, path_to_reference)
    NumPy arrays are also supported, but the corresponding sampling rate needs to be specified:
    scores = my_metrics(estimate_array, reference_array, rate=sampling_rate)

WARNING: The convention for relative metrics is to provide the estimate first and the reference second. This is the opposite of the general convention. The advantage is that you can still call absolute metrics with the same code; they will simply ignore the reference.
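Conceptually, the per-window scoring described above can be sketched as follows. This is a minimal illustration with non-overlapping windows and a stand-in score_fn, not the actual speechmetrics implementation, which may differ in details such as window overlap:

```python
import numpy as np

def windowed_scores(estimate, reference, rate, window, score_fn):
    """Cut both signals into non-overlapping windows of `window`
    seconds and score each window separately.

    `score_fn` stands in for any relative metric taking
    (estimate, reference) arrays and returning a float.
    """
    hop = int(window * rate)
    n = min(len(estimate), len(reference))
    return [score_fn(estimate[s:s + hop], reference[s:s + hop])
            for s in range(0, n - hop + 1, hop)]

# Toy usage with a stand-in "metric": negative mean squared error
rate = 8000
t = np.arange(5 * rate) / rate
reference = np.sin(2 * np.pi * 440 * t)
estimate = reference + 0.01 * np.random.randn(len(t))
mse = lambda e, r: float(-np.mean((e - r) ** 2))
scores = windowed_scores(estimate, reference, rate, 1.0, mse)
print(len(scores))  # 5 windows for a 5-second signal with window=1.0
```

With window=None, the real package instead computes one score over the whole signal.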

Example

# the case of absolute metrics
import speechmetrics
window_length = 5 # seconds
metrics = speechmetrics.load('absolute', window_length)
scores = metrics(path_to_audio_file)

# the case of relative metrics
metrics = speechmetrics.load(['bsseval', 'sisdr'], window_length)
scores = metrics(path_to_estimate_file, path_to_reference)

# mixed case, still works
metrics = speechmetrics.load(['bsseval', 'mosnet'], window_length)
scores = metrics(path_to_estimate_file, path_to_reference)

Available metrics

Absolute metrics (absolute)

MOSNet (absolute.mosnet or mosnet)

dimensionless, higher is better. 0=very bad, 5=very good

As provided by the authors of MOSNet: Deep Learning based Objective Assessment for Voice Conversion. Original github here

@article{lo2019mosnet,
  title={MOSNet: Deep Learning based Objective Assessment for Voice Conversion},
  author={Lo, Chen-Chou and Fu, Szu-Wei and Huang, Wen-Chin and Wang, Xin and Yamagishi, Junichi and Tsao, Yu and Wang, Hsin-Min},
  journal={arXiv preprint arXiv:1904.08352},
  year={2019}
}

SRMR (absolute.srmr or srmr)

dimensionless ratio, higher is better. 0=very bad, 1=very good

As provided by the SRMR Toolbox, implemented by @jfsantos.

Relative metrics (relative)

BSSEval (relative.bsseval or bsseval)

expressed in dB, higher is better.

As presented in this paper and freely available on the official museval page, this corresponds to BSSEval v4. Three submetrics are handled here: SDR, SAR, ISR.

@InProceedings{SiSEC18,
  author="St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title="The 2018 Signal Separation Evaluation Campaign",
  booktitle="Latent Variable Analysis and Signal Separation: 14th International Conference, LVA/ICA 2018, Surrey, UK",
  year="2018",
  pages="293--305"
}

PESQ (relative.pesq or pesq)

dimensionless, higher is better. 0=very bad, 5=very good

Wide band PESQ. As implemented there by @ludlows. Pranay Manocha: "[This implementation] matches with a very old matlab implementation of Phillip Loizou’s book. (I personally verified that)"

NBPESQ (relative.nb_pesq or nb_pesq)

dimensionless, higher is better. 0=very bad, 5=very good

Narrow band PESQ. As implemented there by @vBaiCai.

STOI (relative.stoi or stoi)

dimensionless correlation coefficient, higher is better. 0=very bad, 1=very good

As implemented by @mpariente here

SISDR: Scale-invariant SDR (relative.sisdr or sisdr)

expressed in dB, higher is better.

As described in the following paper and implemented by @Jonathan-LeRoux here
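For intuition, SI-SDR can be sketched directly from its definition: project the estimate onto the reference and compare the energy of that target component to the energy of the residual. This is a minimal NumPy version for illustration, not the implementation used by the package:

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB.

    The estimate is decomposed into a component along the reference
    (the "target") and a residual; the score is the energy ratio in dB.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # Optimal scaling of the reference towards the estimate
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    residual = estimate - target
    return 10 * np.log10(np.dot(target, target) / np.dot(residual, residual))

rng = np.random.default_rng(0)
s = rng.standard_normal(16000)
noisy = s + 0.1 * rng.standard_normal(16000)
print(si_sdr(noisy, s))  # roughly 20 dB for this noise level
```

Because of the projection, rescaling the estimate leaves the score unchanged, which is the "scale-invariant" part of the name.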