
This repo contains the code for our ICASSP 2019 work on speaker identification (http://www.robots.ox.ac.uk/~vgg/research/speakerID/).

This repo contains a Keras implementation of the paper
Utterance-level Aggregation for Speaker Recognition in the Wild (Xie et al., ICASSP 2019, oral).

**New challenge on speaker recognition: The VoxCeleb Speaker Recognition Challenge (VoxSRC).**

Dependencies

There seems to be a bug in this version of librosa that makes loading wav files cripplingly slow (about 1 second per short file). You can replace `read_wav` with `read_wav_fast` in `utils.py` to work around this, but be aware that the sample rate of the loaded audio is then not guaranteed to be constant across files.
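As a rough illustration of the trade-off, a wav file can be read at its native rate with the standard-library `wave` module. This is a sketch only, assuming 16-bit mono PCM; it is not the actual `read_wav_fast` in `utils.py`:

```python
import array
import wave

def read_wav_native(path):
    """Read a wav file at its native sample rate, with no resampling.

    Illustrative sketch only (not the repo's utils.py): assumes 16-bit PCM,
    and returns whatever rate the file was recorded at, so the sample rate
    is not guaranteed to be constant across files.
    """
    with wave.open(path, "rb") as f:
        rate = f.getframerate()
        samples = array.array("h", f.readframes(f.getnframes()))
    return samples, rate
```

Skipping the resampling step is what makes this fast; the cost is that downstream feature extraction must cope with (or check for) varying sample rates.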

Data

The datasets used for the experiments are:

Training the model

To train the model on the VoxCeleb2 dataset, you can run:

Model

Testing the model

To test a specific model on the VoxCeleb1 dataset, for example a ResNet34s model trained with the Adam optimizer, softmax loss, and a feature dimension of 512:
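Evaluation on the VoxCeleb1 verification trials is typically reported as an equal error rate (EER). As an illustration only (pure Python, not the repo's actual evaluation code), EER can be computed from trial scores and ground-truth labels like this:

```python
def equal_error_rate(scores, labels):
    """Return the EER for similarity scores with labels 1 (same speaker)
    and 0 (different speakers): sweep a decision threshold and take the
    point where false-accept and false-reject rates are closest.

    Illustrative sketch only, without interpolation between thresholds.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        far = sum(s >= t for s in neg) / len(neg)  # impostors accepted
        frr = sum(s < t for s in pos) / len(pos)   # targets rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

Perfectly separated scores give an EER of 0; fully overlapping score distributions push it toward 0.5.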

Fine Tuning the model

The weights provided do not include the weights of the final prediction layer, so this layer needs to be randomly initialised; load the rest of the checkpoint with `network.load_weights(os.path.join(args.resume), by_name=True, skip_mismatch=True)` in `main.py`.

```shell
python main.py --net resnet34s --gpu 0 --ghost_cluster 2 --vlad_cluster 8 --batch_size 16 --lr 0.001 --warmup_ratio 0.1 --optimizer adam --epochs 128 --multiprocess 8 --loss softmax --resume=../model/gvlad_softmax/resnet34_vlad8_ghost2_bdim512_deploy/weights.h5
```
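The effect of `by_name=True, skip_mismatch=True` can be illustrated with a small Keras-free sketch. The layer names, shapes, and helper below are made up for illustration; this is not the Keras implementation:

```python
def load_weights_by_name(model, checkpoint, skip_mismatch=True):
    """Copy checkpoint weights into same-named layers of `model` (a dict of
    layer name -> weight list here), skipping layers whose stored shape does
    not match, e.g. a prediction head sized for a different number of
    speakers, which then keeps its random initialisation.
    """
    loaded, skipped = [], []
    for name, weights in checkpoint.items():
        if name not in model:
            continue
        if len(model[name]) != len(weights):  # stand-in for a shape check
            if not skip_mismatch:
                raise ValueError(f"shape mismatch for layer {name!r}")
            skipped.append(name)
            continue
        model[name] = list(weights)
        loaded.append(name)
    return loaded, skipped
```

With a checkpoint whose prediction head was trained for a different number of classes, only the backbone layers are copied and the head stays randomly initialised, ready to be fine-tuned on the new task.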

Note that `--data_path /path_to_your_dataset/dataset/` can be used to point to your own dataset, but you will need to write a small function in `toolkits.py` that returns the contents of the corresponding datalist file.
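Such a hook could look like the following sketch. The function name and the one-pair-per-line file format are assumptions for illustration, not the actual `toolkits.py` interface:

```python
def get_datalist(path):
    """Parse a datalist file with one 'utterance_path speaker_label' pair
    per line into parallel lists of wav paths and integer speaker labels.
    Hypothetical format; adapt to however your datalist is laid out.
    """
    wav_paths, labels = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:  # tolerate blank lines
                continue
            utt, spk = line.split()
            wav_paths.append(utt)
            labels.append(int(spk))
    return wav_paths, labels
```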

Licence

The code and models are available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).

  Downloading this code implies agreement to follow the same conditions for any modification 
  and/or re-distribution of the dataset in any form.

  Additionally any entity using this code agrees to the following conditions:

  THIS CODE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
  IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
  TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
  PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
  LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
  NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
  SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

  Please cite the papers below if you make use of the dataset and code.

Citation

@InProceedings{Xie19,
  author       = "W. Xie and A. Nagrani and J. S. Chung and A. Zisserman",
  title        = "Utterance-level Aggregation For Speaker Recognition In The Wild.",
  booktitle    = "ICASSP, 2019",
  year         = "2019",
}

@Article{Nagrani19,
  author       = "A. Nagrani and J. S. Chung and W. Xie and A. Zisserman",
  title        = "VoxCeleb: Large-scale Speaker Verification in the Wild.",
  journal      = "Computer Speech & Language",
  year         = "2019",
}