
Implementation of "Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition" (CDP)

Introduction

Original paper: Xiaohang Zhan, Ziwei Liu, Junjie Yan, Dahua Lin, Chen Change Loy, "Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition", ECCV 2018

Project Page: http://mmlab.ie.cuhk.edu.hk/projects/CDP/

You can use this code for:

  1. State-of-the-art face clustering in linear complexity.
  2. High-efficiency generic clustering.
  3. Plugging the pair-to-cluster module into your clustering algorithm.

Dependency

Usage

  1. Clone the repo.

    git clone git@github.com:XiaohangZhan/cdp.git
    cd cdp
    

Using ready-made data for face clustering

  1. Download the data from Google Drive or Baidu Yun (password: u8vz) to the repo root, and uncompress it.

    tar -xf data.tar.gz
    
  2. Make sure the structure looks like the following:

    cdp/data/
    cdp/data/labeled/emore_l200k/
    cdp/data/unlabeled/emore_u200k/
    # ... other directories and files ...
    
  3. Run CDP

    • Single model case:

      python -u main.py --config experiments/emore_u200k_single/config.yaml
      
    • Multi-model voting case (committee size: 4):

      python -u main.py --config experiments/emore_u200k_cmt4/config.yaml
      
    • Multi-model mediator case (committee size: 4):

      # edit `experiments/emore_u200k_cmt4/config.yaml` as follows:
      # strategy: mediator
      python -u main.py --config experiments/emore_u200k_cmt4/config.yaml
      
  4. Collect the results

    Take the multi-model mediator case for example: the results are stored in experiments/emore_u200k_cmt4/output/k15_mediator_111_th0.9915/sz600_step0.05/meta.txt. The order is the same as in data/unlabeled/emore_u200k/list.txt. Samples labeled -1 are discarded by CDP; you may assign them new unique labels if you must use them.
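As a sketch of that last step, the discarded -1 samples can each be given a fresh singleton label. A toy meta.txt stands in here for the real output file; point the path at your own run instead:

```python
import numpy as np

# Toy stand-in for the CDP output; in practice, load the meta.txt
# produced under experiments/<exp>/output/... from your own run.
np.savetxt("meta.txt", [0, 1, -1, 1, -1, 2], fmt="%d")

labels = np.loadtxt("meta.txt", dtype=int)

# Samples labeled -1 were discarded by CDP; assign each a new
# unique label after the existing maximum, if you must use them.
mask = labels == -1
labels[mask] = labels.max() + 1 + np.arange(mask.sum())

np.savetxt("meta_relabeled.txt", labels, fmt="%d")
print(labels.tolist())  # -> [0, 1, 3, 1, 4, 2]
```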

Using your own data

  1. Create your data directory, e.g. mydata

    mkdir data/unlabeled/mydata
    
  2. Prepare your data list as list.txt and copy it to the directory. If the data does not come with a list file, just make a dummy one; make sure the length of the list equals the number of examples.
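A dummy list can be generated in a few lines. The sample count below is a placeholder for your own dataset size; swap in real file names if you have them:

```python
import os

os.makedirs("data/unlabeled/mydata", exist_ok=True)

# CDP only needs the line count to match the number of examples,
# so a dummy list (one placeholder line per example) is fine.
num_examples = 200000  # placeholder: use your own sample count
with open("data/unlabeled/mydata/list.txt", "w") as f:
    for i in range(num_examples):
        f.write("%d\n" % i)
```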

  3. (optional) If you want to evaluate the performance on your data, prepare the meta file as meta.txt and copy it to the directory.

  4. Prepare your feature files. Extract face features corresponding to list.txt with your trained face recognition models, and save them as binary files via feature.tofile("xxx.bin") in numpy. The features should be suited to comparison by cosine similarity. Finally, link/copy them to data/unlabeled/mydata/features/. We recommend naming the feature files after the models, e.g., resnet18.bin. CDP works in the single-model case, but we recommend using multiple models (i.e., preparing multiple feature files extracted from different models) with the mediator for better results.
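The saving step can be sketched as follows. The feature extractor itself is assumed (random vectors stand in for real model outputs, and the sample count and dimension are placeholders), but the L2 normalization for cosine similarity and the tofile() call match the description above:

```python
import os
import numpy as np

os.makedirs("data/unlabeled/mydata/features", exist_ok=True)

# Stand-in for real model outputs: N features of dimension D.
# Replace with the embeddings from your face recognition model.
feat = np.random.rand(1000, 256).astype(np.float32)

# L2-normalize so that inner product equals cosine similarity.
feat /= np.linalg.norm(feat, axis=1, keepdims=True)

# Flat float32 binary written by numpy's tofile(); note that
# tofile() stores no shape or dtype, only the raw values.
feat.tofile("data/unlabeled/mydata/features/resnet18.bin")
```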

  5. The structure should look like:

    cdp/data/unlabeled/mydata/
    cdp/data/unlabeled/mydata/list.txt
    cdp/data/unlabeled/mydata/meta.txt (optional)
    cdp/data/unlabeled/mydata/features/
    cdp/data/unlabeled/mydata/features/*.bin
    

    (You do not need to prepare knn files.)
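A quick sanity check that list.txt and each feature binary agree in length. The feature dimension (feat_dim = 256) and file names here are assumptions; use your model's actual embedding size and paths:

```python
import numpy as np

feat_dim = 256  # assumption: your model's embedding size

# Toy setup so the check is runnable: a 10-line list and a
# matching float32 feature binary (replace with your real files).
with open("list.txt", "w") as f:
    f.write("".join("img_%d.jpg\n" % i for i in range(10)))
np.random.rand(10, feat_dim).astype(np.float32).tofile("resnet18.bin")

# The check itself: every .bin must hold exactly one feat_dim-sized
# float32 vector per line of list.txt.
num_lines = sum(1 for _ in open("list.txt"))
feat = np.fromfile("resnet18.bin", dtype=np.float32)
assert feat.size % feat_dim == 0, "binary size not divisible by feat_dim"
assert feat.size // feat_dim == num_lines, "feature/list count mismatch"
print("OK: %d samples" % num_lines)
```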

  6. Prepare the config file. Please refer to the examples in experiments/

    mkdir experiments/myexp
    cp experiments/emore_u200k_cmt4/config.yaml experiments/myexp/
    # edit experiments/myexp/config.yaml to fit your case.
    # you may need to change `base`, `committee`, `data_name`, etc.
    
  7. If you want to use the mediator mode, please also prepare the training set: features extracted with the same face recognition models as in step 4, along with the meta file containing labels. Organize them in data/labeled/mydata/ similarly to data/labeled/emore_l200k/.

  8. Tips for parameter tuning

    • Adjust threshold to roughly balance precision and recall; this yields a higher fscore.
    • A higher threshold gives higher precision and lower recall.
    • A larger max_sz gives lower precision and higher recall.
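To tune threshold against the precision/recall trade-off you need the metric itself. Below is a minimal pairwise precision/recall/fscore sketch over two flat labelings (CDP's bundled evaluation code may compute the metric differently; this is an illustration of the trade-off, not the repo's evaluator):

```python
from collections import Counter

def pairwise_prf(gt, pred):
    """Pairwise precision/recall/fscore between two flat labelings."""
    def same_pairs(counts):
        # number of sample pairs placed in the same cluster
        return sum(n * (n - 1) // 2 for n in counts.values())

    pred_pairs = same_pairs(Counter(pred))          # pairs joined by prediction
    gt_pairs = same_pairs(Counter(gt))              # pairs joined by ground truth
    both_pairs = same_pairs(Counter(zip(gt, pred))) # pairs joined by both

    prec = both_pairs / pred_pairs
    rec = both_pairs / gt_pairs
    return prec, rec, 2 * prec * rec / (prec + rec)

# Toy example: one ground-truth cluster is split by the prediction,
# so precision stays perfect while recall drops.
gt   = [0, 0, 0, 1, 1]
pred = [0, 0, 2, 1, 1]
print(pairwise_prf(gt, pred))  # -> (1.0, 0.5, 0.6666666666666666)
```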

Using the single-model API for generic clustering

Using the isolated pair-to-cluster module

Run Baselines

Evaluation Results

  1. Data

    • emore_u200k (images: 200K, identities: 2,577)
    • emore_u600k (images: 600K, identities: 8,436)
    • emore_u1.4m (images: 1.4M, identities: 21,433)

    (These datasets are not the ones used in the paper, which cannot be released, but the relative results are similar.)

  2. Baselines

    • emore_u200k
    | method | #clusters | prec, recall, fscore | total time |
    | --- | --- | --- | --- |
    | * kmeans (ncluster=2577) | 2577 | 94.24, 74.89, 83.45 | 618.1s |
    | * MiniBatchKMeans (ncluster=2577) | 2577 | 89.98, 87.86, 88.91 | 122.8s |
    | * Spectral (ncluster=2577) | 2577 | 97.42, 97.05, 97.24 | 12.1h |
    | * HAC (ncluster=2577, knn=30) | 2577 | 97.74, 88.02, 92.62 | 5.65h |
    | FastHAC (distance=0.7, method=single) | 46767 | 99.79, 53.18, 69.38 | 1.66h |
    | DBSCAN (eps=0.75, min_samples=10) | 52813 | 99.52, 65.52, 79.02 | 6.87h |
    | HDBSCAN (min_samples=10) | 31354 | 99.35, 75.99, 86.11 | 4.87h |
    | KNN DBSCAN (knn=80, min_samples=10) | 39266 | 97.54, 74.42, 84.43 | 60.5s |
    | ApproxRankOrder (knn=20, th=10) | 85150 | 52.96, 16.93, 25.66 | 86.4s |
    • emore_u600k
    | method | #clusters | prec, recall, fscore | total time |
    | --- | --- | --- | --- |
    | * kmeans (ncluster=8436) | 8436 | fail (out of memory) | - |
    | * MiniBatchKMeans (ncluster=8436) | 8436 | 81.64, 86.58, 84.04 | 2265.6s |
    | * Spectral (ncluster=8436) | 8436 | fail (out of memory) | - |
    | * HAC (ncluster=8436, knn=30) | 8436 | 95.39, 86.28, 90.60 | 60.9h |
    | FastHAC (distance=0.7, method=single) | 94949 | 98.75, 68.49, 80.88 | 16.3h |
    | DBSCAN (eps=0.75, min_samples=10) | 174886 | 99.02, 61.95, 76.22 | 79.6h |
    | HDBSCAN (min_samples=10) | 124279 | 99.01, 69.31, 81.54 | 47.9h |
    | KNN DBSCAN (knn=80, min_samples=10) | 133061 | 96.60, 70.97, 81.82 | 644.5s |
    | ApproxRankOrder (knn=30, th=10) | 304022 | 65.56, 8.139, 14.48 | 626.9s |

    Note: Methods marked * are reported with their theoretical upper-bound results, since they require the number of clusters as input; we take that value from the ground truth. For each method, we tune the parameters to achieve its best performance.

  3. CDP (in linear time !!!)

    • emore_u200k
    | strategy | #model | setting | prec, recall, fscore | knn time | cluster time | total time |
    | --- | --- | --- | --- | --- | --- | --- |
    | vote | 1 | k15_accept0_th0.66 | 89.35, 88.98, 89.16 | 14.8s | 7.7s | 22.5s |
    | vote | 5 | k15_accept4_th0.605 | 93.36, 92.91, 93.13 | 78.7s | 6.0s | 84.7s |
    | mediator | 5 | k15_110_th0.9938 | 94.06, 92.45, 93.25 | 78.7s | 77.7s | 156.4s |
    | mediator | 5 | k15_111_th0.9925 | 96.66, 94.93, 95.79 | 78.7s | 100.2s | 178.9s |
    • emore_u600k
    | strategy | #model | setting | prec, recall, fscore | knn time | cluster time | total time |
    | --- | --- | --- | --- | --- | --- | --- |
    | vote | 1 | k15_accept0_th0.665 | 88.19, 85.33, 86.74 | 60.8s | 24s | 84.8s |
    | vote | 5 | k15_accept4_th0.605 | 90.21, 89.9, 90.05 | 309.4s | 18.3s | 327.7s |
    | mediator | 5 | k15_110_th0.985 | 90.43, 89.13, 89.78 | 309.4s | 184.2s | 493.6s |
    | mediator | 5 | k15_111_th0.982 | 96.55, 91.98, 94.21 | 309.4s | 246.3s | 555.7s |
    • emore_u1.4m
    | strategy | #model | setting | prec, recall, fscore | knn time | cluster time | total time |
    | --- | --- | --- | --- | --- | --- | --- |
    | vote | 1 | k15_accept0_th0.68 | 89.49, 81.25, 85.17 | 187.5s | 47.7s | 235.2s |
    | vote | 5 | k15_accept4_th0.62 | 90.63, 87.32, 88.95 | 967.0s | 44.3s | 1011.3s |
    | mediator | 5 | k15_110_th0.99 | 93.67, 84.43, 88.81 | 967.0s | 406.9s | 1373.9s |
    | mediator | 5 | k15_111_th0.982 | 95.29, 90.97, 93.08 | 967.0s | 584.7s | 1551.7s |

    Note:

    • For mediator, 110 means using relationship and affinity; 111 means using relationship, affinity and structure.

    • The results may not be exactly reproducible, because the knn search in NMSLIB involves randomness.

    • Experiments are performed on a server with 48 CPU cores, 8 TITAN XP, 252G memory.

Face recognition framework

You may use this framework to train/evaluate face recognition models and extract features.

url: https://github.com/XiaohangZhan/face_recognition_framework

Bibtex

@inproceedings{zhan2018consensus,
  title={Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition},
  author={Zhan, Xiaohang and Liu, Ziwei and Yan, Junjie and Lin, Dahua and Loy, Chen Change},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={568--583},
  year={2018}
}