Multimodal Contrastive Learning with Tabular and Imaging Data

Please cite our CVPR paper, Best of Both Worlds: Multimodal Contrastive Learning with Tabular and Imaging Data, if this code was helpful.

@InProceedings{Hager_2023_CVPR,
    author    = {Hager, Paul and Menten, Martin J. and Rueckert, Daniel},
    title     = {Best of Both Worlds: Multimodal Contrastive Learning With Tabular and Imaging Data},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {23924-23935}
}

If you want an overview of the paper, check out:

Instructions

Install the environment using conda env create --file environment.yaml.
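
For example (a minimal sketch: the name to activate is whatever is declared at the top of environment.yaml; the placeholder below is not the real name):

    # create the conda environment from the provided spec
    conda env create --file environment.yaml
    # activate it under the name declared in environment.yaml (placeholder shown)
    conda activate <env-name-from-environment.yaml>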

To run, execute python run.py.

Arguments - Command Line

If pretraining, pass pretrain=True and datatype={imaging|multimodal|tabular} for the desired pretraining type. multimodal uses our strategy from the paper, tabular uses SCARF, and for imaging the pretraining loss can be selected with the loss argument. The default is SimCLR; the other options are byol, simsiam, and barlowtwins.
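
For example, two illustrative pretraining invocations (argument spellings follow the description above; adjust values to your setup):

    # multimodal pretraining with the strategy from the paper
    python run.py pretrain=True datatype=multimodal
    # imaging-only pretraining, swapping the default SimCLR loss for BYOL
    python run.py pretrain=True datatype=imaging loss=byol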

If you do not pass pretrain=True, the model will be trained fully supervised on the data modality specified in datatype, either tabular or imaging.
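
For example, a fully supervised imaging run (illustrative):

    # no pretrain=True, so this trains a supervised imaging model
    python run.py datatype=imaging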

You can evaluate a model by passing the path to the final pretraining checkpoint with the argument checkpoint={PATH_TO_CKPT}. After pretraining, the model will be evaluated with the default settings (frozen eval, lr=1e-3).
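
For example (the checkpoint path is illustrative, and depending on your configs you may also need to pass the matching datatype):

    # evaluate a pretrained model with the default frozen-eval settings
    python run.py checkpoint=/path/to/pretraining/checkpoint.ckpt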

Arguments - Hydra

All argument defaults can be set in the Hydra YAML files found in the configs folder.

Most arguments are set to the values used in the paper and work well out of the box. The default model is ResNet50.

The code is integrated with Weights & Biases, so set wandb_project and wandb_entity in config.yaml.
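
Setting them in config.yaml is the documented route; assuming they behave like any other Hydra field, a command-line override should also work (the project and entity names below are placeholders):

    # illustrative override of the wandb settings at launch time
    python run.py wandb_project=my_project wandb_entity=my_entity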

The path to the folder containing the data is set through the data_base argument and is then joined with the filenames set in the dataset YAMLs. The best strategy is to take dvm_all_server.yaml as a template and fill in the appropriate filenames.
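
For example (the path is illustrative; the filenames inside that folder must match the ones you set in your copy of dvm_all_server.yaml):

    # point the code at your data folder for a multimodal pretraining run
    python run.py pretrain=True datatype=multimodal data_base=/path/to/data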

Data

The UK Biobank (UKBB) data is semi-private. You can apply for access here.

The DVM cars dataset is open-access and can be found here.

Processing steps for the DVM dataset can be found here.

The exact data splits used in the paper are saved in the data folder.