
<p align="center"> MindDiffuser </p>

This is the official code for the paper "MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion" [ACM MM 2023](https://dl.acm.org/doi/10.1145/3581783.3613832).

<p align="center"> Schematic diagram of MindDiffuser </p>


<p align="center"> Algorithm diagram of MindDiffuser </p>

<p align="center"> A brief comparison of image reconstruction results </p>

<p align="center"> Reconstruction results of MindDiffuser on multiple subjects </p>

<p align="center"> Experiments </p>

<p align="center"> Interpretability analysis </p>

During the feature decoding process, we use an L2-regularized linear regression model to automatically select voxels and fit three types of features: the semantic feature 𝑐, the detail feature 𝑧, and the structural feature 𝑍_CLIP. We utilize pycortex to project the weights of each voxel in the fitted model onto the corresponding 3D coordinates in the visual cortex.
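As an illustration of how such a projection can be produced, here is a minimal pycortex sketch, assuming a per-voxel weight summary has been saved as a NumPy array; the file name and the pycortex subject/transform names ("subj01", "func_to_anat") are placeholders rather than the ones used by our scripts.

```python
import cortex
import numpy as np

# Hypothetical example: map a per-voxel weight summary from a fitted decoder
# onto the cortical surface. "subj01" and "func_to_anat" are placeholder
# pycortex subject and transform names.
weights = np.load("semantic_decoder_weights.npy")  # 3D array matching the functional volume
vol = cortex.Volume(weights, "subj01", "func_to_anat", cmap="hot")
cortex.quickflat.make_png("semantic_weight_map.png", vol, with_colorbar=True)
```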

<p> Steps to reproduce MindDiffuser </p>

Please scan the QR code below to obtain the pre-processed experimental data.

Baidu Netdisk, extraction code: qlkx

If you are pressed for time or unable to reproduce our work, you can also directly download the reconstruction results of MindDiffuser on subjects 1, 2, 5, and 7 from Baidu Netdisk for comparison.

Baidu Netdisk, extraction code: izxl

<p> Preliminaries </p>

This code was developed and tested with:

<p> Dataset downloading and preparation </p>

NSD dataset: http://naturalscenesdataset.org/ <br> Data preparation: https://github.com/styvesg/nsd

<p> Model downloading and preparation </p>

First, set up the conda environment as follows:

```
conda env create -f environment_1.yml   # create conda env
conda activate MindDiffuser             # activate conda env
```

<p> Feature extraction </p>

```
cd your_folder
python "Feature extractor/Semantic_feature_extraction.py"
python "Feature extractor/detail_extracttion.py"
python "Feature extractor/Structural_feature_extraction.py"
python "Feature extractor/Structural_feature_selection.py"
```
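For orientation, the sketch below illustrates the kind of CLIP-based semantic feature extraction this step performs, assuming the stimulus captions are stored one per line in a text file; the file name and model checkpoint are illustrative assumptions, not the exact settings of the released scripts.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Hypothetical example: encode stimulus captions into CLIP text features,
# which serve as the semantic feature c. "captions.txt" is a placeholder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").eval()

with open("captions.txt") as f:
    captions = [line.strip() for line in f]

with torch.no_grad():
    tokens = tokenizer(captions, padding="max_length", max_length=77,
                       truncation=True, return_tensors="pt")
    c = text_encoder(**tokens).last_hidden_state  # (n_images, 77, 768)

torch.save(c, "semantic_features_c.pt")
```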

<p> Feature decoding </p>

```
cd your_folder
python "Feature decoding/Semantic_feature_decoding.py"
python "Feature decoding/Structural_feature_decoding.py"
python "Feature decoding/detail_decoding.py"
```
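Each decoding script fits an L2-regularized linear model from fMRI responses to one feature type. A minimal sketch using scikit-learn's RidgeCV is shown below; the .npy file names and the regularization grid are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical example: ridge-regression decoding of one feature type.
X_train = np.load("fmri_train.npy")      # (n_train_trials, n_voxels)
Y_train = np.load("features_train.npy")  # (n_train_trials, feature_dim)
X_test = np.load("fmri_test.npy")

decoder = RidgeCV(alphas=np.logspace(1, 5, 9))  # cross-validated L2 penalty
decoder.fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)                # decoded features for reconstruction

np.save("decoded_features_test.npy", Y_pred)
```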

<p> Image reconstruction </p>

```
cd your_folder
python "Image reconstruction/Reconstruction.py"
```
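For intuition only, here is a minimal sketch of feeding the decoded semantic feature 𝑐 and detail feature 𝑧 into Stable Diffusion via the diffusers library; the tensor file names and shapes are assumptions, and Reconstruction.py additionally enforces the structural constraint on 𝑍_CLIP during sampling, which this sketch omits.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical example: condition Stable Diffusion on features decoded from fMRI.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

c = torch.load("decoded_semantic_c.pt").half().to("cuda")  # (1, 77, 768) CLIP text embedding
z = torch.load("decoded_detail_z.pt").half().to("cuda")    # (1, 4, 64, 64) initial latent

image = pipe(prompt_embeds=c, latents=z,
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstruction.png")
```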

<p> Reproduce the results of "High-resolution image reconstruction with latent diffusion models from human brain activity" (CVPR 2023) </p>

After extracting and decoding the features, run the following code:

```
cd your_folder
python "Reproduce Takagi's results/image_reconstruction.py"
```

<p> Reproduce the results of "Reconstruction of Perceived Images from fMRI Patterns and Semantic Brain Exploration using Instance-Conditioned GANs" </p>

After setting up the environment and code provided by Ozcelik, run the following commands:

```
cd your_folder
python "Reproduce Ozcelik's results/extract_features.py"
python "Reproduce Ozcelik's results/train_regression.py"
python "Reproduce Ozcelik's results/reconstruct_images.py"
```

<p> Cite </p>

Please cite our paper if you use this code in your own work:

```bibtex
@inproceedings{10.1145/3581783.3613832,
  author    = {Lu, Yizhuo and Du, Changde and Zhou, Qiongyi and Wang, Dianpeng and He, Huiguang},
  title     = {MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion},
  year      = {2023},
  isbn      = {9798400701085},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3581783.3613832},
  doi       = {10.1145/3581783.3613832},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia},
  pages     = {5899--5908},
  numpages  = {10},
  keywords  = {fmri, brain-computer interface (bci), probabilistic diffusion model, controlled image reconstruction},
  location  = {Ottawa ON, Canada},
  series    = {MM '23}
}
```