
<img align="right" width="40%" height="40%" src="https://github.com/LukasMosser/PorousMediaGan/blob/master/misc/render_transp.png"/>

PorousMediaGAN

Implementation and data repository for "Reconstruction of three-dimensional porous media using generative adversarial neural networks"

Authors

Lukas Mosser
Olivier Dubrule
Martin J. Blunt
Department of Earth Science and Engineering, Imperial College London

Results

Cross-sectional views of the three trained models

Methodology

Process Overview

Instructions

Pre-requisites

# Python dependencies (PyTorch version and notebooks)
pip install jupyter
pip install h5py
pip install tifffile
# Lua dependencies (Torch version only)
luarocks install hdf5
luarocks install dpnn
# Clone the repository
git clone https://github.com/LukasMosser/PorousMediaGAN
cd PorousMediaGAN
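
As a quick sanity check of the Python environment, the snippet below only verifies that the required packages import. PyTorch itself (needed for the PyTorch version) is assumed to be installed separately; the Lua rocks above apply only to the Torch version.

```python
# Quick sanity check: verify that the Python dependencies are importable.
# PyTorch is assumed to be installed separately (it is not covered by the commands above).
import h5py
import tifffile
import torch

print("h5py", h5py.__version__)
print("tifffile", tifffile.__version__)
print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
```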

Pre-trained model (Pytorch version only)

The repository includes a pre-trained model for the Berea sandstone example used in the paper.

python generator.py --seed 42 --imageSize 64 --ngf 32 --ndf 16 --nz 512 --netG [path to generator checkpoint].pth --experiment berea --imsize 9 --cuda --ngpu 1

Use the --imsize option to control the size of the generated output images; --imsize 1 corresponds to the training image size.
Replace [path to generator checkpoint].pth with the path to the provided checkpoint, e.g. checkpoints/berea/berea_generator_epoch_24.pth.
Generating realizations has been tested on both GPU and CPU and is fast even for large reconstructions.
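
For orientation, below is a minimal, self-contained sketch of what sampling from such a checkpoint looks like. The Generator3D class is a hypothetical stand-in for the repository's 3D DCGAN-style generator (the actual architecture and checkpoint keys are defined in the repo and may differ), so the load_state_dict call is left commented out.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repository's 3D DCGAN-style generator; the real
# layer layout and checkpoint keys may differ.
class Generator3D(nn.Module):
    def __init__(self, nz=512, ngf=32, nc=1):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose3d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm3d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm3d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm3d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose3d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm3d(ngf), nn.ReLU(True),
            nn.ConvTranspose3d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.main(z)

netG = Generator3D(nz=512, ngf=32, nc=1).eval()
# netG.load_state_dict(torch.load("checkpoints/berea/berea_generator_epoch_24.pth",
#                                 map_location="cpu"))

# A 1x1x1 latent grid reproduces the training-image size; a larger spatial latent
# grid (cf. --imsize) yields a correspondingly larger output volume.
with torch.no_grad():
    volume = netG(torch.randn(1, 512, 1, 1, 1))   # -> (1, 1, 64, 64, 64), values in [-1, 1]
```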

Training

We highly recommend a modern Nvidia GPU to perform training.
All models were trained on Nvidia K40 GPUs.
Training on a single GPU takes approximately 24 hours.
To create the training image dataset from the full CT image perform the following steps:

cd ./data/berea/original/raw
#unzip using your preferred unzipper
unzip berea.zip
python create_training_images.py --image berea.tif --name berea --edgelength 64 --stride 32 --target_dir berea_ti

This will create the sub-volume training images in HDF5 format, which can then be used for training.
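
For reference, the extraction step could look roughly like the sketch below: overlapping 64^3 sub-volumes are cut from the CT image with a stride of 32 and each is written to its own HDF5 file. The file naming and the HDF5 key "data" are assumptions and may not match create_training_images.py exactly.

```python
import os
import h5py
import tifffile

# Cut overlapping 64^3 sub-volumes (stride 32) from the full CT image and store each
# in its own HDF5 file; naming and the "data" key are illustrative, not the script's exact layout.
edge, stride = 64, 32
volume = tifffile.imread("berea.tif")          # full CT image as a 3D numpy array
os.makedirs("berea_ti", exist_ok=True)

count = 0
for i in range(0, volume.shape[0] - edge + 1, stride):
    for j in range(0, volume.shape[1] - edge + 1, stride):
        for k in range(0, volume.shape[2] - edge + 1, stride):
            subvolume = volume[i:i + edge, j:j + edge, k:k + edge]
            with h5py.File(os.path.join("berea_ti", "berea_%d.hdf5" % count), "w") as f:
                f.create_dataset("data", data=subvolume, dtype="uint8")
            count += 1
```

Training on the resulting dataset is then started with main.py: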

python main.py --dataset 3D --dataroot [path to training images] --imageSize 64 --batchSize 128 --ngf 64 --ndf 16 --nz 512 --niter 1000 --lr 1e-5 --workers 2 --ngpu 2 --cuda 
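
The flags above mirror the setup used in the paper (64^3 inputs, nz = 512, batch size 128, two data-loading workers). For readers adapting the pipeline, here is a minimal sketch of an HDF5-backed dataset matching the layout produced above; the repository's own dataloader (modified from DCGAN, see Acknowledgement) may differ in detail.

```python
import glob
import os
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class HDF5VolumeDataset(Dataset):
    """One 64^3 sub-volume per .hdf5 file, stored under the key "data" (assumed layout)."""
    def __init__(self, root):
        self.files = sorted(glob.glob(os.path.join(root, "*.hdf5")))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        with h5py.File(self.files[index], "r") as f:
            vol = f["data"][()]
        # Scale uint8 voxels to [-1, 1] to match a Tanh generator output.
        vol = torch.from_numpy(vol).float().div(127.5).sub(1.0)
        return vol.unsqueeze(0)                  # shape (1, 64, 64, 64)

loader = DataLoader(HDF5VolumeDataset("data/berea/berea_ti"),
                    batch_size=128, shuffle=True, num_workers=2)
```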

Additional Training Data

High-resolution CT scan data of porous media has been made publicly available via the Department of Earth Science and Engineering, Imperial College London and can be found here

Data Analysis

We use a number of jupyter notebooks to analyse samples during and after training.

Image Morphological parameters

We have used the image analysis software Fiji with the MorphoLibJ plugin to analyse generated samples.
The images can be loaded as tiff files and analysed using MorphoLibJ > Analyze > Analyze Particles 3D.
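
As a bridge between the generated samples and Fiji, a generated volume can be thresholded and written as a tiff stack along the lines of the sketch below. The input file name and the simple 0.0 threshold on the Tanh output are illustrative assumptions, and which sign corresponds to pore versus grain depends on how the training images were encoded.

```python
import numpy as np
import tifffile

# Convert a generated volume (values in [-1, 1]) into a binary tiff stack for Fiji/MorphoLibJ.
# "berea_sample.npy" and the 0.0 threshold are illustrative assumptions.
volume = np.load("berea_sample.npy").squeeze()      # 3D array, e.g. (64, 64, 64)
binary = (volume > 0.0).astype(np.uint8) * 255      # simple two-phase segmentation by thresholding

tifffile.imwrite("berea_sample.tif", binary)        # multi-page tiff, one page per slice

# Porosity as the fraction of voxels below the threshold (assumed to be pore space).
porosity = 1.0 - binary.mean() / 255.0
print("porosity: %.3f" % porosity)
```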

Results

We additionally provide the results used to create our publication in the analysis folder of the repository.

Citation

If you use our code for your own research, we would be grateful if you cited our publication (arXiv:1704.03225):

@article{pmgan2017,
	title={Reconstruction of three-dimensional porous media using generative adversarial neural networks},
	author={Mosser, Lukas and Dubrule, Olivier and Blunt, Martin J.},
	journal={arXiv preprint arXiv:1704.03225},
	year={2017}
}

Acknowledgement

The code used for our research is based on DCGAN for the Torch version and on the PyTorch example of how to implement a GAN for the PyTorch version.
Our dataloader has been modified from DCGAN.

O. Dubrule thanks Total for seconding him as a Visiting Professor at Imperial College.