dcm2niiXL

dcm2niiXL is a script for running dcm2niix in parallel. It is useful for extra large (XL) datasets. Relative to other converters, dcm2niix is fast. However, the basic tool is optimized for converting moderate-sized datasets on a local machine (where disk access is very fast). In contrast, dcm2niiXL spawns multiple dcm2niix instances, which can offer better performance on clusters where disk access is slow.

To be clear, for most users the classic dcm2niix is the best choice. The dcm2niiXL wrapper is for the specific case of very large datasets stored across multiple subdirectories on systems with slow disk access. This script is for advanced users, and you should read the Requirements and Limitations section carefully.
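
To picture what the wrapper does, here is a minimal sketch of the idea in shell. This is not the shipped script: it assumes each series or subject sits in its own subdirectory of the input folder, and it launches one single-threaded dcm2niix per subdirectory, four at a time (both the directory layout and the parallelism of 4 are illustrative).

#!/bin/sh
# Minimal sketch of the dcm2niiXL idea, not the actual script:
# run one dcm2niix instance per DICOM subdirectory, up to 4 at once.
IN=~/dcm_qa/In
OUT=~/tst/out
mkdir -p "$OUT"
find "$IN" -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -P 4 -I {} dcm2niix -f %p_%s -i y -z y -o "$OUT" {}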

Usage

You use dcm2niiXL just like dcm2niix.

dcm2niiXL -f %p_%s -i y -z y -o ~/tst/out ~/dcm_qa/In/
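
For reference, these are standard dcm2niix options, which the wrapper passes through: -f %p_%s names each output after the protocol and series number, -i y ignores derived and localizer images, -z y writes compressed .nii.gz output, and -o sets the output folder.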

Installation

dcm2niiXL is a simple shell script that calls specially compiled versions of dcm2niix. Just download the files and place them in your path.
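
For example, assuming the script and its bundled executables have been downloaded to the current folder (file names here are illustrative), a typical Unix installation looks like:

# make the wrapper executable and place it on the PATH,
# copying the bundled dcm2niix executables alongside it
chmod +x dcm2niiXL
sudo cp dcm2niiXL /usr/local/bin/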

Requirements and Limitations

This script is for advanced users and specific situations. You should read this section carefully. In many situations dcm2niix will be faster, and it is always less complicated.

A Simpler Alternative: Parallel Compression

The bulk of this page describes running multiple single-threaded instances of dcm2niix on a dataset. This requires the user to carefully sort the data so that each instance processes a whole dataset. An easier approach is to ensure you have fast parallel compression. The rationale is simple: if you create compressed .nii.gz images, the software will spend most of its time compressing your data rather than converting it. Therefore, a good strategy is to use the parallel compressor pigz to accelerate the compression stage. The data below illustrates this on a six-core (12-thread) computer, where the single-threaded Cloudflare zlib is ten times slower than saving raw data to disk, while pigz is just 3.2 times slower. In this test, we converted about an hour's worth of MRI data using a slow spinning hard disk (as is typical for servers). The raw data required 2.234 seconds to save; in the table everything is scaled relative to this time (so the uncompressed time is listed as 1.0).

One further consideration for optimal performance: versions of dcm2niix from v1.0.20181225 onward include an optimal compression option (-z o) for Unix computers, which requires pigz version 2.3.4 or later. This option pipes data directly to pigz. In contrast, the conventional usage of pigz (-z y) saves the raw data to disk and then has pigz read this data back to write a compressed version. This two-stage approach carries a huge penalty on slow disks.

Method                                      Relative time
-z n (no compression: raw .nii files)       1.0
-z o (optimal: piped pigz)                  3.2
-z y (yes: pigz reading from disk)          4.4
-z i (internal: Cloudflare zlib)            10.1
-z i (internal: stock zlib)                 20.6
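
Assuming pigz 2.3.4 or later is installed (pigz -V reports the version), selecting the piped mode is just a flag change to the example from the Usage section:

pigz -V
dcm2niix -f %p_%s -i y -z o -o ~/tst/out ~/dcm_qa/In/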

Compiling dcm2niix for accelerated gzip creation

The following commands should create a dcm2niix executable in the dcm2niix/build/bin folder that uses the accelerated Cloudflare zlib library.

git clone https://github.com/rordenlab/dcm2niix.git
cd dcm2niix
mkdir build && cd build
cmake -DZLIB_IMPLEMENTATION=Cloudflare -DUSE_JPEGLS=ON -DUSE_OPENJPEG=ON ..
make
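
Once make completes, a quick sanity check is to run the freshly built binary on a test dataset with the piped-pigz option (the paths below reuse the example from the Usage section):

./bin/dcm2niix -z o -o ~/tst/out ~/dcm_qa/In/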