The tarp Utility

Tar files are commonly used for storing large amounts of data in an efficient, sequential-access, compressed format, in particular for deep learning applications. For processing and data transformation, people usually unpack them, operate over the files, and tar up the result again.

The tarp utility is a Go port of the Python tarproc utilities. It is a single executable, a "Swiss army knife" for dataset transformations.

Available commands include tarp cat, tarp create, tarp sort, and tarp split; the examples below show how they are combined.

For tarp cat, sources and destinations can be ZMQ URLs (specified using zpush/zpull, zpub/zsub, or zr versions that reverse connect/bind). This permits very large sorting, processing, and shuffling networks to be set up (Kubernetes is a good platform for this).
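
As a rough sketch of what such a setup can look like, one process could push samples into a ZMQ socket while another pulls them and reshards them. The zpush://host:port URL form and the port number below are assumptions for illustration; consult the tarp help output for the exact syntax.

# producer: stream samples into a ZMQ push socket (URL syntax assumed)
cat input.tar | tarp cat - -o zpush://localhost:7000 &
# consumer: pull samples from the socket and reshard them
tarp cat zpull://localhost:7000 -o - | tarp split -c 1000 -o 'shard-%06d.tar'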

Commands consistently require a "-o" option for the output in order to avoid accidental file clobbering. Specify "-o -" if you want to write to stdout.
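
For instance, using tarp sort as in the example below (input.tar and sorted.tar are placeholder names):

# write the sorted dataset to a file
cat input.tar | tarp sort - -o sorted.tar
# write it to stdout instead and pipe it into another tarp command
cat input.tar | tarp sort - -o - | tarp split -c 1000 -o 'shard-%06d.tar'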

Installation

The tarp command-line utility is a standard Go program. You need to install Go first; afterwards, you can install tarp with:

$ go get -v github.com/tmbdev/tarp/tarp

Alternatively, you can install from a local clone:

git clone https://github.com/tmbdev/tarp.git
cd tarp
make bin/tarp
sudo make install

Examples

Download a dataset from Google Cloud, shuffle it, and split it into shards containing 1000 training samples each:

gsutil cat gs://bucket/file.tar | tarp sort - -o - | tarp split -c 1000 -o 'output-%06d.tar'

Create a dataset from images stored in directories whose names represent class labels, split it into shards of 1000 images each, and upload them to Google Cloud:

for classdir in *; do
    test -d "$classdir" || continue
    for image in "$classdir"/*.png; do
        imageid=$(basename "$image" .png)
        echo "$imageid.txt text:$classdir"
        echo "$imageid.png file:$image"
    done
done |
sort |
tarp create -o - - |
tarp split -c 1000 -o 'dataset-%06d.tar' \
    -p 'gsutil cp %s gs://mybucket/; rm %s'
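
For reference, the inner loop emits two recipe lines per sample, one tagging the text field and one the file field. With hypothetical class directories cat and dog, the recipe piped into sort would look roughly like this:

n01.txt text:cat
n01.png file:cat/n01.png
n02.txt text:dog
n02.png file:dog/n02.png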

(Note that in an actual application you would probably want to shuffle the samples in the recipe after the sort command, so that classes are mixed across shards; the sort only ensures that the two lines belonging to each sample stay adjacent.)

Internals

Internally, data processing is handled using goroutines and channels passing around samples. Samples are simple key/value stores of type map[string][]byte. Most processing steps are pipeline elements. The general programming style is:

func ProcessSamples(parameters...) func(inch Pipe, outch Pipe) {
	return func(inch Pipe, outch Pipe) {
		// ... per-stage setup ...
		for sample := range inch {
			// ... process sample and send the results to outch ...
		}
		// ... per-stage cleanup ...
		close(outch)
	}
}

Note that unlike simple Go pipeline examples, the caller allocates the output channel; this gives code that builds pipelines out of processing stages a bit more control. Furthermore, construction of pipeline elements involves an outer and an inner function ("currying"). This lets us write pipelines more naturally. For example, you can write code like this:

source := TarSource(fname)
sink := TarSink(fname)
pipeline := Pipeline(
	SliceSamples(0, 100),
	LogProgress(10, "progress"),
	RenameSamples(renamings, false),
)
Processing(source, pipeline, sink)
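
For intuition, a stage-composition function in this style might look roughly like the sketch below. The names Sample, Pipe, and Stage and the signature of Pipeline are assumptions for illustration; the actual definitions live in the datapipes package and may differ.

// Sample is a key/value store holding the fields of one sample, as above.
type Sample = map[string][]byte

// Pipe carries samples from one pipeline stage to the next.
type Pipe = chan Sample

// Stage reads samples from inch, writes results to outch, and closes outch.
type Stage = func(inch Pipe, outch Pipe)

// Pipeline composes one or more stages into a single stage (it assumes at
// least one stage). Intermediate channels are allocated here, matching the
// convention that the caller of each stage provides its output channel.
func Pipeline(stages ...Stage) Stage {
	return func(inch Pipe, outch Pipe) {
		cur := inch
		for _, stage := range stages[:len(stages)-1] {
			next := make(Pipe)
			go stage(cur, next)
			cur = next
		}
		// the final stage writes to the caller-supplied output channel
		stages[len(stages)-1](cur, outch)
	}
}

With a composer like this, the snippet above would yield a single stage that slices, logs, and renames samples, and Processing would wire it up between the tar source and sink.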

The main processing library is in the datapipes subdirectory; tests for the library functions are also found there (run go test in that subdirectory). The top-level command and its subcommands are defined in cmd. Tests for the command-line functions can be executed with ./run-tests from the top of the source tree.

Status

This is relatively new software. The command-line interface is fairly stable, but the internal APIs may still change substantially.

Future work: