
FTOOLS: A faster Stata for large datasets



Introduction

Some of the most common Stata commands (collapse, merge, sort, etc.) were not designed with large datasets in mind. This package provides alternative implementations that solve this problem, speeding up these commands by 3x-10x:

(Benchmark chart: collapse vs. fcollapse)

Other user commands that are very useful for speeding up Stata with large datasets include:

ftools can also be used to speed up your own commands. For more information, see this presentation from the 2017 Stata Conference (slides 14 and 15 show how to create faster alternatives to unique and xmiss with only a couple of lines of code). Also, see help ftools for the detailed documentation.
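Following the approach in those slides, a minimal "unique"-style counter can be sketched in a couple of lines with the Factor class (member names as documented in help ftools; the exact count depends on the dataset):

```stata
sysuse auto, clear
mata: F = factor("turn")
mata: printf("turn has %g distinct values\n", F.num_levels)
```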

Details

ftools is two things:

  1. A list of Stata commands optimized for large datasets, replacing commands such as: collapse, contract, merge, egen, sort, levelsof, etc.
  2. A Mata class (Factor) that focuses on working with categorical variables. This class is what makes the above commands fast, and is also what powers reghdfe.

Currently the following commands are implemented:

Usage

* Stata usage:
sysuse auto

fsort turn
fegen id = group(turn trunk)
fcollapse (sum) price (mean) gear, by(turn foreign) freq

* Advanced: creating the .mlib library:
ftools, compile

* Mata usage:
sysuse auto, clear
mata: F = factor("turn")
mata: F.keys, F.counts
mata: sorted_price = F.sort(st_data(., "price"))

Other features include:

Benchmarks

(see the test folder for the details of the tests and benchmarks)

egen group

Given a dataset with 20 million obs. and 5 variables, we create the following variable and then generate IDs based on it:

gen long x = ceil(uniform()*5000)

Then, we compare five different variants of egen group:

| Method                             | Min   | Avg   |
|------------------------------------|-------|-------|
| egen id = group(x)                 | 49.17 | 51.26 |
| fegen id = group(x)                | 1.44  | 1.53  |
| fegen id = group(x), method(hash0) | 1.41  | 1.60  |
| fegen id = group(x), method(hash1) | 8.87  | 9.35  |
| fegen id = group(x), method(stata) | 34.73 | 35.43 |

Our variant takes roughly 3% of the time of egen group. If we were to choose a more complex hash method (hash1), it would take 18% of the time. We also report the most efficient pure-Stata method (based on bysort), which is still significantly slower than our Mata approach.
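The setup above can be reproduced with a sketch along these lines (runiform() is the modern name of uniform(); timings will of course vary by machine):

```stata
clear
set obs 20000000
gen long x = ceil(runiform()*5000)

timer clear
timer on 1
fegen id = group(x)
timer off 1
timer list 1
```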

Notes:

collapse

On a dataset of similar size, we ran collapse (sum) y1-y15, by(x3) where x3 takes 100 different values:

| Method                     | Time  | % of collapse |
|----------------------------|-------|---------------|
| collapse …, fast           | 81.87 | 100%          |
| sumup                      | 56.18 | 69%           |
| fcollapse …, fast          | 38.54 | 47%           |
| fcollapse …, fast pool(5)  | 28.32 | 35%           |
| tab …                      | 9.39  | 11%           |

We can see that fcollapse takes roughly a third of the time of collapse (although it uses more memory when moving data from Stata to Mata). As a comparison, tabulating the data (one of the most efficient Stata operations) takes 11% of the time of collapse.

Alternatively, the pool(#) option uses very little memory (similar to collapse) while remaining very fast.
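As a sketch, the pooled call used in the benchmark looks like this (y1-y15 and x3 are the hypothetical benchmark variables; pool(5) moves the variables into Mata five at a time, which keeps peak memory low):

```stata
fcollapse (sum) y1-y15, by(x3) fast pool(5)
```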

Notes:

collapse: alternative benchmark

We can run a more complex query, collapsing means and medians instead of sums, again with 20 million obs.:

| Method                     | Time  | % of collapse |
|----------------------------|-------|---------------|
| collapse …, fast           | 81.06 | 100%          |
| sumup                      | 67.05 | 83%           |
| fcollapse …, fast          | 30.93 | 38%           |
| fcollapse …, fast pool(5)  | 33.85 | 42%           |
| tab                        | 8.06  | 10%           |

(Note: sumup might be better for medium-sized datasets, although some benchmarking is needed)

And we can see that the results are similar.

join (and fmerge)

Similar to merge, but it avoids sorting the datasets. It is faster than merge for datasets larger than ~100,000 obs., and for datasets above 1 million obs. it takes a third of the time.
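As a sketch, fmerge is intended as a drop-in replacement for merge, so an existing merge call can be switched by changing only the command name (state_data.dta is a hypothetical file, and this assumes fmerge accepts merge's keep() and nogen options; see help join for the full syntax):

```stata
* Before: merge m:1 state using "state_data.dta", keep(match) nogen
fmerge m:1 state using "state_data.dta", keep(match) nogen
```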

Benchmark:

| Method      | Time  | % of merge |
|-------------|-------|------------|
| merge       | 28.89 | 100%       |
| join/fmerge | 8.69  | 30%        |

fisid

Similar to isid, but it allows if and in, while not supporting using and sort.

In very large datasets, it takes roughly a third of the time of isid.
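For example (in the auto dataset, make uniquely identifies the observations, so both checks pass):

```stata
sysuse auto, clear
fisid make              // make uniquely identifies the observations
fisid make if foreign   // if/in are allowed, unlike with isid
```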

flevelsof

Provides the same results as levelsof.

In large datasets, takes up to 20% of the time of levelsof.
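For example, a sketch using the local() option to store the levels in a macro, just as with levelsof:

```stata
sysuse auto, clear
flevelsof rep78, local(levels)
display "`levels'"
```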

fsort

At this stage, you need a very large dataset (50 million+ obs.) for fsort to be faster than sort.

| Method          | Avg. 1 | Avg. 2 |
|-----------------|--------|--------|
| sort id         | 62.52  | 71.15  |
| sort id, stable | 63.74  | 65.72  |
| fsort id        | 55.4   | 67.62  |

The table above shows the benchmark on a 50 million obs. dataset. The unstable sort is slightly slower (col. 1) or slightly faster (col. 2) than fsort. On the other hand, a stable sort is clearly slower than fsort (which always produces a stable sort).

Installation

Stable Version

Within Stata, type:

cap ado uninstall ftools
ssc install ftools

Dev Version

With Stata 13+, type:

cap ado uninstall ftools
net install ftools, from(https://github.com/sergiocorreia/ftools/raw/master/src/)

For older versions, first download and extract the zip file, and then run

cap ado uninstall ftools
net install ftools, from(SOME_FOLDER)

where SOME_FOLDER is the folder containing stata.toc and the related files.

Compiling the mata library

In case of a Mata error, try typing ftools, compile to recreate the Mata library (lftools.mlib).

Installing local versions

To install from a git fork, type something like:

cap ado uninstall ftools
net install ftools, from("C:/git/ftools/src")
ftools, compile

(changing C:/git/ to your own folder)

Dependencies

The fcollapse function requires the moremata package for some statistics, such as medians and percentiles:

ssc install moremata

Users of Stata 11 and 12 need to install the boottest package:

ssc install boottest

FAQ:

"What features is this missing?"

"How can this be faster than existing commands?"

Existing commands (e.g. sort) are often compiled and don't have to move data between Stata and Mata. However, they use less efficient algorithms, so for large enough datasets they are slower. In particular, creating identifiers can be an O(N) operation if we use hashes instead of sorting the data (see the help file). Similarly, once the identifiers are created, sorting other variables by these identifiers can be done as an O(N) operation instead of O(N log N).

"But I already tried to use Mata's asarray and it was much slower"

Mata's asarray() has a key problem: it is very slow with hash collisions (which occur a lot in this use case). Thus, I avoid using asarray() and instead use hash1() to build a hash table with open addressing (see a comparison between the two approaches here).
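As an illustration only (not the actual ftools implementation), open addressing with linear probing can be sketched in Mata as follows; the find_slot name and the empty-string sentinel are hypothetical choices, while hash1() and mod() are Mata built-ins:

```stata
mata:
// Return the slot for `key` in a size-n table with open addressing.
// On a collision, probe the next slot until we find the key or an
// empty slot ("" marks an empty slot).
real scalar find_slot(string vector keys, string scalar key, real scalar n)
{
    real scalar h
    h = hash1(key, n)           // initial bucket, in 1..n
    while (keys[h] != "" & keys[h] != key) {
        h = mod(h, n) + 1       // linear probing, wrapping around
    }
    return(h)
}
end
```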

Updates