A C++ implementation of a fast and memory efficient HAT-trie

A trie implementation based on the paper "HAT-trie: A Cache-conscious Trie-based Data Structure for Strings" (Nikolas Askitis and Ranjan Sinha, 2007). For now, only the pure HAT-trie has been implemented; the hybrid version may arrive later. Details regarding the HAT-trie data structure can be found here.

The library provides an efficient and compact way to store a set or a map of strings by compressing the common prefixes. It also allows searching for keys that match a prefix. Note though that the default parameters of the structure are geared toward optimizing exact searches; if you do a lot of prefix searches, you may want to reduce the burst threshold through the burst_threshold method.

It is a structure well adapted to storing a large number of strings.

<p align="center"> <img src="https://tessil.github.io/images/hat-trie.png" width="600px" /> </p>

For the array hash part, the array-hash project is used and included in the repository.

The library provides two classes: tsl::htrie_map and tsl::htrie_set.

Overview

Thread-safety and exception guarantees are similar to the STL containers.

Hash function

The default hash function used by the structure depends on the presence of std::string_view. If it is available, std::hash<std::string_view> is used, otherwise a simple FNV-1a hash function is used to avoid any dependency.
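As a sketch of what such a fallback looks like (the library's actual internal implementation may differ in its details), a 64-bit FNV-1a hash matching the `(const char*, std::size_t)` interface expected by the structure can be written as:

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of a 64-bit FNV-1a hash with the two-argument interface the trie
// expects. Each byte is XORed into the state, then multiplied by the FNV prime.
struct fnv1a_hash {
    std::size_t operator()(const char* key, std::size_t key_size) const {
        std::uint64_t hash = 14695981039346656037ULL;      // FNV-1a offset basis
        for(std::size_t i = 0; i < key_size; ++i) {
            hash ^= static_cast<unsigned char>(key[i]);
            hash *= 1099511628211ULL;                      // FNV-1a prime
        }
        return static_cast<std::size_t>(hash);
    }
};
```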

If you can't use C++17 or later, we recommend replacing the hash function with something like CityHash, MurmurHash, FarmHash, ... for better performance. In the tests we ran, CityHash64 offered a ~20% improvement on reads compared to FNV-1a.

#include <city.h>

struct str_hash {
    std::size_t operator()(const char* key, std::size_t key_size) const {
        return CityHash64(key, key_size);
    }
};

tsl::htrie_map<char, int, str_hash> map;

std::hash<std::string> can't be used efficiently as the structure doesn't store any std::string object: any time a hash was needed, a temporary std::string would have to be created.
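With C++17, the same two-argument interface can instead forward to std::hash&lt;std::string_view&gt;, hashing the raw (pointer, size) pair without materializing any std::string. A minimal sketch (the functor name is illustrative, not part of the library):

```cpp
#include <cstddef>
#include <functional>
#include <string_view>

// Wraps the raw (pointer, size) pair in a non-owning std::string_view and
// hashes it directly, avoiding the temporary std::string allocation that
// std::hash<std::string> would require.
struct sv_hash {
    std::size_t operator()(const char* key, std::size_t key_size) const {
        return std::hash<std::string_view>()(std::string_view(key, key_size));
    }
};
```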

Benchmark

Wikipedia dataset

The benchmark consists of inserting all the titles from the main namespace of the Wikipedia archive into the data structure, checking the memory used after the inserts (including potential memory fragmentation) and then searching for all the titles again in the data structure. The peak memory usage during the insert process is also measured with time(1).

Each title is associated with an int (32 bits). All the hash based structures use CityHash64 as hash function. For the tests marked with reserve, the reserve function is called beforehand to avoid any rehash.

Note that tsl::hopscotch_map, std::unordered_map, google::dense_hash_map and spp::sparse_hash_map use std::string as key which imposes a minimum size of 32 bytes (on x64) even if the key is only one character long. Other structures may be able to store one-character keys with 1 byte + 8 bytes for a pointer (on x64).

The benchmark was compiled with GCC 6.3 and ran on Debian Stretch x64 with an Intel i5-5200U and 8 GiB of RAM. The best of 20 runs was taken.

The code of the benchmark can be found on Gist.

Unsorted

The enwiki-20170320-all-titles-in-ns0.gz dataset is alphabetically sorted. For this benchmark, we first shuffle the dataset with shuf(1) to avoid the bias of a sorted dataset.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 405.22 | 402.25 | 643.10 | 250.87 |
| tsl::htrie_map <br/> max_load_factor=4 | HAT-trie | 471.85 | 468.50 | 638.66 | 212.90 |
| tsl::htrie_map <br/> max_load_factor=2 | HAT-trie | 569.76 | 566.52 | 630.61 | 201.10 |
| tsl::htrie_map <br/> max_load_factor=1 | HAT-trie | 713.44 | 709.81 | 645.76 | 190.87 |
| cedar::da | Double-array trie | 1269.68 | 1254.41 | 1102.93 | 557.20 |
| cedar::da ORDERED=false | Double-array trie | 1269.80 | 1254.41 | 1089.78 | 570.13 |
| cedar::da | Double-array reduced trie | 1183.07 | 1167.79 | 1076.68 | 645.79 |
| cedar::da ORDERED=false | Double-array reduced trie | 1183.14 | 1167.85 | 1065.43 | 641.98 |
| cedar::da | Double-array prefix trie | 498.69 | 496.54 | 1096.90 | 628.01 |
| cedar::da ORDERED=false | Double-array prefix trie | 498.65 | 496.60 | 1048.40 | 628.94 |
| hat-trie<sup>1</sup> (C) | HAT-trie | 504.07 | 501.50 | 917.49 | 261.00 |
| qp trie (C) | QP trie | 941.23 | 938.17 | 1349.25 | 1281.46 |
| crit-bit trie (C) | Crit-bit trie | 1074.96 | 1071.98 | 2930.42 | 2869.74 |
| JudySL (C) | Judy array | 631.09 | 628.37 | 884.29 | 803.58 |
| JudyHS (C) | Judy array | 723.44 | 719.47 | 476.79 | 417.15 |
| tsl::array_map | Array hash table | 823.54 | 678.73 | 603.94 | 138.24 |
| tsl::array_map <br/> with reserve | Array hash table | 564.26 | 555.91 | 249.52 | 128.28 |
| tsl::hopscotch_map | Hash table | 1325.83 | 1077.99 | 368.26 | 119.49 |
| tsl::hopscotch_map <br/> with reserve | Hash table | 1080.51 | 1077.98 | 240.58 | 119.91 |
| google::dense_hash_map | Hash table | 2319.40 | 1677.11 | 466.60 | 138.87 |
| google::dense_hash_map <br/> with reserve | Hash table | 1592.51 | 1589.99 | 259.56 | 120.40 |
| spp::sparse_hash_map | Sparse hash table | 918.67 | 917.10 | 769.00 | 175.59 |
| spp::sparse_hash_map <br/> with reserve | Sparse hash table | 913.35 | 910.65 | 427.22 | 159.08 |
| std::unordered_map | Hash table | 1249.05 | 1246.60 | 590.88 | 173.58 |
| std::unordered_map <br/> with reserve | Hash table | 1212.23 | 1209.71 | 350.33 | 178.70 |
  1. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.
Sorted

The keys are inserted and read in alphabetical order.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 396.10 | 393.22 | 255.76 | 68.08 |
| tsl::htrie_map <br/> max_load_factor=4 | HAT-trie | 465.02 | 461.80 | 248.88 | 59.23 |
| tsl::htrie_map <br/> max_load_factor=2 | HAT-trie | 543.99 | 541.21 | 230.13 | 53.50 |
| tsl::htrie_map <br/> max_load_factor=1 | HAT-trie | 692.29 | 689.70 | 243.84 | 49.22 |
| cedar::da | Double-array trie | 1269.58 | 1254.41 | 278.51 | 54.72 |
| cedar::da ORDERED=false | Double-array trie | 1269.66 | 1254.41 | 264.43 | 56.02 |
| cedar::da | Double-array reduced trie | 1183.01 | 1167.78 | 254.60 | 69.18 |
| cedar::da ORDERED=false | Double-array reduced trie | 1183.03 | 1167.78 | 241.45 | 69.67 |
| cedar::da | Double-array prefix trie | 621.59 | 619.38 | 246.88 | 57.83 |
| cedar::da ORDERED=false | Double-array prefix trie | 621.59 | 619.38 | 187.98 | 58.56 |
| hat-trie<sup>2</sup> (C) | HAT-trie | 521.25 | 518.52 | 503.01 | 86.40 |
| qp trie (C) | QP trie | 940.65 | 937.66 | 392.86 | 190.19 |
| crit-bit trie (C) | Crit-bit trie | 1074.87 | 1071.98 | 430.04 | 347.60 |
| JudySL (C) | Judy array | 616.95 | 614.27 | 279.07 | 114.47 |
| JudyHS (C) | Judy array | 722.29 | 719.47 | 439.66 | 372.25 |
| tsl::array_map | Array hash table | 826.98 | 682.99 | 612.31 | 139.16 |
| tsl::array_map <br/> with reserve | Array hash table | 565.37 | 555.35 | 246.55 | 126.32 |
| tsl::hopscotch_map | Hash table | 1331.87 | 1078.02 | 375.19 | 118.08 |
| tsl::hopscotch_map <br/> with reserve | Hash table | 1080.51 | 1077.97 | 238.93 | 117.20 |
| google::dense_hash_map | Hash table | 2325.27 | 1683.07 | 483.95 | 137.09 |
| google::dense_hash_map <br/> with reserve | Hash table | 1592.54 | 1589.99 | 257.22 | 113.71 |
| spp::sparse_hash_map | Sparse hash table | 920.96 | 918.70 | 772.03 | 176.64 |
| spp::sparse_hash_map <br/> with reserve | Sparse hash table | 914.84 | 912.47 | 422.85 | 158.73 |
| std::unordered_map | Hash table | 1249.09 | 1246.65 | 594.85 | 173.54 |
| std::unordered_map <br/> with reserve | Hash table | 1212.21 | 1209.71 | 347.40 | 176.49 |
  2. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.

Dr. Askitis dataset

The benchmark consists of inserting all the words from the "Distinct Strings" dataset of Dr. Askitis into the data structure, checking the memory used and then searching for all the words from the "Skew String Set 1" dataset (where a string can be present multiple times) in the data structure. Note that the strings in this dataset have quite short average and median key lengths (which may not be a realistic use case compared to the Wikipedia dataset used above). It's similar to the benchmark on the cedar homepage.

The benchmark protocol is the same as for the Wikipedia dataset.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 604.76 | 601.79 | 485.45 | 77.80 |
| tsl::htrie_map <br/> max_load_factor=4 | HAT-trie | 768.10 | 764.98 | 491.78 | 75.48 |
| tsl::htrie_map <br/> max_load_factor=2 | HAT-trie | 1002.42 | 999.34 | 496.78 | 72.53 |
| tsl::htrie_map <br/> max_load_factor=1 | HAT-trie | 1344.98 | 1341.97 | 520.66 | 72.45 |
| cedar::da | Double-array trie | 1105.45 | 1100.05 | 682.25 | 71.98 |
| cedar::da ORDERED=false | Double-array trie | 1105.47 | 1100.05 | 668.75 | 71.95 |
| cedar::da | Double-array reduced trie | 941.16 | 926.04 | 684.38 | 79.11 |
| cedar::da ORDERED=false | Double-array reduced trie | 941.16 | 925.98 | 672.14 | 79.02 |
| cedar::da | Double-array prefix trie | 714.58 | 712.59 | 831.71 | 75.83 |
| cedar::da ORDERED=false | Double-array prefix trie | 714.66 | 712.31 | 786.93 | 75.89 |
| hat-trie<sup>3</sup> (C) | HAT-trie | 786.93 | 784.32 | 743.34 | 93.58 |
| qp trie (C) | QP trie | 1800.02 | 1797.21 | 987.95 | 428.51 |
| crit-bit trie (C) | Crit-bit trie | 2210.52 | 2207.64 | 1986.19 | 1109.88 |
| JudySL (C) | Judy array | 1025.59 | 1023.11 | 535.02 | 202.36 |
| JudyHS (C) | Judy array | 1002.50 | 999.97 | 456.09 | 148.36 |
| tsl::array_map | Array hash table | 1308.08 | 1031.67 | 545.82 | 46.41 |
| tsl::array_map <br/> with reserve | Array hash table | 979.44 | 921.36 | 3244.19 | 45.74 |
| tsl::hopscotch_map | Hash table | 2336.39 | 1611.54 | 288.70 | 47.05 |
| tsl::hopscotch_map <br/> with reserve | Hash table | 1614.22 | 1611.64 | 220.67 | 46.39 |
| google::dense_hash_map | Hash table | 3913.64 | 2636.31 | 317.66 | 43.62 |
| google::dense_hash_map <br/> with reserve | Hash table | 2638.19 | 2635.68 | 227.58 | 43.09 |
| spp::sparse_hash_map | Sparse hash table | 1419.69 | 1417.61 | 586.26 | 56.00 |
| spp::sparse_hash_map <br/> with reserve | Sparse hash table | 1424.21 | 1421.69 | 392.76 | 55.73 |
| std::unordered_map | Hash table | 2112.66 | 2110.19 | 554.02 | 105.05 |
| std::unordered_map <br/> with reserve | Hash table | 2053.95 | 2051.67 | 309.06 | 109.89 |
  3. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.

Installation

To use the library, just add the include directory to your include path. It is a header-only library.

If you use CMake, you can also use the tsl::hat_trie exported target from the CMakeLists.txt with target_link_libraries.

# Example where the hat-trie project is stored in a third-party directory
add_subdirectory(third-party/hat-trie)
target_link_libraries(your_target PRIVATE tsl::hat_trie)  

The code should work with any C++11 standard-compliant compiler and has been tested with GCC 4.8.4, Clang 3.5.0 and Visual Studio 2015.

To run the tests you will need the Boost Test library and CMake.

git clone https://github.com/Tessil/hat-trie.git
cd hat-trie/tests
mkdir build
cd build
cmake ..
cmake --build .
./tsl_hat_trie_tests

Usage

The API can be found here. If std::string_view is available, the API changes slightly and can be found here.

Example

#include <iostream>
#include <string>
#include <tsl/htrie_map.h>
#include <tsl/htrie_set.h>


int main() {
    /*
     * Map of strings to int having char as character type. 
     * There is no support for wchar_t, char16_t or char32_t yet, 
     * but UTF-8 strings will work fine.
     */
    tsl::htrie_map<char, int> map = {{"one", 1}, {"two", 2}};
    map["three"] = 3;
    map["four"] = 4;
    
    map.insert("five", 5);
    map.insert_ks("six_with_extra_chars_we_ignore", 3, 6);
    
    map.erase("two");
    
    /*
     * Due to the compression on the common prefixes, the letters of the string 
     * are not always stored contiguously. When we retrieve the key, we have to 
     * construct it.
     * 
     * To avoid a heap-allocation at each iteration (when SSO doesn't occur), 
     * we reuse the key_buffer to construct the key.
     */
    std::string key_buffer;
    for(auto it = map.begin(); it != map.end(); ++it) {
        it.key(key_buffer);
        std::cout << "{" << key_buffer << ", " << it.value() << "}" << std::endl;
    }
    
    /*
     * If you don't care about the allocation.
     */
    for(auto it = map.begin(); it != map.end(); ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    
    
    
    tsl::htrie_map<char, int> map2 = {{"apple", 1}, {"mango", 2}, {"apricot", 3},
                                      {"mandarin", 4}, {"melon", 5}, {"macadamia", 6}};
    
    // Prefix search
    auto prefix_range = map2.equal_prefix_range("ma");
    
    // {mandarin, 4} {mango, 2} {macadamia, 6}
    for(auto it = prefix_range.first; it != prefix_range.second; ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    // Find longest match prefix.
    auto longest_prefix = map2.longest_prefix("apple juice");
    if(longest_prefix != map2.end()) {
        // {apple, 1}
        std::cout << "{" << longest_prefix.key() << ", " 
                  << *longest_prefix << "}" << std::endl;
    }
    
    // Prefix erase
    map2.erase_prefix("ma");
    
    // {apricot, 3} {melon, 5} {apple, 1}
    for(auto it = map2.begin(); it != map2.end(); ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    
    
    
    tsl::htrie_set<char> set = {"one", "two", "three"};
    set.insert({"four", "five"});
    
    // {one} {two} {five} {four} {three}
    for(auto it = set.begin(); it != set.end(); ++it) {
        it.key(key_buffer);
        std::cout << "{" << key_buffer << "}" << std::endl;
    }
} 

Serialization

The library provides an efficient way to serialize and deserialize a map or a set so that it can be saved to a file or sent over the network. To do so, it requires the user to provide a function object for both serialization and deserialization.

struct serializer {
    // Must support the following types for U: std::uint64_t, float and T if a map is used.
    template<typename U>
    void operator()(const U& value);
    void operator()(const CharT* value, std::size_t value_size);
};
struct deserializer {
    // Must support the following types for U: std::uint64_t, float and T if a map is used.
    template<typename U>
    U operator()();
    void operator()(CharT* value_out, std::size_t value_size);
};

Note that if compatibility across platforms is required, the implementation leaves the binary compatibility (endianness, float binary representation, size of int, ...) of the serialized types in the hands of the provided function objects.
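For instance, if files must be portable across endiannesses, the function objects can normalize to a fixed byte order themselves instead of writing the raw memory representation. A minimal sketch encoding std::uint64_t as little-endian bytes (the helper names are illustrative, not part of the library):

```cpp
#include <cstdint>
#include <vector>

// Encode a std::uint64_t as 8 little-endian bytes regardless of host
// endianness; a serializer function object could write these bytes
// instead of the raw memory representation of the value.
inline void write_le64(std::uint64_t value, std::vector<unsigned char>& out) {
    for(int i = 0; i < 8; ++i) {
        out.push_back(static_cast<unsigned char>(value >> (8 * i)));
    }
}

// Reassemble the value the same way on the deserialization side.
inline std::uint64_t read_le64(const unsigned char* in) {
    std::uint64_t value = 0;
    for(int i = 0; i < 8; ++i) {
        value |= static_cast<std::uint64_t>(in[i]) << (8 * i);
    }
    return value;
}
```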

More details regarding the serialize and deserialize methods can be found in the API.

#include <cassert>
#include <cstdint>
#include <fstream>
#include <type_traits>
#include <tsl/htrie_map.h>


class serializer {
public:
    serializer(const char* file_name) {
        m_ostream.exceptions(m_ostream.badbit | m_ostream.failbit);
        m_ostream.open(file_name);
    }
    
    template<class T,
             typename std::enable_if<std::is_arithmetic<T>::value>::type* = nullptr>
    void operator()(const T& value) {
        m_ostream.write(reinterpret_cast<const char*>(&value), sizeof(T));
    }
    
    void operator()(const char* value, std::size_t value_size) {
        m_ostream.write(value, value_size);
    }

private:
    std::ofstream m_ostream;
};

class deserializer {
public:
    deserializer(const char* file_name) {
        m_istream.exceptions(m_istream.badbit | m_istream.failbit | m_istream.eofbit);
        m_istream.open(file_name);
    }
    
    template<class T,
             typename std::enable_if<std::is_arithmetic<T>::value>::type* = nullptr>
    T operator()() {
        T value;
        m_istream.read(reinterpret_cast<char*>(&value), sizeof(T));
        
        return value;
    }
    
    void operator()(char* value_out, std::size_t value_size) {
        m_istream.read(value_out, value_size);
    }

private:
    std::ifstream m_istream;
};


int main() {
    const tsl::htrie_map<char, std::int64_t> map = {{"one", 1}, {"two", 2}, 
                                                    {"three", 3}, {"four", 4}};
    
    
    const char* file_name = "htrie_map.data";
    {
        serializer serial(file_name);
        map.serialize(serial);
    }
    
    {
        deserializer dserial(file_name);
        auto map_deserialized = tsl::htrie_map<char, std::int64_t>::deserialize(dserial);
        
        assert(map == map_deserialized);
    }
    
    {
        deserializer dserial(file_name);
        
        /**
         * If the serialized and deserialized maps are hash compatible (see the
         * conditions in the API), setting the argument to true speeds up the
         * deserialization process as we don't have to recalculate the hash of
         * each key. We also know how much space each bucket needs.
         */
        const bool hash_compatible = true;
        auto map_deserialized = 
            tsl::htrie_map<char, std::int64_t>::deserialize(dserial, hash_compatible);
        
        assert(map == map_deserialized);
    }
}
Serialization with Boost Serialization and compression with zlib

It's possible to use a serialization library to avoid some of the boilerplate if the types to serialize are more complex.

The following example uses Boost Serialization with the Boost zlib compression stream to reduce the size of the resulting serialized file.

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/serialization/split_free.hpp>
#include <boost/serialization/utility.hpp>
#include <cassert>
#include <cstdint>
#include <fstream>
#include <tsl/htrie_map.h>


template<typename Archive>
struct serializer {
    Archive& ar;
    
    template<typename T>
    void operator()(const T& val) { ar & val; }
    
    template<typename CharT>
    void operator()(const CharT* val, std::size_t val_size) {
        ar.save_binary(reinterpret_cast<const void*>(val), val_size*sizeof(CharT));
    }   
};

template<typename Archive>
struct deserializer {
    Archive& ar;
    
    template<typename T>
    T operator()() { T val; ar & val; return val; }
    
    template<typename CharT>
    void operator()(CharT* val_out, std::size_t val_size) {
        ar.load_binary(reinterpret_cast<void*>(val_out), val_size*sizeof(CharT));
    }  
};

namespace boost { namespace serialization {
template<class Archive, class CharT, class T>
void serialize(Archive & ar, tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    split_free(ar, map, version); 
}

template<class Archive, class CharT, class T>
void save(Archive & ar, const tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    serializer<Archive> serial{ar};
    map.serialize(serial);
}


template<class Archive, class CharT, class T>
void load(Archive & ar, tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    deserializer<Archive> deserial{ar};
    map = tsl::htrie_map<CharT, T>::deserialize(deserial);
}
}}


int main() {
    const tsl::htrie_map<char, std::int64_t> map = {{"one", 1}, {"two", 2}, 
                                                    {"three", 3}, {"four", 4}};
    
    
    const char* file_name = "htrie_map.data";
    {
        std::ofstream ofs;
        ofs.exceptions(ofs.badbit | ofs.failbit);
        ofs.open(file_name, std::ios::binary);
        
        boost::iostreams::filtering_ostream fo;
        fo.push(boost::iostreams::zlib_compressor());
        fo.push(ofs);
        
        boost::archive::binary_oarchive oa(fo);
        
        oa << map;
    }
    
    {
        std::ifstream ifs;
        ifs.exceptions(ifs.badbit | ifs.failbit | ifs.eofbit);
        ifs.open(file_name, std::ios::binary);
        
        boost::iostreams::filtering_istream fi;
        fi.push(boost::iostreams::zlib_decompressor());
        fi.push(ifs);
        
        boost::archive::binary_iarchive ia(fi);
     
        tsl::htrie_map<char, std::int64_t> map_deserialized;   
        ia >> map_deserialized;
        
        assert(map == map_deserialized);
    }
}

License

The code is licensed under the MIT license, see the LICENSE file for details.