
Author: Barak Shoshany | DOI: 10.1016/j.softx.2024.101687 | arXiv:2105.00613 | License: MIT | Language: C++17 / C++20 / C++23

BS::thread_pool: a fast, lightweight, modern, and easy-to-use C++17 / C++20 / C++23 thread pool library

By Barak Shoshany
Email: baraksh@gmail.com
Website: https://baraksh.com/
GitHub: https://github.com/bshoshany

This is the complete documentation for v5.0.0 of the library, released on 2024-12-19.

Introduction

Motivation

Multithreading is essential for modern high-performance computing. Since C++11, the C++ standard library has included built-in low-level multithreading support using constructs such as std::thread. However, std::thread creates a new thread each time it is called, which can have a significant performance overhead. Furthermore, it is possible to create more threads than the hardware can handle simultaneously, potentially resulting in a substantial slowdown.

The library presented here contains a C++ thread pool class, BS::thread_pool, which avoids these issues by creating a fixed pool of threads once and for all, and then continuously reusing the same threads to perform different tasks throughout the lifetime of the program. By default, the number of threads in the pool is equal to the maximum number of threads that the hardware can run in parallel.

The user submits tasks to be executed into a queue. Whenever a thread becomes available, it retrieves the next task from the queue and executes it. The pool optionally produces an std::future for each task, which allows the user to wait for the task to finish executing and/or obtain its eventual return value, if applicable. Threads and tasks are autonomously managed by the pool in the background, without requiring any input from the user aside from submitting the desired tasks.

The design of this library is guided by four important principles. First, compactness: the entire library consists of just one self-contained header file, with no other components or dependencies. Second, portability: the library only utilizes the C++ standard library, without relying on any compiler extensions or 3rd-party libraries, and is therefore compatible with any modern standards-conforming C++ compiler on any platform, as long as it supports C++17 or later. Third, ease of use: the library is extensively documented, and programmers of any level should be able to use it right out of the box.

The fourth and final guiding principle is performance: each and every line of code in this library was carefully designed with maximum performance in mind, and performance was tested and verified on a variety of compilers and platforms. Indeed, the library was originally designed for use in the author's own computationally-intensive scientific computing projects, running both on high-end desktop/laptop computers and high-performance computing nodes.

Among the available C++ thread pool libraries, BS::thread_pool occupies the crucial middle ground between small bare-bones thread pool classes that offer rudimentary functionality and are only suitable for simple programs, and very large libraries that offer many advanced features but consist of multiple components and dependencies and involve complex APIs that require a substantial time investment to learn. BS::thread_pool was designed for users who want a simple and lightweight header-only library that is easy to learn and use, and can be readily incorporated into existing or new projects, but do not want to compromise on performance or functionality.

Obtaining the library is quick and easy; it can be downloaded manually from the GitHub repository, or installed automatically using a variety of package managers and build systems. The library can be imported either as a traditional header-only library, or as a modern C++20 module. BS::thread_pool has undergone extensive testing on multiple platforms and is actively used by thousands of C++ developers worldwide for a wide range of applications, from scientific computing to game development.

Overview of features

Getting started

Installing the library

To install BS::thread_pool, simply download the latest release from the GitHub repository, place the header file BS_thread_pool.hpp from the include folder in the desired folder, and include it in your program:

#include "BS_thread_pool.hpp"

The thread pool will now be accessible via the BS::thread_pool class. For an even quicker installation, you can download the header file directly from the GitHub repository; no additional files are required, as the library is a single-header library.

This library is also available on various package managers and build systems, including vcpkg, Conan, Meson, and CMake. Please see below for more details.

If C++20 features are available, the library can also be imported as a C++20 module, in which case #include "BS_thread_pool.hpp" should be replaced with import BS.thread_pool;. This requires one additional file, and the module must be compiled before it can be used; please see detailed instructions below.

Compiling and compatibility

This library officially supports C++17, C++20, and C++23. If compiled with C++20 and/or C++23 support, the library will make use of newly available features for maximum performance and usability. However, the library is fully compatible with C++17, and should successfully compile on any C++17 standard-compliant compiler, on all operating systems and architectures for which such a compiler is available.

Compatibility was verified using the bundled test program BS_thread_pool_test.cpp, compiled using the bundled Python scripts test_all.py and compile_cpp.py with native extensions enabled, importing the library as a C++20 module where applicable, and importing the C++23 Standard Library as a module where applicable, on a 24-core (8P+16E) / 32-thread Intel i9-13900K CPU, using the following compilers, C++ standard libraries, and platforms:

As this library requires C++17 features, the code must be compiled with C++17 support:

For maximum performance, it is recommended to compile with all available compiler optimizations:

As an example, to compile the test program BS_thread_pool_test.cpp with compiler optimizations, it is recommended to use the following commands:

If your compiler and codebase support C++20 and/or C++23, it is recommended to enable them in order to allow the library access to the latest features:
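As a concrete sketch, the commands below show what these recommendations might look like with GCC or Clang on the command line; the exact flags depend on your compiler (for example, MSVC uses /std:c++17 or /std:c++20 together with /O2 instead):

```shell
# Sketch only: assumes GCC/Clang-style flags; adjust for your compiler and platform.
# Compile the test program with C++17 support and full optimizations:
g++ BS_thread_pool_test.cpp -std=c++17 -O3 -o BS_thread_pool_test
# If your compiler and codebase support C++20 or C++23, enable it instead:
g++ BS_thread_pool_test.cpp -std=c++23 -O3 -o BS_thread_pool_test
```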

In addition, if C++20 features are available, the library can be imported as a module; instructions for doing so are provided below.

Constructors

The default constructor creates a thread pool with as many threads as the hardware can handle concurrently, as reported by the implementation via std::thread::hardware_concurrency(). This is usually determined by the number of cores in the CPU. If a core is hyperthreaded, it will count as two threads. For example:

// Constructs a thread pool with as many threads as are available in the hardware.
BS::thread_pool pool;

Optionally, a number of threads different from the hardware concurrency can be specified as an argument to the constructor. However, note that adding more threads than the hardware can handle will not improve performance, and in fact will most likely hinder it. This option exists in order to allow using fewer threads than the hardware concurrency, in cases where you wish to leave some threads available for other processes. For example:

// Constructs a thread pool with only 12 threads.
BS::thread_pool pool(12);

Usually, when the thread pool is used, a program's main thread should only submit tasks to the thread pool and wait for them to finish, and should not perform any computationally intensive tasks on its own. If this is the case, it is recommended to use the default value for the number of threads. This ensures that all the threads available in the hardware will be put to work while the main thread waits.

However, if the main thread also performs computationally intensive tasks, it may be beneficial to use one fewer thread than the hardware concurrency, leaving one hardware thread available for the main thread. Furthermore, if more than one thread pool is used in the program simultaneously, the total number of threads across all pools should not exceed the hardware concurrency.

Getting and resetting the number of threads in the pool

The member function get_thread_count() returns the number of threads in the pool. This will be equal to std::thread::hardware_concurrency() if the default constructor was used.

It is generally unnecessary to change the number of threads in the pool after it has been created, since the whole point of a thread pool is that you only create the threads once. However, if needed, this can be done, safely and on-the-fly, using the reset() member function.

reset() will wait for all currently running tasks to be completed, but will leave the rest of the tasks in the queue. Then it will destroy the thread pool and create a new one with the desired new number of threads, as specified in the function's argument (or the hardware concurrency if no argument is given). The new thread pool will then resume executing the tasks that remained in the queue and any newly submitted tasks.

The member function get_thread_ids() returns a vector containing the unique identifiers of the pool's threads, as obtained by std::thread::get_id(). These values are not very useful on their own, but they can be used to identify and distinguish between threads, for example when allocating per-thread resources.

Submitting tasks to the queue

Submitting tasks with no arguments and receiving a future

In this section we will learn how to submit a task with no arguments, but potentially with a return value, to the queue. Once a task has been submitted, it will be executed as soon as a thread becomes available. Tasks are executed in the order that they were submitted (first-in, first-out), unless task priority is enabled (see below).

For example, if the pool has 8 threads and an empty queue, and we submitted 16 tasks, then we should expect the first 8 tasks to be executed in parallel, with the remaining tasks being picked up by the threads one by one as each thread finishes executing its first task, until no tasks are left in the queue.

The member function submit_task() is used to submit tasks to the queue. It takes exactly one input, the task to submit. This task must be a function with no arguments, but it can have a return value.

submit_task() returns an std::future associated with the task. If the submitted task has a return value of type T, then the future will be of type std::future<T>, and will be set to the task's return value when the task finishes executing. If the submitted task does not have a return value, then the future will be an std::future<void>, which does not contain any value, but may still be used to wait for the task to finish.

To wait until the task finishes, use the member function wait() of the future. To obtain the return value, use the member function get(), which will also automatically wait for the task to finish if it hasn't yet. Here is a simple example:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <future>             // std::future
#include <iostream>           // std::cout

int the_answer()
{
    return 42;
}

int main()
{
    BS::thread_pool pool;
    std::future<int> my_future = pool.submit_task(the_answer);
    std::cout << my_future.get() << '\n';
}

In this example we submitted the function the_answer(), which returns an int. The member function submit_task() of the pool therefore returned an std::future<int>. We then used the get() member function of the future to get the return value, and printed it out.

In addition to submitting a pre-defined function, we can also use a lambda expression to quickly define the task on-the-fly. Rewriting the previous example in terms of a lambda expression, we get:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <future>             // std::future
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool;
    std::future<int> my_future = pool.submit_task([]{ return 42; });
    std::cout << my_future.get() << '\n';
}

Here, the lambda expression []{ return 42; } has two parts:

  1. An empty capture clause, denoted by []. This signifies to the compiler that a lambda expression is being defined.
  2. A code block { return 42; } that simply returns the value 42.

It is generally simpler and faster to submit lambda expressions rather than pre-defined functions, especially due to the ability to capture local variables, which we will discuss in the next section.

Of course, tasks do not have to return values. In the following example, we submit a function with no return value and then use the future to wait for it to finish executing:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <chrono>             // std::chrono
#include <future>             // std::future
#include <iostream>           // std::cout
#include <thread>             // std::this_thread

int main()
{
    BS::thread_pool pool;
    const std::future<void> my_future = pool.submit_task(
        []
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
        });
    std::cout << "Waiting for the task to complete... ";
    my_future.wait();
    std::cout << "Done." << '\n';
}

Here we split the lambda into multiple lines to make it more readable. The command std::this_thread::sleep_for(std::chrono::milliseconds(500)) instructs the task to simply sleep for 500 milliseconds, simulating a computationally-intensive task.

Submitting tasks with arguments and receiving a future

As stated in the previous section, tasks submitted using submit_task() cannot have any arguments. However, it is easy to submit tasks with arguments, either by wrapping the function call in a lambda or by using lambda captures directly. The following is an example of submitting a pre-defined function with arguments by wrapping it in a lambda:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <future>             // std::future
#include <iostream>           // std::cout

double multiply(const double lhs, const double rhs)
{
    return lhs * rhs;
}

int main()
{
    BS::thread_pool pool;
    std::future<double> my_future = pool.submit_task(
        []
        {
            return multiply(6, 7);
        });
    std::cout << my_future.get() << '\n';
}

As you can see, to pass the arguments to multiply() we simply called multiply(6, 7) explicitly inside a lambda. If the arguments are not literals, we can use the lambda capture clause to capture the arguments from the local scope:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <future>             // std::future
#include <iostream>           // std::cout

double multiply(const double lhs, const double rhs)
{
    return lhs * rhs;
}

int main()
{
    BS::thread_pool pool;
    constexpr double first = 6;
    constexpr double second = 7;
    std::future<double> my_future = pool.submit_task(
        [first, second]
        {
            return multiply(first, second);
        });
    std::cout << my_future.get() << '\n';
}

We could even get rid of the multiply() function entirely and just put everything inside a lambda, if desired:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <future>             // std::future
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool;
    constexpr double first = 6;
    constexpr double second = 7;
    std::future<double> my_future = pool.submit_task(
        [first, second]
        {
            return first * second;
        });
    std::cout << my_future.get() << '\n';
}

Detaching and waiting for tasks

Usually, it is best to submit a task to the queue using submit_task(). This allows you to wait for the task to finish and/or get its return value later. However, sometimes a future is not needed, for example when you just want to "set and forget" a certain task, or if the task already communicates with the main thread or with other tasks without using futures, such as via condition variables.

In such cases, you may wish to avoid the overhead involved in assigning a future to the task, in order to increase performance. This is called "detaching" the task, as the task detaches from the main thread and runs independently.

Detaching tasks is done using the detach_task() member function, which adds a task to the queue without generating a future for it. As with submit_task(), the task must have no arguments, but you can pass arguments by wrapping the call in a lambda, as shown in the previous section. However, tasks executed via detach_task() cannot have a return value, as there would be no way for the main thread to retrieve that value.

Since detach_task() does not return a future, there is no built-in way for the user to know when the task finishes executing. You must manually ensure that the task finishes executing before trying to use anything that depends on its output. Otherwise, bad things will happen!

BS::thread_pool provides the member function wait() to facilitate waiting for all the tasks in the queue to complete, whether they were detached or submitted with a future. The wait() member function works similarly to the wait() member function of std::future. Consider, for example, the following code:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <chrono>             // std::chrono
#include <iostream>           // std::cout
#include <thread>             // std::this_thread

int main()
{
    BS::thread_pool pool;
    int result = 0;
    pool.detach_task(
        [&result]
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            result = 42;
        });
    std::cout << result << '\n';
}

This program first defines a local variable named result and initializes it to 0. It then detaches a task in the form of a lambda expression. Note that the lambda captures result by reference, as indicated by the & in front of it. This means that the task can modify result, and any such modification will be reflected in the main thread.

The task changes result to 42, but it first sleeps for 100 milliseconds. When the main thread prints out the value of result, the task has not yet had time to modify its value, since it is still sleeping. Therefore, the program will actually print out the initial value 0, which is not what we want.

To wait for the task to complete, we must use the wait() member function after detaching it:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <chrono>             // std::chrono
#include <iostream>           // std::cout
#include <thread>             // std::this_thread

int main()
{
    BS::thread_pool pool;
    int result = 0;
    pool.detach_task(
        [&result]
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            result = 42;
        });
    pool.wait();
    std::cout << result << '\n';
}

Now the program will print out the value 42, as expected. Note, however, that wait() will wait for all the tasks in the queue, including any other tasks that were potentially submitted before or after the one we care about. If we want to wait just for one task, submit_task() would be a better choice.

Waiting for submitted or detached tasks with a timeout

Sometimes you may wish to wait for the tasks to complete, but only for a certain amount of time, or until a specific point in time. For example, if the tasks have not yet completed after some time, you may wish to let the user know that there is a delay.

For tasks submitted with futures using submit_task(), this can be achieved using two member functions of std::future: wait_for(), which waits for a specified duration, and wait_until(), which waits until a specified point in time.

In both cases, the functions will return std::future_status::ready if the future is ready, meaning the task is finished and its return value, if any, has been obtained. However, they will return std::future_status::timeout if the future is not yet ready when the timeout has expired.

Here is an example:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <chrono>             // std::chrono
#include <future>             // std::future
#include <iostream>           // std::cout
#include <thread>             // std::this_thread

int main()
{
    BS::thread_pool pool;
    const std::future<void> my_future = pool.submit_task(
        []
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
            std::cout << "Task done!\n";
        });
    while (true)
    {
        if (my_future.wait_for(std::chrono::milliseconds(200)) != std::future_status::ready)
            std::cout << "Sorry, the task is not done yet.\n";
        else
            break;
    }
}

The output should look similar to this:

Sorry, the task is not done yet.
Sorry, the task is not done yet.
Sorry, the task is not done yet.
Sorry, the task is not done yet.
Task done!

For detached tasks, since we do not have futures for them, we cannot use this method. However, BS::thread_pool has two member functions, also named wait_for() and wait_until(), which similarly wait for a specified duration or until a specified time point, but do so for all tasks (whether submitted or detached). Instead of an std::future_status, the thread pool's wait functions return true if all tasks finished running, or false if the duration expired or the time point was reached but some tasks are still running.

Here is the same example as above, using detach_task() and pool.wait_for():

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <chrono>             // std::chrono
#include <iostream>           // std::cout
#include <thread>             // std::this_thread

int main()
{
    BS::thread_pool pool;
    pool.detach_task(
        []
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
            std::cout << "Task done!\n";
        });
    while (true)
    {
        if (!pool.wait_for(std::chrono::milliseconds(200)))
            std::cout << "Sorry, the task is not done yet.\n";
        else
            break;
    }
}

Class member functions as tasks

Let us consider the following program:

#include <iostream> // std::boolalpha, std::cout

class flag_class
{
public:
    [[nodiscard]] bool get_flag() const
    {
        return flag;
    }

    void set_flag(const bool arg)
    {
        flag = arg;
    }

private:
    bool flag = false;
};

int main()
{
    flag_class flag_object;
    flag_object.set_flag(true);
    std::cout << std::boolalpha << flag_object.get_flag() << '\n';
}

This program creates a new object flag_object of the class flag_class, sets the flag to true using the setter member function set_flag(), and then prints out the flag's value using the getter member function get_flag().

What if we want to submit the member function set_flag() as a task to the thread pool? We can simply wrap the entire statement flag_object.set_flag(true); in a lambda, and pass flag_object to the lambda by reference, as in the following example:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::boolalpha, std::cout

class flag_class
{
public:
    [[nodiscard]] bool get_flag() const
    {
        return flag;
    }

    void set_flag(const bool arg)
    {
        flag = arg;
    }

private:
    bool flag = false;
};

int main()
{
    BS::thread_pool pool;
    flag_class flag_object;
    pool.submit_task(
            [&flag_object]
            {
                flag_object.set_flag(true);
            })
        .wait();
    std::cout << std::boolalpha << flag_object.get_flag() << '\n';
}

Of course, this will also work with detach_task(), as long as we call wait() on the pool itself, since in that case there is no future to wait on.

Note that in this example, instead of getting a future from submit_task() and then waiting for that future, we simply called wait() on that future straight away. This is a common way of waiting for a task to complete if we have nothing else to do in the meantime. Note also that we passed flag_object by reference to the lambda, since we want to set the flag on that same object, not a copy of it.

Another thing you might want to do is call a member function from within the object itself, that is, from another member function. This follows a similar syntax, except that you must also capture this (i.e. a pointer to the current object) in the lambda. Here is an example:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::boolalpha, std::cout

BS::thread_pool pool;

class flag_class
{
public:
    [[nodiscard]] bool get_flag() const
    {
        return flag;
    }

    void set_flag(const bool arg)
    {
        flag = arg;
    }

    void set_flag_to_true()
    {
        pool.submit_task(
                [this]
                {
                    set_flag(true);
                })
            .wait();
    }

private:
    bool flag = false;
};

int main()
{
    flag_class flag_object;
    flag_object.set_flag_to_true();
    std::cout << std::boolalpha << flag_object.get_flag() << '\n';
}

Note that in this example we defined the thread pool as a global object, so that it is accessible outside the main() function. Although we could have, in theory, passed a reference to the thread pool in our call to set_flag_to_true(), that would be very cumbersome to do if multiple different functions need to use the same thread pool. Defining the thread pool as a global object is common practice, as it allows all functions to access the same thread pool without having to pass it around as an argument.

Parallelizing loops

Automatic parallelization of loops

One of the most common and effective methods of parallelization is splitting a loop into smaller sub-loops and running them in parallel. It is most effective in "embarrassingly parallel" computations, such as vector or matrix operations, where each iteration of the loop is completely independent of every other iteration.

For example, if we are summing up two vectors of 1000 elements each, and we have 10 threads, we could split the summation into 10 blocks of 100 elements each, and run all the blocks in parallel, potentially increasing performance by up to a factor of 10.

BS::thread_pool can automatically parallelize loops, making it very easy to implement many parallel algorithms without having to worry about the details. To see how this works, consider the following generic loop:

for (T i = start; i < end; ++i)
    loop(i);

where:

  - T is the type of the loop indices;
  - start is the first index in the range;
  - end is the index after the last index, i.e. the loop runs over the half-open range [start, end);
  - loop() is a function that executes the body of the loop for a given index i.

This loop may be automatically parallelized and submitted to the thread pool's queue using the member function submit_loop(), which has the following syntax:

pool.submit_loop(start, end, loop, num_blocks);

where:

  - start, end, and loop are as defined above;
  - num_blocks is the number of blocks to split the loop into; this argument is optional, and if omitted, the number of blocks defaults to the number of threads in the pool.

The thread pool's internal algorithm ensures that each of the blocks has one of two sizes, differing by 1, with the larger blocks always first, so that the tasks are as evenly distributed as possible, to optimize performance. For example, if the range [0, 100) is split into 15 blocks, the result will be 10 blocks of size 7, which will be submitted first, and 5 blocks of size 6.

Each block will be submitted to the thread pool's queue as a separate task. Therefore, a loop that is split into 3 blocks will be split into 3 individual tasks, which may run in parallel. If there is only one block, then the entire loop will run as one task, and no parallelization will take place.

To parallelize the generic loop above, we use the following commands:

BS::multi_future<void> loop_future = pool.submit_loop(start, end, loop, num_blocks);
loop_future.wait();

submit_loop() returns an object of the helper class BS::multi_future<T>. This is essentially a specialization of std::vector<std::future<T>> with additional member functions. Each of the num_blocks blocks will have an std::future<T> assigned to it, and all these futures will be stored inside the returned BS::multi_future<T>. When loop_future.wait() is called, the main thread will wait until all tasks generated by submit_loop() finish executing, and only those tasks - not any other tasks that also happen to be in the queue. This is essentially the role of the BS::multi_future<T> class: to wait for a specific group of tasks, in this case the tasks running the loop blocks.

As a simple example, the following code calculates and prints a table of squares of all integers from 0 to 99:

#include <cstddef>  // std::size_t
#include <iomanip>  // std::setw
#include <iostream> // std::cout

int main()
{
    constexpr std::size_t max = 100;
    std::size_t squares[max];
    for (std::size_t i = 0; i < max; ++i)
        squares[i] = i * i;
    for (std::size_t i = 0; i < max; ++i)
        std::cout << std::setw(2) << i << "^2 = " << std::setw(4) << squares[i] << ((i % 5 != 4) ? " | " : "\n");
}

We can parallelize it as follows:

#include "BS_thread_pool.hpp" // BS::multi_future, BS::thread_pool
#include <cstddef>            // std::size_t
#include <iomanip>            // std::setw
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool(10);
    constexpr std::size_t max = 100;
    std::size_t squares[max];
    const BS::multi_future<void> loop_future = pool.submit_loop(0, max,
        [&squares](const std::size_t i)
        {
            squares[i] = i * i;
        });
    loop_future.wait();
    for (std::size_t i = 0; i < max; ++i)
        std::cout << std::setw(2) << i << "^2 = " << std::setw(4) << squares[i] << ((i % 5 != 4) ? " | " : "\n");
}

Since there are 10 threads, and we omitted the num_blocks argument, the loop will be divided into 10 blocks, each calculating 10 squares.

As a side note, notice that here we parallelized the calculation of the squares, but we did not parallelize printing the results. This is for two reasons:

  1. We want to print out the squares in ascending order, and we have no guarantee that the blocks will be executed in the correct order. This is very important; you must never expect the parallelized loop to execute in the same order as the non-parallelized loop.
  2. If we did print out the squares from within the parallel tasks, we would get a huge mess, since all 10 blocks would print to the standard output at once. Later we will see how to synchronize printing to a stream from multiple tasks at the same time.

Optimizing the number of blocks

The most important factor to consider when parallelizing loops is the number of blocks num_blocks to split the loop into. Naively, it may seem that the number of blocks should simply be equal to the number of threads in the pool, but that is usually not the optimal choice. Inevitably, some blocks will finish before other blocks; if there is only one block per thread, then any threads that have already finished executing their blocks will remain idle until the rest of the blocks are done, wasting many CPU cycles.

It is therefore generally better to use a larger number of blocks than the number of threads, to ensure that all threads work at maximum capacity. On the other hand, parallelization with too many blocks will eventually suffer from diminishing returns due to increased overhead. A good rule of thumb is to use a number of blocks equal to the square of the number of threads, but this is not necessarily the optimal number in all cases.

In the end, the optimal number of blocks will always depend on the specific algorithm being parallelized and the total number of indices in the loop, and may differ between different compilers, operating systems, and hardware configurations. For best performance, it is strongly recommended to do your own benchmarks to find the optimal number of blocks for your particular use case; see the benchmarks code in the bundled test program for an example of how to do this.

Finally, note that the discussion here only pertains to situations where the parallelized loop is the only thing running in the pool. If there are many other tasks running in parallel from other sources, then you probably do not need to worry about idle time, since the threads will be kept busy by the other tasks anyway.

Common index types

Let us now consider a subtlety regarding the types of the start and end indices. In the example above, the start index is 0, which is of type int, while the end index is max, which is of type std::size_t. These two types are not compatible, as they differ in both signedness and (on a 64-bit system) bit width. In such cases, submit_loop() uses a custom type trait, BS::common_index_type, to determine the common type of the indices.

The common index type of two signed integers or two unsigned integers is the larger of the two types, while the common index type of a signed and an unsigned integer is a signed type that can hold the full ranges of both. (This is in contrast to std::common_type, which would choose the unsigned type in the latter case, causing a loop with a negative start index and an unsigned end index to fail due to integer overflow.)

The exception to this rule is when one of the integers is a 64-bit unsigned integer, and the other is a signed integer (of any bit width), since there is no fundamental signed type that can hold the full ranges of both integers. In this case, we choose a 64-bit unsigned integer as the common index type, since the most common scenario where this might happen is when the indices go from 0 to an index of type std::size_t - as in our example in the previous section.

However, it is important to note that this will fail if the first index is in fact negative. Therefore, in the edge case where one index is a negative integer and the other is of an unsigned 64-bit integer type such as std::size_t, the user must cast both indices explicitly to the desired common type; in all other cases, this is handled automatically behind the scenes by BS::common_index_type.

Parallelizing loops without futures

Just as in the case of detach_task() vs. submit_task(), sometimes you may want to parallelize a loop, but you don't need it to return a BS::multi_future. In this case, you can save the overhead of generating the futures (which can be significant, depending on the number of blocks) by using detach_loop() instead of submit_loop(), with the same arguments.

For example, we could detach the loop of squares example above as follows:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <cstddef>            // std::size_t
#include <iomanip>            // std::setw
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool(10);
    constexpr std::size_t max = 100;
    std::size_t squares[max];
    pool.detach_loop(0, max,
        [&squares](const std::size_t i)
        {
            squares[i] = i * i;
        });
    pool.wait();
    for (std::size_t i = 0; i < max; ++i)
        std::cout << std::setw(2) << i << "^2 = " << std::setw(4) << squares[i] << ((i % 5 != 4) ? " | " : "\n");
}

Warning: Since detach_loop() does not return a BS::multi_future, there is no built-in way for the user to know when the loop finishes executing. You must use either wait() as we did here, or some other method such as condition variables, to ensure that the loop finishes executing before trying to use anything that depends on its output. Otherwise, bad things will happen! If the loop is the only thing running in the pool, then generally detach_loop() followed by wait() is the optimal choice in terms of performance.

Parallelizing individual indices vs. blocks

We have seen that detach_loop() and submit_loop() execute the function loop(i) for each index i in the loop. However, behind the scenes, the loop is split into blocks, and each block executes the loop() function multiple times. Each block has an internal loop of the form (where T is the type of the indices):

for (T i = start; i < end; ++i)
    loop(i);

The start and end indices of each block are determined automatically by the pool. For example, in the previous section, the loop from 0 to 100 was split into 10 blocks of 10 indices each: start = 0 to end = 10, start = 10 to end = 20, and so on; the blocks are not inclusive of the last index, since the for loop has the condition i < end and not i <= end.

However, this also means that the loop() function is executed multiple times per block. This generates additional overhead due to the multiple function calls. For short loops, this should not affect performance. However, for very long loops, with millions of indices, the performance cost may be significant.

For this reason, the thread pool library provides two additional member functions for parallelizing loops: detach_blocks() and submit_blocks(). While detach_loop() and submit_loop() execute a function loop(i) once per index but multiple times per block, detach_blocks() and submit_blocks() execute a function block(start, end) only once per block.

The main advantage of this method is increased performance, but the main disadvantage is slightly more complicated code. In particular, the user must define the loop from start to end manually within each block, ensuring that all the indices in the block are handled. Here is the previous example again, this time using detach_blocks():

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <cstddef>            // std::size_t
#include <iomanip>            // std::setw
#include <iostream>           // std::cout

int main()
{
    BS::thread_pool pool(10);
    constexpr std::size_t max = 100;
    std::size_t squares[max];
    pool.detach_blocks(0, max,
        [&squares](const std::size_t start, const std::size_t end)
        {
            for (std::size_t i = start; i < end; ++i)
                squares[i] = i * i;
        });
    pool.wait();
    for (std::size_t i = 0; i < max; ++i)
        std::cout << std::setw(2) << i << "^2 = " << std::setw(4) << squares[i] << ((i % 5 != 4) ? " | " : "\n");
}

Note how the block function takes two arguments, and includes the internal loop. Also, since we are using detach_blocks(), we must wait for the loop to finish executing using wait(). Alternatively, we could have used submit_blocks() and waited on the returned BS::multi_future<void> object.

Generally, compiler optimizations should be able to make detach_loop() and submit_loop() perform roughly the same as detach_blocks() and submit_blocks(). However, since detach_blocks() and submit_blocks() avoid the per-index function calls entirely, they will never be slower, at the cost of being slightly more complicated to use. In addition, having low-level control of each block allows for further optimizations, such as allocating resources per block instead of per index. As usual, you should perform your own benchmarks to see which option works best for your particular use case.

Loops with return values

As mentioned above, unlike submit_task(), the member function submit_loop() only takes loop functions with no return value. The reason is that each block is running the loop function multiple times, so a return value would not make sense. In contrast, submit_blocks() allows the block function to have a return value, as each block can return a unique value.

The block function will be executed once for each block, but the blocks are managed by the thread pool, with the user only able to select the number of blocks, but not the range of each block. Therefore, there is limited usability in returning one value per block. However, for cases where this is desired, such as for summation or some sorting algorithms, submit_blocks() does accept functions with return values, in which case it returns a BS::multi_future<T> object where T is the type of the return value.

Here's an example of a function template summing all elements of type T in a given range:

#include "BS_thread_pool.hpp" // BS::multi_future, BS::thread_pool
#include <cstdint>            // std::uint64_t
#include <future>             // std::future
#include <iostream>           // std::cout

BS::thread_pool pool;

template <typename T>
T sum(T min, T max)
{
    BS::multi_future<T> loop_future = pool.submit_blocks(
        min, max + 1,
        [](const T start, const T end)
        {
            T block_total = 0;
            for (T i = start; i < end; ++i)
                block_total += i;
            return block_total;
        },
        100);
    T result = 0;
    for (std::future<T>& future : loop_future)
        result += future.get();
    return result;
}

int main()
{
    std::cout << sum<std::uint64_t>(1, 1'000'000);
}

Note that we needed to specify the type T explicitly as std::uint64_t, that is, an unsigned 64-bit integer, as the result, 500,000,500,000, would not fit in a 32-bit integer.

Here we used the fact that BS::multi_future<T> is a specialization of std::vector<std::future<T>>, so we can use a range-based for loop to iterate over the futures, and use the get() member function of each future to get its value. The values of the futures will be the partial sums from each block, so when we add them up, we will get the total sum. Note that we divided the loop into 100 blocks, so there will be 100 futures in total, each with the partial sum of 10,000 numbers.

The range-based for loop will likely start before the loop finishes executing, and each time it calls get() on a future, it will get the value of that future if it is ready, or it will wait until the future is ready and then get the value. This increases performance, since we can start summing the results without waiting for the entire loop to finish executing first - we only need to wait for individual blocks.

If we did want to wait until the entire loop finishes before summing the results, we could have used the get() member function of the BS::multi_future<T> object itself, which returns an std::vector<T> with the values obtained from each future. In that case, the sum could be obtained after calling submit_blocks(), for example using std::reduce, as follows:

#include "BS_thread_pool.hpp" // BS::multi_future, BS::thread_pool
#include <cstdint>            // std::uint64_t
#include <iostream>           // std::cout
#include <numeric>            // std::reduce
#include <vector>             // std::vector

BS::thread_pool pool;

template <typename T>
T sum(T min, T max)
{
    BS::multi_future<T> loop_future = pool.submit_blocks(
        min, max + 1,
        [](const T start, const T end)
        {
            T block_total = 0;
            for (T i = start; i < end; ++i)
                block_total += i;
            return block_total;
        },
        100);
    std::vector<T> partial_sums = loop_future.get();
    T result = std::reduce(partial_sums.begin(), partial_sums.end());
    return result;
}

int main()
{
    std::cout << sum<std::uint64_t>(1, 1'000'000);
}

Parallelizing sequences

The member functions detach_loop(), submit_loop(), detach_blocks(), and submit_blocks() parallelize a loop by splitting it into blocks and submitting each block as an individual task to the queue; each such task then iterates over all the indices in its block's range, which can be numerous. However, sometimes we have a loop with a small number of indices, or more generally, a sequence of tasks enumerated by some index. In such cases, we can avoid the overhead of splitting into blocks and simply submit each individual index as its own independent task to the pool's queue.

This can be done with detach_sequence() and submit_sequence(). The syntax of these functions is similar to detach_loop() and submit_loop(), except that they don't have the num_blocks argument at the end. The sequence function must take only one argument, the index.

As usual, detach_sequence() detaches the tasks and does not return a future, so you must use wait() if you need to wait for the entire sequence to finish executing, while submit_sequence() returns a BS::multi_future. If the tasks in the sequence return values, then the futures will contain those values, otherwise they will be void futures.

Here is a simple example, where each task in the sequence calculates the factorial of its index:

#include "BS_thread_pool.hpp" // BS::multi_future, BS::thread_pool
#include <cstdint>            // std::uint64_t
#include <iostream>           // std::cout
#include <vector>             // std::vector

std::uint64_t factorial(const std::uint64_t n)
{
    std::uint64_t result = 1;
    for (std::uint64_t i = 2; i <= n; ++i)
        result *= i;
    return result;
}

int main()
{
    BS::thread_pool pool;
    constexpr std::uint64_t max = 20;
    BS::multi_future<std::uint64_t> sequence_future = pool.submit_sequence(0, max + 1, factorial);
    std::vector<std::uint64_t> factorials = sequence_future.get();
    for (std::uint64_t i = 0; i < max + 1; ++i)
        std::cout << i << "! = " << factorials[i] << '\n';
}

Note how the factorials of each index are stored in the BS::multi_future, and can be obtained as a vector using get(); each element of the vector is equal to the factorial of the element's index, calculated by its own individual task in the sequence.

Warning: Since each index in the sequence will be submitted as a separate task, detach_sequence() and submit_sequence() should only be used if the number of indices is small (say, within 1-2 orders of magnitude of the number of threads), and each index performs a substantial computation on its own. If you submit a sequence of 1 million indices, each performing a 1 ms calculation, the overhead of submitting each index as a separate task would far outweigh the benefits of parallelization.

More about BS::multi_future

The helper class BS::multi_future<T>, which we have been using throughout this section, provides a convenient way to collect and access groups of futures. While a BS::multi_future<T> object is created automatically by the pool when parallelizing loops, you can also use it to store futures manually, such as those obtained from submit_task() or by other means. BS::multi_future<T> is a specialization of std::vector<std::future<T>>, so it should be used in a similar way.

However, BS::multi_future<T> also has additional member functions aimed specifically at handling groups of futures, such as wait(), which waits for all the futures at once, and get(), which returns an std::vector<T> containing the values obtained from each future.

Aside from using BS::multi_future<T> to track the execution of parallelized loops, it can also be used, for example, whenever you have several different groups of tasks and you want to track the execution of each group individually.

Utility classes

Synchronizing printing to a stream with BS::synced_stream

When printing to an output stream from multiple threads in parallel, the output may become garbled. For example, try running this code:

#include "BS_thread_pool.hpp" // BS::thread_pool
#include <iostream>           // std::cout

BS::thread_pool pool;

int main()
{
    pool.submit_sequence(0, 5,
            [](const unsigned int i)
            {
                std::cout << "Task no. " << i << " executing.\n";
            })
        .wait();
}

The output will be a mess similar to this:

Task no. Task no. Task no. 3 executing.
0 executing.
Task no. 41 executing.
Task no. 2 executing.
 executing.

The reason is that, although each individual insertion to std::cout is thread-safe, there is no mechanism in place to ensure subsequent insertions from the same thread are printed contiguously.

The thread pool utility class BS::synced_stream is designed to eliminate such synchronization issues. The stream to print to should be passed as a constructor argument. If no argument is supplied, std::cout will be used:

// Construct a synced stream that will print to std::cout.
BS::synced_stream sync_out;
// Construct a synced stream that will print to the output stream my_stream.
BS::synced_stream sync_out(my_stream);

The member function print() takes an arbitrary number of arguments, which are inserted into the stream one by one, in the order they were given. println() does the same, but also prints a newline character \n at the end, for convenience. A mutex is used to synchronize this process, so that any other calls to print() or println() using the same BS::synced_stream object must wait until the previous call has finished.

As an example, this code:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    pool.submit_sequence(0, 5,
            [](const unsigned int i)
            {
                sync_out.println("Task no. ", i, " executing.");
            })
        .wait();
}

Will print out:

Task no. 0 executing.
Task no. 1 executing.
Task no. 2 executing.
Task no. 3 executing.
Task no. 4 executing.

Warning: Always create the BS::synced_stream object before the BS::thread_pool object, as we did in this example. When the BS::thread_pool object goes out of scope, it waits for the remaining tasks to be executed. If the BS::synced_stream object goes out of scope before the BS::thread_pool object, then any tasks using the BS::synced_stream will crash. Since objects are destructed in the opposite order of construction, creating the BS::synced_stream object before the BS::thread_pool object ensures that the BS::synced_stream is always available to the tasks, even while the pool is destructing.

Most stream manipulators defined in the headers <ios> and <iomanip>, such as std::setw (set the character width of the next output), std::setprecision (set the precision of floating point numbers), and std::fixed (display floating point numbers with a fixed number of digits), can be passed as arguments to print() and println(), and will have the same effect as inserting them to the associated stream.

The only exceptions are the flushing manipulators std::endl and std::flush, which will not work because the compiler will not be able to figure out which template specializations to use. Instead, use BS::synced_stream::endl and BS::synced_stream::flush. Here is an example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <cmath>              // std::sqrt
#include <iomanip>            // std::setprecision, std::setw
#include <ios>                // std::fixed

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    sync_out.print(std::setprecision(10), std::fixed);
    pool.submit_sequence(0, 16,
            [](const unsigned int i)
            {
                sync_out.print("The square root of ", std::setw(2), i, " is ", std::sqrt(i), ".", BS::synced_stream::endl);
            })
        .wait();
}

Note, however, that BS::synced_stream::endl should only be used if flushing is desired; otherwise, a newline character should be used instead. As with std::endl, using BS::synced_stream::endl too often will cause a performance hit, as it will force the stream to flush the buffer every time it is called.

If desired, BS::synced_stream can also synchronize printing into more than one stream at a time. To facilitate this, we can pass a list of output streams to the constructor. For example, the following program will print the same output to both std::cout and a log file:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <fstream>            // std::ofstream
#include <iostream>           // std::cout

BS::thread_pool pool;

int main()
{
    std::ofstream log_file("task.log");
    BS::synced_stream sync_out(std::cout, log_file);
    pool.submit_sequence(0, 5,
            [&sync_out](const unsigned int i)
            {
                sync_out.println("Task no. ", i, " executing.");
            })
        .wait();
}

Note that we must wait on the future before the main() function ends, as otherwise the log file may be destructed before the tasks finish executing. If we used detach_sequence(), which does not return a future, we would have to call pool.wait() in the last line.

In this example we did not create the BS::synced_stream as a global object, since we wanted to pass the log file as a stream to the constructor. However, it is also possible to add streams to or remove streams from an existing BS::synced_stream object using the member functions add_stream() and remove_stream(). For example, in the following program, we create a BS::synced_stream global object with the default constructor, so that it prints to std::cout, but then we change our minds, remove std::cout from the list of streams, and add a log file instead:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <fstream>            // std::ofstream
#include <iostream>           // std::cout

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    std::ofstream log_file("task.log");
    sync_out.remove_stream(std::cout);
    sync_out.add_stream(log_file);
    pool.submit_sequence(0, 5,
            [](const unsigned int i)
            {
                sync_out.println("Task no. ", i, " executing.");
            })
        .wait();
}

It is common practice to create a global BS::synced_stream object, so that it can be accessed from anywhere in the program, without having to pass it to every function that might want to print something to the stream. However, if you also have a global BS::thread_pool object, you must always make sure to define the global BS::synced_stream object before the global BS::thread_pool object, for the reasons explained in the warning above.

Internally, BS::synced_stream keeps the streams in an std::vector<std::ostream*>. The order in which the streams are added is also the order in which they will be printed to. For more precise control, you can use the member function get_streams() to get a reference to this vector, and manipulate it directly as you see fit.

Synchronizing tasks with BS::counting_semaphore and BS::binary_semaphore

The thread pool library provides two utility classes, BS::counting_semaphore and BS::binary_semaphore, which offer versatile synchronization primitives that can be used to synchronize tasks in a variety of ways. These classes are equivalent to the C++20 std::counting_semaphore and std::binary_semaphore, respectively, but are offered in the library as convenience polyfills for projects based on C++17. If C++20 features are available, the polyfills are not used, and instead are just aliases for the standard library classes.

Since BS::counting_semaphore and BS::binary_semaphore are identical in functionality to their standard library counterparts, we will not explain how to use them here. Instead, the user is referred to cppreference.com.

Managing tasks

Monitoring the tasks

Sometimes you may wish to monitor what is happening with the tasks you submitted to the pool. This may be done using these three member functions:

- get_tasks_queued() reports the number of tasks currently waiting in the queue.
- get_tasks_running() reports the number of tasks currently being executed by the threads.
- get_tasks_total() reports the total number of unfinished tasks, whether queued or running.

Note that get_tasks_total() == get_tasks_queued() + get_tasks_running(). These functions are demonstrated in the following program:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <chrono>             // std::chrono
#include <thread>             // std::this_thread

BS::synced_stream sync_out;
BS::thread_pool pool(4);

void sleep_half_second(const unsigned int i)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    sync_out.println("Task ", i, " done.");
}

void monitor_tasks()
{
    sync_out.println(pool.get_tasks_total(), " tasks total, ", pool.get_tasks_running(), " tasks running, ", pool.get_tasks_queued(), " tasks queued.");
}

int main()
{
    pool.wait();
    pool.detach_sequence(0, 12, sleep_half_second);
    monitor_tasks();
    std::this_thread::sleep_for(std::chrono::milliseconds(750));
    monitor_tasks();
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    monitor_tasks();
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    monitor_tasks();
    pool.wait();
}

Assuming you have at least 4 hardware threads (so that 4 tasks can run concurrently), the output should be similar to:

12 tasks total, 0 tasks running, 12 tasks queued.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
8 tasks total, 4 tasks running, 4 tasks queued.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
4 tasks total, 4 tasks running, 0 tasks queued.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.
0 tasks total, 0 tasks running, 0 tasks queued.

The reason we called pool.wait() in the beginning is that when the thread pool is created, an initialization task runs in each thread, so if we don't wait, the first line would say there are 16 tasks in total, including the 4 initialization tasks. See below for more details. Of course, we also called pool.wait() at the end to ensure that all tasks have finished executing before the program ends.

Purging tasks

Consider a situation where the user cancels a multithreaded operation while it is still ongoing. Perhaps the operation was split into multiple tasks, and half of the tasks are currently being executed by the pool's threads, but the other half are still waiting in the queue.

The thread pool cannot terminate the tasks that are already running, as C++ does not provide that functionality (and in any case, abruptly terminating a task while it's running could have extremely bad consequences, such as memory leaks and data corruption). However, the tasks that are still waiting in the queue can be purged using the purge() member function.

Once purge() is called, any tasks still waiting in the queue will be discarded, and will never be executed by the threads. Please note that there is no way to restore the purged tasks; they are gone forever!

Consider for example the following program:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <chrono>             // std::chrono
#include <thread>             // std::this_thread

BS::synced_stream sync_out;
BS::thread_pool pool(4);

int main()
{
    pool.detach_sequence(0, 8,
        [](const unsigned int i)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            sync_out.println("Task ", i, " done.");
        });
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    pool.purge();
    pool.wait();
}

The program submits 8 tasks to the queue. Each task waits 100 milliseconds and then prints a message. The thread pool has 4 threads, so it will execute the first 4 tasks in parallel, and then the remaining 4. We wait 50 milliseconds, to ensure that the first 4 tasks have all started running. Then we call purge() to purge the remaining 4 tasks. As a result, these tasks never get executed. However, since the first 4 tasks are still running when purge() is called, they will finish uninterrupted; purge() only discards tasks that have not yet started running. The output of the program therefore only contains the messages from the first 4 tasks:

Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.

Please note that, as explained above, the thread pool cannot terminate running tasks on its own. If you need to do that, you must incorporate a mechanism into the task itself that will terminate the task safely. For example, you could create an atomic flag that the task checks periodically, terminating itself if the flag is set. Here is a simple example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <atomic>             // std::atomic
#include <chrono>             // std::chrono
#include <thread>             // std::this_thread

BS::synced_stream sync_out;
BS::thread_pool pool(4);

int main()
{
    std::atomic<bool> stop_flag = false;
    pool.detach_sequence(0, 8,
        [&stop_flag](const unsigned int i)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            if (stop_flag)
                return;
            sync_out.println("Task ", i, " done.");
        });
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    stop_flag = true;
    pool.purge();
    pool.wait();
}

This program will not print out any output, as the tasks will terminate themselves prematurely when stop_flag is set to true. In this case, we did not have to call purge(), but by doing so we prevented the other 4 tasks from being executed for no reason.

Exception handling

submit_task() catches any exceptions thrown by the submitted task and forwards them to the corresponding future. They can then be caught when invoking the get() member function of the future. For example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <exception>          // std::exception
#include <future>             // std::future
#include <stdexcept>          // std::runtime_error

BS::synced_stream sync_out;
BS::thread_pool pool;

double inverse(const double x)
{
    if (x == 0)
        throw std::runtime_error("Division by zero!");
    return 1 / x;
}

int main()
{
    constexpr double num = 0;
    std::future<double> my_future = pool.submit_task(
        [num]
        {
            return inverse(num);
        });
    try
    {
        const double result = my_future.get();
        sync_out.println("The inverse of ", num, " is ", result, ".");
    }
    catch (const std::exception& e)
    {
        sync_out.println("Caught exception: ", e.what());
    }
}

The output will be:

Caught exception: Division by zero!

However, if you change num to any non-zero number, no exceptions will be thrown and the inverse will be printed.

It is important to note that wait() does not throw any exceptions; only get() does. Therefore, even if your task does not return anything, i.e. your future is an std::future<void>, you must still call get() on that future if you want to catch any exceptions thrown by the task. Here is an example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <exception>          // std::exception
#include <future>             // std::future
#include <stdexcept>          // std::runtime_error

BS::synced_stream sync_out;
BS::thread_pool pool;

void print_inverse(const double x)
{
    if (x == 0)
        throw std::runtime_error("Division by zero!");
    sync_out.println("The inverse of ", x, " is ", 1 / x, ".");
}

int main()
{
    constexpr double num = 0;
    std::future<void> my_future = pool.submit_task(
        [num]
        {
            print_inverse(num);
        });
    try
    {
        my_future.get();
    }
    catch (const std::exception& e)
    {
        sync_out.println("Caught exception: ", e.what());
    }
}

When using BS::multi_future<T> to handle multiple futures at once, exception handling works the same way: if any of the tasks throw exceptions, you can catch them when calling get(), even in the case of BS::multi_future<void>.

Note that if you use detach_task(), or any other detach member function, there is no way to catch exceptions thrown by the task, as a future will not be returned. In such cases, all exceptions thrown by the task will be silently ignored, to prevent program termination. If you need to catch exceptions in a detached task, you must do so within the task itself, as in this example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <exception>          // std::exception
#include <stdexcept>          // std::runtime_error

BS::synced_stream sync_out;
BS::thread_pool pool;

double inverse(const double x)
{
    if (x == 0)
        throw std::runtime_error("Division by zero!");
    return 1 / x;
}

int main()
{
    constexpr double num = 0;
    pool.detach_task(
        [num]
        {
            try
            {
                const double result = inverse(num);
                sync_out.println("The inverse of ", num, " is ", result, ".");
            }
            catch (const std::exception& e)
            {
                sync_out.println("Caught exception: ", e.what());
            }
        });
    pool.wait();
}

If exceptions are explicitly disabled in your codebase, or if the feature-test macro __cpp_exceptions is undefined for any other reason, exception handling will be automatically disabled in the thread pool.

Getting information about the current thread

The class BS::this_thread provides functionality analogous to std::this_thread, that is, it allows a thread to reference itself. It contains the following static member functions:

- get_index() returns an std::optional<std::size_t> containing the index of the current thread within its pool, or std::nullopt if the thread does not belong to any pool.
- get_pool() returns an std::optional<void*> containing a pointer to the thread pool that owns the current thread, or std::nullopt if the thread does not belong to any pool.

An std::optional is an object that may or may not have a value. std::nullopt is a placeholder which indicates that the object does not have a value. To access an std::optional, you should first use std::optional::has_value() to check if it contains a value, and if so, use std::optional::value() to obtain that value. A shortcut for if (x.has_value()) is if (x), and a shortcut for x.value() is *x.

The reason that BS::this_thread::get_pool() returns a void* is that BS::thread_pool is a template. Once you obtain the pool pointer, you must cast it to the desired instantiation of the template if you want to use any member functions. Note that you have to cast it to the correct type; if you cast a pointer to a BS::light_thread_pool into a pointer to a BS::priority_thread_pool, for example, your program will have undefined behavior. (Please see the optional features section for more information about the template parameters and aliases.)

Here is an example illustrating all of the above:

#include "BS_thread_pool.hpp" // BS::light_thread_pool, BS::synced_stream, BS::this_thread
#include <atomic>             // std::atomic
#include <cstddef>            // std::size_t
#include <optional>           // std::optional
#include <thread>             // std::thread

BS::synced_stream sync_out;
BS::light_thread_pool p1;
BS::light_thread_pool p2;
std::atomic<char> ltr = 'A';

void check_this_thread(const char letter)
{
    const std::optional<void*> my_pool = BS::this_thread::get_pool();
    const std::optional<std::size_t> my_index = BS::this_thread::get_index();

    if (my_pool && my_index)
    {
        const std::size_t pool_number = *my_pool == &p1 ? 1 : 2;
        sync_out.println("Task ", letter, " is being executed by thread #", *my_index, " of pool #", pool_number, '.');
        static_cast<BS::light_thread_pool*>(*my_pool)->detach_task(
            [letter]
            {
                sync_out.println("-> Task ", ltr++, " was submitted by task ", letter, " using detach_task().");
            });
    }
    else
    {
        sync_out.println("Task ", letter, " is being executed by an independent thread, not in any thread pools.");
        std::thread(
            [letter]
            {
                sync_out.println("-> Task ", ltr++, " was submitted by task ", letter, " using a detached std::thread.");
            })
            .detach();
    }
}

int main()
{
    p1.submit_task(
          []
          {
              check_this_thread(ltr++);
          })
        .wait();
    p2.submit_task(
          []
          {
              check_this_thread(ltr++);
          })
        .wait();
    std::thread(
        []
        {
            check_this_thread(ltr++);
        })
        .join();
}

The output of this program will be similar to:

Task A is being executed by thread #3 of pool #1.
-> Task B was submitted by task A using detach_task().
Task C is being executed by thread #7 of pool #2.
-> Task D was submitted by task C using detach_task().
Task E is being executed by an independent thread, not in any thread pools.
-> Task F was submitted by task E using a detached std::thread.

In this example, we execute the task check_this_thread() in three different ways:

  1. By submitting it from the thread pool p1.
  2. By submitting it from the thread pool p2.
  3. By submitting it from an independent std::thread.

The task calls BS::this_thread::get_pool() and BS::this_thread::get_index() and receives two std::optional objects, my_pool and my_index. If both have a value (that is, evaluate to true), then the task knows it is running in a thread pool. The actual values are then obtained by "dereferencing" them: the pool pointer is *my_pool, and the thread index is *my_index.

The task deduces which pool it is running in by comparing the pointer *my_pool to the addresses of the pools p1 and p2. It also gets the index of the thread from *my_index. Finally, it detaches an additional task (without waiting for it, as that might cause a deadlock!) from its own pool by first casting the void* pointer to the correct type, which in this case is BS::light_thread_pool*, and then calling the detach_task() member function of that specific pool.

If my_pool and my_index do not have values (that is, evaluate to false), then the task knows it is running in an independent thread. In this case, it detaches the additional task using another independent thread.

Thread initialization functions

Sometimes, it is necessary to initialize the threads before they run any tasks. This can be done by passing an initialization function to the BS::thread_pool constructor or to reset(), either as the only argument or as the second argument after the desired number of threads.

The thread initialization function must have no return value. It can either take one argument, the thread index of type std::size_t, or zero arguments. In the latter case, the function can use BS::this_thread::get_index() to find the thread index. In addition, the function can use BS::this_thread::get_pool() to find which pool its thread belongs to.

The initialization functions are effectively submitted as a set of special tasks, one per thread, which bypass the queue, but still count towards the number of running tasks. This means get_tasks_total() and get_tasks_running() will report that these tasks are running if they are checked immediately after the pool is initialized.

This is done so that the user has the option to either wait for the initialization functions to finish, by calling wait() on the pool, or just keep going. In either case, the initialization functions will always finish running before any tasks are executed by the corresponding thread, so there is no reason to wait for them to finish unless they have some side-effects that affect the main thread, or if they must finish running on all the threads before the pool starts executing any tasks.

Here is a simple example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <random>             // std::mt19937_64, std::random_device

BS::synced_stream sync_out;
thread_local std::mt19937_64 twister;

int main()
{
    BS::thread_pool pool(
        []
        {
            twister.seed(std::random_device()());
        });
    pool.submit_sequence(0, 4,
            [](int)
            {
                sync_out.println("I generated a random number: ", twister());
            })
        .wait();
}

In this example, we create a thread_local Mersenne twister engine, meaning that each thread has its own independent engine. However, if we do not seed the engine, each thread would generate the exact same sequence of pseudo-random numbers. To remedy this, we pass an initialization function to the BS::thread_pool constructor which seeds the twister in each thread with the (hopefully) non-deterministic random number generator std::random_device.

Note that the lambda function we passed to submit_sequence() has the signature [](int), with an unnamed int argument, as it does not make use of the sequence index, which will be a number in the range [0, 4). This is an easy way to simply submit the same task multiple times.

Warning: Thread initialization functions must not throw any exceptions, as that will result in program termination. Any exceptions must be handled explicitly within the function.

Thread cleanup functions

Similarly to the thread initialization function, it is also possible to provide the pool with a cleanup function to run in each thread right before it is destroyed, which will happen when the pool is destructed or reset. Like the initialization function, the cleanup function must have no return value, and can either take one argument, the thread index of type std::size_t, or zero arguments. Each pool can have its own cleanup function, which is specified using the member function set_cleanup_func(). Here is an example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::this_thread, BS::thread_pool
#include <chrono>             // std::chrono
#include <cstddef>            // std::size_t
#include <fstream>            // std::ofstream
#include <string>             // std::to_string
#include <thread>             // std::this_thread

thread_local std::ofstream log_file;
thread_local BS::synced_stream sync_out(log_file);
constexpr std::size_t threads = 4;

int main()
{
    BS::thread_pool pool(threads,
        [](const std::size_t idx)
        {
            log_file.open("thread_" + std::to_string(idx) + ".log");
        });
    pool.set_cleanup_func(
        []
        {
            log_file.close();
        });
    pool.submit_sequence(0, threads * 10,
            [](const std::size_t idx)
            {
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
                sync_out.println("Task ", idx, " is running on thread ", *BS::this_thread::get_index(), '.');
            })
        .wait();
}

In this example, we create 4 threads, each of which has a separate thread-local BS::synced_stream object writing to its own log file of the form thread_N.log where N is the thread index. The initialization function, passed as an argument to the constructor, opens the log file. The cleanup function, set using set_cleanup_func(), closes the log file.

We submit 40 tasks to the queue using submit_sequence(), each of which prints a message to the log file indicating which thread it is running on. When the main() function exits and pool is destroyed, the cleanup function is called for each thread, ensuring that the log files are closed properly.

Warning: As with initialization functions, thread cleanup functions must not throw any exceptions, as that will result in program termination. Any exceptions must be handled explicitly within the function.

Passing task arguments by constant reference

In C++, it is often crucial to pass function arguments by reference or constant reference, instead of by value. This allows the function to access the object being passed directly, rather than creating a new copy of the object. We have already seen above that submitting an argument by reference is a simple matter of capturing it with a & in the lambda capture list. To submit by constant reference, we can use std::as_const() as in the following example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <utility>            // std::as_const

BS::synced_stream sync_out;

void increment(int& x)
{
    ++x;
}

void print(const int& x)
{
    sync_out.println(x);
}

int main()
{
    BS::thread_pool pool;
    int n = 0;
    pool.submit_task(
            [&n]
            {
                increment(n);
            })
        .wait();
    pool.submit_task(
            [&n = std::as_const(n)]
            {
                print(n);
            })
        .wait();
}

The increment() function takes a reference to an integer, and increments that integer. Passing the argument by reference guarantees that n itself, in the scope of main(), will be incremented, rather than a copy of it in the scope of increment().

Similarly, the print() function takes a constant reference to an integer, and prints that integer. Passing the argument by constant reference guarantees that the variable will not be accidentally modified by the function, even though we are accessing n itself, rather than a copy. If we replace print() with increment(), the program won't compile, as increment() cannot take constant references.

Generally, it is not really necessary to pass arguments by constant reference, but it is more "correct" to do so, if we would like to guarantee that the variable being referenced is indeed never modified.

Optional features

Enabling features

The thread pool has some optional features, which are disabled by default to minimize overhead. They can be enabled by passing the appropriate template parameter to the BS::thread_pool class when creating the pool. The template parameter is a bitmask, so you can enable several features at once by combining them with the bitwise OR operator |. The bitmask flags are members of the BS::tp enumeration:

For example, to enable both task priority and pausing the pool, the thread pool object should be created like this:

BS::thread_pool<BS::tp::priority | BS::tp::pause> pool;

Convenience aliases are defined as follows:

There are no aliases with multiple features enabled; if this is desired, you must either pass the template parameter explicitly or define your own alias, and use the bitwise OR operator as shown above.

Note that, since optional features are enabled separately for each BS::thread_pool object, you can have multiple pools with different features enabled in the same program. For example, you can have one BS::light_thread_pool for tasks that do not need to be prioritized, and a separate BS::priority_thread_pool for tasks that do.

Setting task priority

Turning on the BS::tp::priority flag in the template parameter to BS::thread_pool enables task priority. In addition, the library defines the convenience alias BS::priority_thread_pool, which is equivalent to BS::thread_pool<BS::tp::priority>. When this feature is enabled, the static member priority_enabled will be set to true.

The priority of a task or group of tasks may then be specified as an additional argument (at the end of the argument list) to detach_task(), submit_task(), detach_blocks(), submit_blocks(), detach_loop(), submit_loop(), detach_sequence(), and submit_sequence(). If the priority is not specified, the default value will be 0.

The priority is a number of type BS::priority_t, which is a signed 8-bit integer, so it can have any value between -128 and +127. The tasks will be executed in priority order from highest to lowest. If priority is assigned to the block/loop/sequence parallelization functions, which submit multiple tasks, then all of these tasks will have the same priority.

The enumeration BS::pr contains some pre-defined priorities for users who wish to avoid magic numbers and enjoy better future-proofing. In order of decreasing priority, the pre-defined priorities are: BS::pr::highest, BS::pr::high, BS::pr::normal, BS::pr::low, and BS::pr::lowest.

Here is a simple example:

#include "BS_thread_pool.hpp" // BS::priority_thread_pool, BS::synced_stream

BS::synced_stream sync_out;
BS::priority_thread_pool pool(1);

int main()
{
    pool.detach_task(
        []
        {
            sync_out.println("This task will execute third.");
        },
        BS::pr::normal);
    pool.detach_task(
        []
        {
            sync_out.println("This task will execute fifth.");
        },
        BS::pr::lowest);
    pool.detach_task(
        []
        {
            sync_out.println("This task will execute second.");
        },
        BS::pr::high);
    pool.detach_task(
        []
        {
            sync_out.println("This task will execute first.");
        },
        BS::pr::highest);
    pool.detach_task(
        []
        {
            sync_out.println("This task will execute fourth.");
        },
        BS::pr::low);
}

This program will print out the tasks in the correct priority order. Note that for simplicity, we used a pool with just one thread, so the tasks will run one at a time. In a pool with 5 or more threads, all 5 tasks will actually run more or less at the same time, because, for example, the task with the second-highest priority will be picked up by another thread while the task with the highest priority is still running.

Of course, this is just a pedagogical example. In a realistic use case we may want, for example, to submit tasks that must be completed immediately with high priority so they skip over other tasks already in the queue, or background non-urgent tasks with low priority so they execute only after higher-priority tasks are done.

Task priority is facilitated using std::priority_queue, which has O(log n) complexity both for storing new tasks and for removing the next (i.e. highest-priority) task, although merely accessing that task is O(1). This is in contrast with std::queue, used if priority is disabled, which both stores and retrieves tasks with O(1) complexity.

Due to this, enabling the priority queue can incur a very slight decrease in performance, depending on the specific use case, which is why this feature is disabled by default. In other words, you gain functionality, but pay for it in performance. However, the difference in performance is never substantial, and compiler optimizations can often reduce it to a negligible amount.

Lastly, please note that when using the priority queue, tasks will not necessarily be executed in the same order they were submitted, even if they all have the same priority. This is due to the implementation of std::priority_queue as a binary heap, which means tasks are stored as a binary tree instead of sequentially.

Pausing the pool

Turning on the BS::tp::pause flag in the template parameter to BS::thread_pool enables pausing the pool. In addition, the library defines the convenience alias BS::pause_thread_pool, which is equivalent to BS::thread_pool<BS::tp::pause>. When this feature is enabled, the static member pause_enabled will be set to true.

This feature enables the member functions pause(), unpause(), and is_paused(). When you call pause(), the workers will temporarily stop retrieving new tasks out of the queue. However, any tasks already being executed will keep running until they are done, since the thread pool has no control over the internal code of your tasks. If you need to pause a task in the middle of its execution, you must do that manually by programming your own pause mechanism into the task itself. To resume retrieving tasks, call unpause(). To check whether the pool is currently paused, call is_paused().

Here is an example:

#include "BS_thread_pool.hpp" // BS::pause_thread_pool, BS::synced_stream
#include <chrono>             // std::chrono
#include <thread>             // std::this_thread

BS::synced_stream sync_out;
BS::pause_thread_pool pool(4);

void sleep_half_second(const unsigned int i)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    sync_out.println("Task ", i, " done.");
}

void check_if_paused()
{
    if (pool.is_paused())
        sync_out.println("Pool paused.");
    else
        sync_out.println("Pool unpaused.");
}

int main()
{
    pool.detach_sequence(0, 8, sleep_half_second);
    sync_out.println("Submitted 8 tasks.");
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    pool.pause();
    check_if_paused();
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.detach_sequence(8, 12, sleep_half_second);
    sync_out.println("Submitted 4 more tasks.");
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.unpause();
    check_if_paused();
}

Assuming you have at least 4 hardware threads, the output should be similar to:

Submitted 8 tasks.
Pool paused.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
Still paused...
Submitted 4 more tasks.
Still paused...
Pool unpaused.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.

In this example, we initially submit a total of 8 tasks to the queue. The first 4 tasks start running immediately (only 4, since the pool has 4 threads). We wait for 250ms, and then pause. The tasks that are already running (for 500ms) will keep running until they finish; pausing has no effect on currently running tasks. However, the other 4 tasks will not be executed yet. While the pool is paused, we submit 4 more tasks to the queue, but they just wait at the end of the queue. When we unpause, the remaining 4 initial tasks are executed, followed by the 4 new tasks.

While the workers are paused, wait() will wait only for the running tasks instead of all tasks (otherwise it would wait forever). This is demonstrated by the following program:

#include "BS_thread_pool.hpp" // BS::pause_thread_pool, BS::synced_stream
#include <chrono>             // std::chrono
#include <thread>             // std::this_thread

BS::synced_stream sync_out;
BS::pause_thread_pool pool(4);

void sleep_half_second(const unsigned int i)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    sync_out.println("Task ", i, " done.");
}

void check_if_paused()
{
    if (pool.is_paused())
        sync_out.println("Pool paused.");
    else
        sync_out.println("Pool unpaused.");
}

int main()
{
    pool.detach_sequence(0, 8, sleep_half_second);
    sync_out.println("Submitted 8 tasks. Waiting for them to complete.");
    pool.wait();
    pool.detach_sequence(8, 20, sleep_half_second);
    sync_out.println("Submitted 12 more tasks.");
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    pool.pause();
    check_if_paused();
    sync_out.println("Waiting for the ", pool.get_tasks_running(), " running tasks to complete.");
    pool.wait();
    sync_out.println("All running tasks completed. ", pool.get_tasks_queued(), " tasks still queued.");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    sync_out.println("Still paused...");
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    pool.unpause();
    check_if_paused();
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    sync_out.println("Waiting for the remaining ", pool.get_tasks_total(), " tasks (", pool.get_tasks_running(), " running and ", pool.get_tasks_queued(), " queued) to complete.");
    pool.wait();
    sync_out.println("All tasks completed.");
}

The output should be similar to:

Submitted 8 tasks. Waiting for them to complete.
Task 0 done.
Task 1 done.
Task 2 done.
Task 3 done.
Task 4 done.
Task 5 done.
Task 6 done.
Task 7 done.
Submitted 12 more tasks.
Pool paused.
Waiting for the 4 running tasks to complete.
Task 8 done.
Task 9 done.
Task 10 done.
Task 11 done.
All running tasks completed. 8 tasks still queued.
Still paused...
Still paused...
Pool unpaused.
Waiting for the remaining 8 tasks (4 running and 4 queued) to complete.
Task 12 done.
Task 13 done.
Task 14 done.
Task 15 done.
Task 16 done.
Task 17 done.
Task 18 done.
Task 19 done.
All tasks completed.

The first wait(), which was called while the pool was not paused, waited for all 8 tasks, both running and queued. The second wait(), which was called after pausing the pool, only waited for the 4 running tasks, while the other 8 tasks remained queued, and were not executed since the pool was paused. Finally, the third wait(), which was called after unpausing the pool, waited for the remaining 8 tasks, both running and queued.

Note that pausing the pool adds additional checks to the waiting and worker functions, which have a very small but non-zero overhead. This is why this feature is disabled by default.

Warning: If the thread pool is destroyed while paused, any tasks still in the queue will never be executed!

Avoiding wait deadlocks

Turning on the BS::tp::wait_deadlock_checks flag in the template parameter to BS::thread_pool enables wait deadlock checks. In addition, the library defines the convenience alias BS::wdc_thread_pool, which is equivalent to BS::thread_pool<BS::tp::wait_deadlock_checks>. When this feature is enabled, the static member wait_deadlock_checks_enabled will be set to true.

To understand why this feature is useful, consider the following program:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    pool.detach_task(
        []
        {
            pool.wait();
            sync_out.println("Done waiting.");
        });
}

This program creates a thread pool, and then detaches a task that waits for tasks in the same thread pool to complete. If you run this program, it will never print the message "Done waiting", because the task will wait for itself to complete. This causes a deadlock, and the program will wait forever.

Usually, in simple programs, this will never happen. However, in more complicated programs, perhaps ones running multiple thread pools in parallel, wait deadlocks could potentially occur. In such cases, wait deadlock checks may be useful. If enabled, wait(), wait_for(), and wait_until() will check whether the user tried to call them from within a thread of the same pool, and if so, they will throw the exception BS::wait_deadlock instead of waiting.

Here is an example:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::wdc_thread_pool

BS::synced_stream sync_out;
BS::wdc_thread_pool pool;

int main()
{
    pool.detach_task(
        []
        {
            try
            {
                pool.wait();
                sync_out.println("Done waiting.");
            }
            catch (const BS::wait_deadlock&)
            {
                sync_out.println("Error: Deadlock!");
            }
        });
}

This time, wait() will detect the deadlock, and will throw an exception, causing the output to be "Error: Deadlock!".

Wait deadlock checks are disabled by default because wait deadlocks are not something that happens often, and the check adds a small but non-zero overhead every time wait(), wait_for(), or wait_until() is called. Note that if the feature-test macro __cpp_exceptions is undefined, wait deadlock checks will be automatically disabled, and trying to compile a program which creates a pool with the BS::tp::wait_deadlock_checks flag enabled will result in a compilation error.

Native extensions

Enabling the native extensions

While portability is one of the guiding principles of this library, non-portable features such as setting the thread priority using the operating system's native API can be very useful. Therefore, the library includes native extensions, which are disabled by default, as they are not portable.

The native extensions may be enabled by defining the macro BS_THREAD_POOL_NATIVE_EXTENSIONS at compilation time. If including the library as a header file, the macro must be defined before #include "BS_thread_pool.hpp". Note that even if the macro is defined, the native extensions are disabled automatically if a supported operating system (Windows, Linux, or macOS) is not detected.

If importing the library as a C++20 module, defining the macro before importing the module will not work, as modules cannot access macros defined in the program that imported them. Instead, you must define the macro as a compiler flag: -D BS_THREAD_POOL_NATIVE_EXTENSIONS for Clang and GCC or /D BS_THREAD_POOL_NATIVE_EXTENSIONS for MSVC.

The test program only tests the native extensions if the macro BS_THREAD_POOL_NATIVE_EXTENSIONS is defined at compilation time. If importing the library as a C++20 module, please ensure that the macro is also enabled when compiling the module.

The constexpr flag BS::thread_pool_native_extensions indicates whether the thread pool library was compiled with native extensions enabled. Note that the flag will be false if BS_THREAD_POOL_NATIVE_EXTENSIONS is defined but the operating system is unsupported.

Warning: Please note that, as of v5.0.0 of the thread pool library, the native extensions have only been tested on Windows 11 23H2, Ubuntu 24.10, and macOS 15.1. They have not been tested on older versions of these operating systems, other Linux distributions, or any other operating systems, and are therefore not guaranteed to work on every system. If you encounter any issues, please report them on the GitHub repository.

Setting thread priority

The thread pool's native extensions provide the ability to set a thread's priority using the operating system's native API. Please note that this is not the same as setting a task's priority, which is a feature of the thread pool's queue, unrelated to the pool's threads themselves. Task priority controls which tasks are executed first, while thread priority (roughly) controls how much CPU time a thread gets compared to other threads. In addition, you can use the native extensions to set the priority of any thread (such as a thread created using std::thread), not just a pool thread.

For performance-critical applications, you may wish to increase the thread priority, while for applications that should run in the background, you may wish to decrease it. As priority is handled very differently on different operating systems, the thread pool library provides an abstraction layer over the native APIs, in the form of the enumeration class BS::os_thread_priority, which has the following 7 members:

On Windows, these pre-defined priorities map 1-to-1 with the thread priority values defined by the Windows API (with realtime mapping to time critical priority). On Linux and macOS, thread priorities are a lot more complicated, so these pre-defined priorities are mapped to the parameters available in the native API.

On Linux (with POSIX threads), thread priority is determined by three factors: scheduling policy, priority value, and "nice" value. The thread pool library's abstraction layer distills these factors into the above pre-defined levels, for simplicity and portability. The total number of possible combinations of parameters is much larger, but allowing more fine-grained control would not be portable, and in any case it would have limited use. For the precise mapping, please refer to the source code itself (in the header file BS_thread_pool.hpp).

On macOS, the thread pool library will also use POSIX threads, but unlike Linux, the "nice" value is per-process, not per-thread (in compliance with the POSIX standard). However, macOS does allow more freedom with respect to the available range of priorities. Again, for the precise details of the mapping, please refer to the source code itself.

Most users do not need to worry about the specifics of how thread priority is handled on different operating systems. The abstraction layer provided by the thread pool library is meant to make everything as simple and portable as possible. However, it is important to note that only Windows allows a non-privileged user to set a thread's priority to a higher value. On Linux and macOS, a non-privileged user can only set a thread's priority to a lower value, and only root can set a higher value. Also, confusingly, if a user decreases the priority of their thread from normal to a lower value, they cannot increase it back to normal without root privileges, even though normal was the thread's initial priority.

Thread priority is managed using two static member functions of the BS::this_thread class:

Increasing or decreasing the priority of all the threads in a pool can be done most easily using an initialization function. Here is an example:

#define BS_THREAD_POOL_NATIVE_EXTENSIONS
#include "BS_thread_pool.hpp" // BS::os_thread_priority, BS::synced_stream, BS::this_thread, BS::thread_pool
#include <cstddef>            // std::size_t
#include <map>                // std::map
#include <optional>           // std::optional
#include <string>             // std::string

BS::synced_stream sync_out;
BS::os_thread_priority target = BS::os_thread_priority::highest;

const std::map<BS::os_thread_priority, std::string> os_thread_priority_map = {{BS::os_thread_priority::idle, "idle"}, {BS::os_thread_priority::lowest, "lowest"}, {BS::os_thread_priority::below_normal, "below_normal"}, {BS::os_thread_priority::normal, "normal"}, {BS::os_thread_priority::above_normal, "above_normal"}, {BS::os_thread_priority::highest, "highest"}, {BS::os_thread_priority::realtime, "realtime"}};

std::string os_thread_priority_name(const BS::os_thread_priority priority)
{
    const std::map<BS::os_thread_priority, std::string>::const_iterator it = os_thread_priority_map.find(priority);
    return (it != os_thread_priority_map.end()) ? it->second : "unknown";
}

void set_priority(const std::size_t idx)
{
    const std::optional<BS::os_thread_priority> get_result = BS::this_thread::get_os_thread_priority();
    if (get_result)
        sync_out.println("The OS thread priority of thread ", idx, " is currently set to '", os_thread_priority_name(*get_result), "'.");
    else
        sync_out.println("Error: Failed to get the OS thread priority of thread ", idx, '!');
    const bool set_result = BS::this_thread::set_os_thread_priority(target);
    sync_out.println(set_result ? "Successfully" : "Error: Failed to", " set the OS priority of thread ", idx, " to '", os_thread_priority_name(target), "'.");
}

int main()
{
    BS::thread_pool pool(4, set_priority);
}

On Linux or macOS, please ensure that you run this example as root using sudo, otherwise it will fail. In this example we used an initialization function set_priority() to first print the initial priority of each thread (which should be "normal") and then set the priority of each thread to "highest". os_thread_priority_name() is a helper function to convert a BS::os_thread_priority value to a human-readable string.

Setting thread affinity

The thread pool's native extensions allow the user to set a thread's processor affinity using the operating system's native API. Processor affinity, sometimes called "pinning", controls which logical processors a thread is allowed to run on. Generally, a non-hyperthreaded core corresponds to one logical processor, and a hyperthreaded core corresponds to two logical processors.

This can be useful for performance optimization, as it can reduce cache misses. However, it can also degrade performance, sometimes severely, since the thread will not run at all until its assigned cores are available. Therefore, it is usually better to let the operating system's scheduler manage thread affinities on its own, except in very specific cases.

Please note that setting thread affinity works on Windows and Linux, but not on macOS, as the native API does not allow it. As affinity is handled differently on different operating systems, the thread pool library provides an abstraction layer over the native APIs. In this abstraction layer, affinity is controlled using an std::vector<bool> where each element corresponds to a logical processor.

Thread affinity is managed using two static member functions of the BS::this_thread class:

- BS::this_thread::get_os_thread_affinity() obtains the affinity of the current thread, returned as an std::optional<std::vector<bool>>; an empty optional indicates that the affinity could not be determined.
- BS::this_thread::set_os_thread_affinity() sets the affinity of the current thread, returning true if successful or false otherwise.

Note that a thread's affinity must be a subset of the affinity of its containing process, as obtained using BS::get_os_process_affinity().

Setting thread affinity can significantly increase performance if multiple threads are accessing the same data, as the data can be kept in the local cache of the specific core that the threads are running on. This is illustrated in the following program:

#define BS_THREAD_POOL_NATIVE_EXTENSIONS
#include "BS_thread_pool.hpp" // BS::synced_stream, BS::this_thread
#include <atomic>             // std::atomic
#include <chrono>             // std::chrono
#include <cstdint>            // std::uint64_t
#include <thread>             // std::thread
#include <vector>             // std::vector

void do_test(const bool pin_threads)
{
    BS::synced_stream sync_out;
    constexpr std::uint64_t num_increments = 10'000'000;
    sync_out.println(pin_threads ? "With   " : "Without", " thread pinning:");
    std::atomic<std::uint64_t> counter = 0;
    auto worker = [&counter, pin_threads]
    {
        if (pin_threads)
        {
            std::vector<bool> affinity(std::thread::hardware_concurrency(), false);
            affinity[0] = true;
            BS::this_thread::set_os_thread_affinity(affinity);
        }
        for (std::uint64_t i = 0; i < num_increments; ++i)
            ++counter;
    };
    const std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    std::thread thread1(worker);
    std::thread thread2(worker);
    thread1.join();
    thread2.join();
    const std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    sync_out.println("Final count: ", counter, ", execution time: ", (std::chrono::duration_cast<std::chrono::milliseconds>(end - start)).count(), " ms.");
}

int main()
{
    do_test(false);
    do_test(true);
}

The output should be similar to:

Without thread pinning:
Final count: 20000000, execution time: 160 ms.
With thread pinning:
Final count: 20000000, execution time: 68 ms.

In this program, we create two threads, each of which increments an atomic counter 10 million times. First, we do this without thread pinning; in this case, since the OS will most likely run the threads on two different cores, the state of the atomic variable will need to be synchronized between the two cores, which will incur a performance penalty. Then, we do this with thread pinning, using BS::this_thread::set_os_thread_affinity() to set the affinity of each thread to core 0 by passing a vector with true at index 0 and false at all other indices. In this case, the atomic variable will be kept in the local cache of core 0, which will increase performance.

Warning: Setting the affinity of threads in a pool is almost never a good idea! When you submit a task to a thread pool, you have no control over which thread it will actually run in. The main benefit of thread affinity is to reduce cache misses, but there is no way to guarantee that tasks accessing the same data will run on the same core if they are submitted to a pool. In fact, setting the affinity of the pool threads will almost certainly decrease performance, sometimes substantially, as the operating system's scheduler will be prevented from assigning threads to cores in the most optimal way. The most common use case for BS::this_thread::set_os_thread_affinity() is to set the affinity of individual threads created independently of any pool, for example using std::thread.

Setting thread names

The thread pool's native extensions permit setting a thread's name using the operating system's native API. This can be useful for debugging, as the names of the threads will be visible in the debugger (for example, in the Call Stack on Visual Studio Code).

As with other features of the native extensions, the thread pool library provides an abstraction layer over the native APIs, consisting of the following two static member functions of the BS::this_thread class:

- BS::this_thread::get_os_thread_name() obtains the name of the current thread, returned as an std::optional<std::string>; an empty optional indicates that the name could not be determined.
- BS::this_thread::set_os_thread_name() sets the name of the current thread, returning true if successful or false otherwise.

This feature is illustrated by the following program:

#define BS_THREAD_POOL_NATIVE_EXTENSIONS
#include "BS_thread_pool.hpp" // BS::synced_stream, BS::this_thread, BS::thread_pool
#include <cstddef>            // std::size_t
#include <optional>           // std::optional
#include <string>             // std::string, std::to_string

BS::synced_stream sync_out;

void set_name(const std::size_t idx)
{
    const std::string name = "Thread " + std::to_string(idx);
    const bool result = BS::this_thread::set_os_thread_name(name);
    sync_out.println(result ? "Successfully" : "Error: Failed to", " set the name of thread ", idx, " to '", name, "'.");
}

void get_name()
{
    const std::optional<std::string> result = BS::this_thread::get_os_thread_name();
    if (result)
        sync_out.println("This thread's name is set to '", *result, "'.");
    else
        sync_out.println("Error: Failed to get this thread's name!");
}

int main()
{
    const bool result = BS::this_thread::set_os_thread_name("Main Thread");
    sync_out.println(result ? "Successfully" : "Error: Failed to", " set the name of the main thread.");
    BS::thread_pool pool(4, set_name);
    pool.wait();
    // Place a breakpoint here to see the thread names in the debugger.
    pool.submit_task(get_name).wait();
}

If you place a breakpoint on the indicated line, you will be able to see the names of the threads in the debugger. The main thread will be named "Main Thread", while the 4 pool threads will be named "Thread 0" to "Thread 3". In the last line, a random thread's name will be read and printed out.

Setting process priority

Although not directly related to multithreading, BS::thread_pool's native extensions also provide the ability to set the entire process's priority using the operating system's native API. As with thread priority, the thread pool library provides an abstraction layer over the native APIs, in the form of the enumeration class BS::os_process_priority, which has the following 6 members:

- idle
- below_normal
- normal
- above_normal
- high
- realtime

On Windows, these pre-defined priorities map 1-to-1 with the process priority classes defined by the Windows API. On Linux and macOS, process priorities are mapped to "nice" values, as given by the actual values of the enumeration members (note that lower numbers correspond to higher priorities).

Process priority is managed using two functions:

- BS::get_os_process_priority() obtains the priority of the current process, returned as an std::optional<BS::os_process_priority>; an empty optional indicates that the priority could not be determined.
- BS::set_os_process_priority() sets the priority of the current process, returning true if successful or false otherwise.

This is demonstrated by the following program:

#define BS_THREAD_POOL_NATIVE_EXTENSIONS
#include "BS_thread_pool.hpp" // BS::get_os_process_priority, BS::os_process_priority, BS::set_os_process_priority, BS::synced_stream
#include <map>                // std::map
#include <optional>           // std::optional
#include <string>             // std::string

BS::synced_stream sync_out;
BS::os_process_priority target = BS::os_process_priority::high;

const std::map<BS::os_process_priority, std::string> os_process_priority_map = {{BS::os_process_priority::idle, "idle"}, {BS::os_process_priority::below_normal, "below_normal"}, {BS::os_process_priority::normal, "normal"}, {BS::os_process_priority::above_normal, "above_normal"}, {BS::os_process_priority::high, "high"}, {BS::os_process_priority::realtime, "realtime"}};

std::string os_process_priority_name(const BS::os_process_priority priority)
{
    const std::map<BS::os_process_priority, std::string>::const_iterator it = os_process_priority_map.find(priority);
    return (it != os_process_priority_map.end()) ? it->second : "unknown";
}

int main()
{
    const std::optional<BS::os_process_priority> get_result = BS::get_os_process_priority();
    if (get_result)
        sync_out.println("The OS process priority is currently set to '", os_process_priority_name(*get_result), "'.");
    else
        sync_out.println("Error: Failed to get the OS process priority!");
    const bool set_result = BS::set_os_process_priority(target);
    sync_out.println(set_result ? "Successfully" : "Error: Failed to", " set the OS process priority to '", os_process_priority_name(target), "'.");
}

On Linux or macOS, please ensure that you run this example as root using sudo, otherwise it will fail. (Note that here we didn't actually need to use BS::synced_stream, since we are not using the thread pool, and only the main thread prints to the stream; we used it only for consistency with other examples.)

Setting process affinity

The thread pool's native extensions also allow the user to set the entire process's processor affinity using the operating system's native API. This works on Windows and Linux, but not on macOS, as the native API does not allow it. As with thread affinity, the thread pool library provides an abstraction layer over the native APIs, in the form of an std::vector<bool> where each element corresponds to a logical processor.

Process affinity is managed using two functions:

- BS::get_os_process_affinity() obtains the affinity of the current process, returned as an std::optional<std::vector<bool>>; an empty optional indicates that the affinity could not be determined.
- BS::set_os_process_affinity() sets the affinity of the current process, returning true if successful or false otherwise.

Accessing native thread handles

If the native extensions are enabled, the BS::thread_pool class gains the member function get_native_handles(), which returns a vector containing the underlying implementation-defined thread handles for each of the pool's threads. These can then be used in an implementation-specific way to manage the threads at the OS level.

Here is a quick example:

#define BS_THREAD_POOL_NATIVE_EXTENSIONS
#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool
#include <cstddef>            // std::size_t
#include <thread>             // std::thread
#include <vector>             // std::vector

BS::synced_stream sync_out;
BS::thread_pool pool(4);

int main()
{
    std::vector<std::thread::native_handle_type> handles = pool.get_native_handles();
    for (std::size_t i = 0; i < handles.size(); ++i)
        sync_out.println("Thread ", i, " native handle: ", handles[i]);
}

The output will depend on your compiler and operating system. Here is an example:

Thread 0 native handle: 00000000000000AC
Thread 1 native handle: 00000000000000B0
Thread 2 native handle: 00000000000000B4
Thread 3 native handle: 00000000000000B8

Warning: Please note that any code written using the native handles directly will not be portable. As detailed above, the thread pool's native extensions define abstraction layers for several commonly used thread operations, which are portable on supported platforms, and are therefore strongly preferred over non-portable operations. The native handles are made available for users who need to perform operations that are not covered by these abstraction layers.

Testing the library

Automated tests

The file BS_thread_pool_test.cpp in the tests folder of the GitHub repository will perform automated tests of all aspects of the library. In addition, the code is meant to serve as an extensive example of how to properly use the library.

The test program also takes the following command line arguments:

If no options are entered, the default is benchmarks log stdout tests. If the file default_args.txt exists in the same folder, the test program reads the default arguments from it (space separated in a single line). Command line arguments can still override these defaults. This is useful when debugging.

The following macros can be defined during compilation (using the -D flag in Clang and GCC or /D in MSVC) to enable additional features:

A Python script, test_all.py, is provided for convenience in the scripts folder. This script makes use of the bundled compile_cpp.py script, and requires Python 3.12 or later. The script will automatically detect if Clang, GCC, and/or MSVC are available, and compile and run the test program using each available compiler 3 times:

  1. With C++17 support.
  2. With C++20 support, using import BS.thread_pool.
  3. With C++23 support, using import BS.thread_pool, and using import std on supported compilers.

If any of the tests fail, please submit a bug report including the exact specifications of your system (OS, CPU, compiler, etc.) and the generated log file. However, please note that only the latest versions of each compiler are supported.

Performance tests

BS_thread_pool_test.cpp also performs benchmarks, using a highly optimized multithreaded algorithm which generates a plot of the Mandelbrot set, utilizing a normalized iteration count algorithm and linear interpolation to create smooth coloring. If tests are enabled, the benchmarks will only be performed if all of the tests pass.

These benchmarks are heavily CPU-intensive, which results in a high speedup factor due to multithreading, ideally utilizing every core and thread to their fullest extent. This makes them useful for optimizing the library, since they are more sensitive to the thread pool's own performance than to other factors such as memory or cache.

The full benchmarks are enabled using the command line argument benchmarks, which is enabled by default. The command line argument plot can be used to just plot the Mandelbrot set once, either instead of or in addition to doing the full benchmarks. This will plot the largest possible image that can be plotted in 5 seconds, and only measure the performance in pixels/ms for the entire plot.

If you want to see the actual plot, pass the save command line argument. The plot is saved to a BMP file, to avoid having to depend on 3rd-party libraries. This is off by default, since that file can get quite large.

The program determines the optimal resolution of the Mandelbrot plot by testing how many pixels are needed to reach a certain target duration when parallelizing the loop using a number of tasks equal to the number of threads. This ensures that the benchmarks take approximately the same amount of time (per thread) on all systems, and are thus more consistent and portable.

Once the appropriate resolution has been determined, the program plots the Mandelbrot set. For more details about the algorithm used, please see the source code for BS_thread_pool_test.cpp. This operation is performed both single-threaded and multithreaded, with the multithreaded computation spread across multiple tasks submitted to the pool.

Multithreaded tests are performed with increasingly higher task counts, while keeping the number of threads in the pool equal to the hardware concurrency for optimal performance. Each test is repeated multiple times, with the run times averaged over all runs of the same test. The program keeps increasing the number of tasks by a factor of 2 until diminishing returns are encountered. The run times of the tests are compared, and the maximum speedup obtained compared to the single-threaded test is calculated.

If the native extensions are enabled, the program will try to increase the priority of both the process itself and all the threads in the pool to the highest possible value, to prevent other processes from interfering with the benchmarks. Therefore, to obtain the most reliable benchmarks, it is recommended to run the tests as a privileged user, especially on Linux or macOS where only root can increase the priority.

As an example, here are the results of the benchmarks running on a 24-core (8P+16E) / 32-thread Intel i9-13900K CPU. The tests were compiled using MSVC in C++23 mode, to obtain maximum performance using the latest C++23 features. Compiler optimizations were enabled using the /O2 flag. The benchmarks were run 5 times, and the result with the median speedup was as follows:

Generating a 3965x3965 plot of the Mandelbrot set...
Each test will be repeated 30 times to collect reliable statistics.
   1 task:  [..............................]  (single-threaded)
-> Mean:  510.5 ms, standard deviation:  0.5 ms, speed:  1026.5 pixels/ms.
   8 tasks: [..............................]
-> Mean:  149.1 ms, standard deviation:  0.6 ms, speed:  3514.7 pixels/ms.
  16 tasks: [..............................]
-> Mean:   85.4 ms, standard deviation:  2.5 ms, speed:  6133.9 pixels/ms.
  32 tasks: [..............................]
-> Mean:   48.3 ms, standard deviation:  1.8 ms, speed: 10849.7 pixels/ms.
  64 tasks: [..............................]
-> Mean:   29.1 ms, standard deviation:  1.0 ms, speed: 17987.7 pixels/ms.
 128 tasks: [..............................]
-> Mean:   23.6 ms, standard deviation:  0.7 ms, speed: 22173.8 pixels/ms.
 256 tasks: [..............................]
-> Mean:   22.5 ms, standard deviation:  0.6 ms, speed: 23325.3 pixels/ms.
 512 tasks: [..............................]
-> Mean:   21.8 ms, standard deviation:  0.5 ms, speed: 24075.4 pixels/ms.
1024 tasks: [..............................]
-> Mean:   21.9 ms, standard deviation:  0.7 ms, speed: 23892.4 pixels/ms.
Maximum speedup obtained by multithreading vs. single-threading: 23.5x, using 512 tasks.

This CPU has 24 cores, of which 8 are fast (5.40 GHz max) performance cores with hyperthreading (thus providing 16 threads in total), and 16 are slower (4.30 GHz max) efficiency cores without hyperthreading, for a total of 32 threads.

Due to the hybrid architecture, it is not trivial to calculate the theoretical maximum speedup. However, we can get a rough estimate by noticing that the E-cores are about 20% slower than the P-cores, and that hyperthreading is generally known to provide around a 30% speedup. Thus, the estimated theoretical speedup (compared to a single P-core) is 8 × 1.3 + 16 × 0.8 = 23.2x.

The actual median speedup obtained, 23.5x, is slightly above this estimate, which indicates that the thread pool provides optimal performance and allows the Mandelbrot plot algorithm to take full advantage of the CPU's capabilities.

It should also be noted that even though the available number of hardware threads is 32, the maximum possible speedup is achieved not with 32 tasks, but with 512 tasks (half the square of the number of hardware threads). The reason for this is that splitting the job into more tasks than threads eliminates thread idle time, as explained above. However, at 1024 tasks we encounter diminishing returns, as the overhead of submitting the tasks to the pool starts to outweigh the benefits of parallelization.

Finding the version of the library

Starting with v5.0.0, the thread pool library defines the constexpr object BS::thread_pool_version, which can be used to check the version of the library at compilation time. This object is of type BS::version, with members major, minor, and patch, and all comparison operators defined as constexpr. It also has a to_string() member function and an operator<< overload for easy printing at runtime.

Since BS::thread_pool_version is a constexpr object, it can be used in any context where a constexpr object is allowed, such as static_assert() and if constexpr. For example, the following program will fail to compile if the version is not 5.1.0 or higher:

#include "BS_thread_pool.hpp"

static_assert(BS::thread_pool_version >= BS::version(5, 1, 0), "This program requires version 5.1.0 or later of the BS::thread_pool library.");

int main()
{
    // ...
}

As another example, the following program will print the version of the library (this will implicitly use the << operator of BS::version) and then conditionally compile one of two branches of code depending on the version of the library:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    sync_out.println("Detected BS::thread_pool v", BS::thread_pool_version, '.');
    if constexpr (BS::thread_pool_version <= BS::version(5, 1, 0))
    {
        // Do something supported by BS::thread_pool v5.1.0 or earlier.
    }
    else
    {
        // Do something supported by newer versions of BS::thread_pool after v5.1.0.
    }
}

Currently, both the examples above are of pedagogical value only, because BS::thread_pool_version was only introduced in v5.0.0, and that is also the latest version at the time of writing, so there are no other versions to compare to. However, once future versions of the library are released, this object will be the preferred way to do version checking.

For backwards compatibility, if you are not sure if you are going to get v4 or v5 of the library, you can check the version using the following preprocessor macros, which were introduced in v4.0.1:

- BS_THREAD_POOL_VERSION_MAJOR: the major version number.
- BS_THREAD_POOL_VERSION_MINOR: the minor version number.
- BS_THREAD_POOL_VERSION_PATCH: the patch version number.

These macros allow for conditional inclusion of code using #if directives. As an example, the member function set_cleanup_func() was introduced in v5.0.0. Therefore, if the major version number is 5 or higher, we can use this function; otherwise, we must find some other way to do the cleanup:

#include "BS_thread_pool.hpp" // BS::synced_stream, BS::thread_pool

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
#if BS_THREAD_POOL_VERSION_MAJOR >= 5
   pool.set_cleanup_func(
       []
       {
           sync_out.println("Doing cleanup...");
       });
#else
   // Do the cleanup in some other way.
#endif
}

However, please note that if the library is imported as a C++20 module, these macros will not be available, since macros cannot be exported from a module. In this case, you must use BS::thread_pool_version instead. (Indeed, this is exactly why it was introduced in the first place.)

Importing the library as a C++20 module

Compiling the module

If C++20 features are available, the library can be imported as a C++20 module using import BS.thread_pool. This is the officially recommended way to use the library, as it has many benefits, such as faster compilation times, better encapsulation, no namespace pollution, no include order issues, easier maintainability, simpler dependency management, and more. The constexpr flag BS::thread_pool_module indicates whether the thread pool library was compiled as a module. For more information on C++20 modules, please see cppreference.com.

The module file itself is BS.thread_pool.cppm, located in the modules folder, and it is just a thin wrapper around the header file BS_thread_pool.hpp. The C++20 standard does not provide a way for one file to be used as both a module and a header file, so both files are needed in order to compile the library as a module. (However, to use the library as a header file, only BS_thread_pool.hpp is needed.)

Note that the header file BS_thread_pool.hpp has an underscore _ following BS, for backwards compatibility with older versions of the library. However, the module file BS.thread_pool.cppm has a dot . following BS, to conform with the C++20 module naming convention, where dots represent hierarchy; all modules written by the author of this library will use the BS. prefix.

This feature has been tested with the latest versions of Clang, GCC, and MSVC. Unfortunately, at the time of writing, C++20 modules are still not fully implemented in all compilers, and each compiler implements them differently.

The easiest way to compile the module itself, as well as any programs that import it, is using the compile_cpp.py Python script provided in the GitHub repository, which will automatically figure out the appropriate flags for each compiler. Please see the next section for more information.

However, if you prefer to compile manually, the module must first be compiled into a binary file, in a format specific to each compiler, as described in the following sections. Once the module has been compiled, this binary file (plus an object file, in MSVC) is the only file needed to import the library; the .cppm and .hpp files are no longer needed. However, any program using the module must be compiled with a flag indicating to the compiler where to find that binary file.

Once the module is compiled, it can be imported using import BS.thread_pool. In all the examples above, you can simply replace #include "BS_thread_pool.hpp" with import BS.thread_pool; in order to import the library as a module. The only exception is the native extensions, which are enabled in the examples using a macro; as explained in that section, the macro must be defined as a compiler flag, as modules cannot access macros defined in the program that imported them.

Here is a quick example:

import BS.thread_pool;

BS::synced_stream sync_out;
BS::thread_pool pool;

int main()
{
    pool.submit_task(
            []
            {
                sync_out.println("Thread pool library successfully imported using C++20 modules!");
            })
        .wait();
}

Below we will provide the commands for compiling the library as a module and then compiling the test program BS_thread_pool_test.cpp using this module, with Clang, GCC, and MSVC, as well as with CMake. In the GitHub repository, the relevant files are organized as follows:

├── README.md                     <- this documentation file
├── include
│   └── BS_thread_pool.hpp        <- the header file
├── modules
│   └── BS.thread_pool.cppm       <- the module file
├── scripts
│   └── compile_cpp.py            <- the compile script (optional)
└── tests
    └── BS_thread_pool_test.cpp   <- the test program

In the following examples, it is assumed that the commands are executed in the root directory of the repository (the one that contains README.md). The compiled files will be placed in a build subdirectory, which should be created beforehand.

Compiling with compile_cpp.py using import BS.thread_pool

The bundled Python script compile_cpp.py can be used to easily compile any programs that import the library as a module. The script will automatically figure out the appropriate flags for each compiler, so you do not have to worry about the details. For example, to compile the test program BS_thread_pool_test.cpp and have it import the BS.thread_pool module, simply run the following command in the root folder of the repository:

python scripts/compile_cpp.py tests/BS_thread_pool_test.cpp -s=c++20 -i=include -t=release -m="BS.thread_pool=modules/BS.thread_pool.cppm,include/BS_thread_pool.hpp" -o=build/BS_thread_pool_test -d=BS_THREAD_POOL_TEST_IMPORT_MODULE -v

Please see below for an explanation of the command line arguments. The -d argument defines the macro BS_THREAD_POOL_TEST_IMPORT_MODULE, which is used to indicate to the test program that it needs to import the library as a module instead of including the header file. Note that this macro is only used by the test program; it is not needed when you compile your own programs. To enable the native extensions, you should also add -d=BS_THREAD_POOL_NATIVE_EXTENSIONS to define the required macro. To use C++23, replace -s=c++20 with -s=c++23.

Since we used -t=release, optimization flags will be added automatically. If you now type build/BS_thread_pool_test, the test program will run; you can also add the argument -r to run it automatically after compilation. If the module was successfully imported, the test program will print the message:

Thread pool library imported using: import BS.thread_pool (C++20 modules).

For further customization, it is recommended to create a compile_cpp.yaml file as explained below.

Compiling with Clang using import BS.thread_pool

Note: The following instructions have only been tested using Clang v19.1.6, the latest version at the time of writing, and may not work with older versions of the compiler.

To compile the module file BS.thread_pool.cppm with Clang, first create the build folder using mkdir build, and then run the following command in the root folder of the repository:

clang++ modules/BS.thread_pool.cppm --precompile -std=c++20 -I include -o build/BS.thread_pool.pcm

Here is a breakdown of the compiler arguments:

- --precompile: compile the module interface unit into a precompiled module (.pcm) file, rather than an object file.
- -std=c++20: enable C++20 support.
- -I include: add the include folder to the include path, so that the compiler can find the header file BS_thread_pool.hpp.
- -o build/BS.thread_pool.pcm: specify the output file for the precompiled module.

Note that to enable the native extensions, you should add -D BS_THREAD_POOL_NATIVE_EXTENSIONS to define the required macro.

Once the module is compiled, you can compile the test program as follows:

clang++ tests/BS_thread_pool_test.cpp -fmodule-file="BS.thread_pool=build/BS.thread_pool.pcm" -std=c++20 -o build/BS_thread_pool_test -D BS_THREAD_POOL_TEST_IMPORT_MODULE

Here is a breakdown of the compiler arguments:

- -fmodule-file="BS.thread_pool=build/BS.thread_pool.pcm": tell the compiler that the named module BS.thread_pool can be found in the precompiled module file build/BS.thread_pool.pcm.
- -std=c++20: enable C++20 support.
- -o build/BS_thread_pool_test: specify the output executable.
- -D BS_THREAD_POOL_TEST_IMPORT_MODULE: define the macro which tells the test program to import the library as a module instead of including the header file.

Again, you should add -D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to test the native extensions. You do not need to use the -I flag, since the header file is not needed, only the .pcm file. If you now type build/BS_thread_pool_test, the test program will run. If the module was successfully imported, the test program will print the message:

Thread pool library imported using: import BS.thread_pool (C++20 modules).

Of course, you should add warning, debugging, optimization, and other compiler flags to the commands above as needed. For more information about using C++20 modules with Clang, please see the official documentation.

Note: On macOS, Apple Clang v16.0.0 (the latest version at the time of writing) does not support C++20 modules. Please either install the latest version of LLVM Clang using Homebrew, or include the library as a header file.

Compiling with GCC using import BS.thread_pool

Note: The following instructions have only been tested using GCC v14.2.0, the latest version at the time of writing, and may not work with older versions of the compiler.

To compile the module file BS.thread_pool.cppm with GCC, first create the build folder using mkdir build, and then run the following command in the root folder of the repository:

g++ -x c++ modules/BS.thread_pool.cppm -c "-fmodule-mapper=|@g++-mapper-server -r build" -fmodule-only -fmodules-ts -std=c++20 -I include

Here is a breakdown of the compiler arguments:

- -x c++: compile the file as C++ source, since GCC does not recognize the .cppm extension.
- -c: compile without linking.
- "-fmodule-mapper=|@g++-mapper-server -r build": use the module mapper server to place the compiled module interface (.gcm file) in the build folder.
- -fmodule-only: only generate the compiled module interface, without producing an object file.
- -fmodules-ts: enable support for C++20 modules.
- -std=c++20: enable C++20 support.
- -I include: add the include folder to the include path, so that the compiler can find the header file BS_thread_pool.hpp.

Note that to enable the native extensions, you should add -D BS_THREAD_POOL_NATIVE_EXTENSIONS to define the required macro.

Once the module is compiled, you can compile the test program as follows:

g++ tests/BS_thread_pool_test.cpp "-fmodule-mapper=|@g++-mapper-server -r build" -fmodules-ts -std=c++20 -o build/BS_thread_pool_test -D BS_THREAD_POOL_TEST_IMPORT_MODULE

Here is a breakdown of the compiler arguments:

- "-fmodule-mapper=|@g++-mapper-server -r build": use the module mapper server to locate the compiled module interface (.gcm file) in the build folder.
- -fmodules-ts: enable support for C++20 modules.
- -std=c++20: enable C++20 support.
- -o build/BS_thread_pool_test: specify the output executable.
- -D BS_THREAD_POOL_TEST_IMPORT_MODULE: define the macro which tells the test program to import the library as a module instead of including the header file.

Again, you should add -D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to test the native extensions. You do not need to use the -I flag, since the header file is not needed, only the .gcm file. If you now type build/BS_thread_pool_test, the test program will run. If the module was successfully imported, the test program will print the message:

Thread pool library imported using: import BS.thread_pool (C++20 modules).

Of course, you should add warning, debugging, optimization, and other compiler flags to the commands above as needed. For more information about using C++20 modules with GCC, please see the official documentation.

Note: GCC v14.2.0 (latest version at the time of writing) appears to have an internal compiler error when compiling programs containing modules (or at least, this particular module) with any optimization flags other than -Og enabled. Until this is fixed, if you wish to use compiler optimizations, please either include the library as a header file or use a different compiler.

Compiling with MSVC using import BS.thread_pool

Note: The following instructions have only been tested using MSVC v19.42.34435, the latest version at the time of writing, and may not work with older versions of the compiler.

To compile the module file BS.thread_pool.cppm with MSVC, first open the Visual Studio Developer PowerShell for the appropriate CPU architecture. For example, for x64, execute the following command in PowerShell:

& 'C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\Launch-VsDevShell.ps1' -Arch amd64 -HostArch amd64

For ARM64, replace amd64 with arm64. (Do not use the "Developer PowerShell for VS 2022" Start Menu shortcut, as it may not use the correct CPU architecture by default.)

Navigate to the repository folder, create the build folder using mkdir build, and then run the following command in the root folder of the repository:

cl modules/BS.thread_pool.cppm /c /EHsc /interface /nologo /permissive- /std:c++20 /TP /Zc:__cplusplus /I include /ifcOutput build/BS.thread_pool.ifc /Fo:build/BS.thread_pool.obj

Here is a breakdown of the compiler arguments:

Note that to enable the native extensions, you should add /D BS_THREAD_POOL_NATIVE_EXTENSIONS to define the required macro.

Once the module is compiled, you can compile the test program as follows:

cl tests/BS_thread_pool_test.cpp build/BS.thread_pool.obj /reference BS.thread_pool=build/BS.thread_pool.ifc /EHsc /nologo /permissive- /std:c++20 /Zc:__cplusplus /Fo:build/BS_thread_pool_test.obj /Fe:build/BS_thread_pool_test.exe /D BS_THREAD_POOL_TEST_IMPORT_MODULE

Here is a breakdown of the compiler arguments:

Again, you should add /D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to test the native extensions. You do not need to use the /I flag, since the header file is not needed, only the .obj and .ifc files. If you now type build/BS_thread_pool_test, the test program will run. If the module was successfully imported, the test program will print the message:

Thread pool library imported using: import BS.thread_pool (C++20 modules).

Of course, you should add warning, debugging, optimization, and other compiler flags to the commands above as needed. For more information about using C++20 modules with MSVC, please see this blog post.

Compiling with CMake using import BS.thread_pool

Note: The following instructions have only been tested using CMake v3.31.2, the latest version at the time of writing, and may not work with older versions. Also, modules are currently only supported by CMake with the Ninja and Visual Studio 17 2022 generators.

If you are using CMake, you can use target_sources() with CXX_MODULES to include the module file BS.thread_pool.cppm. CMake will then automatically compile the module and link it to your program. Here is an example of a CMakeLists.txt file that can be used to build the test program and import the thread pool library as a module:

cmake_minimum_required(VERSION 3.31)
project(BS_thread_pool_test LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

if(MSVC)
    add_compile_options(/permissive- /Zc:__cplusplus)
endif()

add_library(BS_thread_pool)
target_sources(BS_thread_pool PRIVATE FILE_SET CXX_MODULES FILES modules/BS.thread_pool.cppm)
target_include_directories(BS_thread_pool PRIVATE include)

add_executable(${PROJECT_NAME} tests/BS_thread_pool_test.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE BS_thread_pool)
target_compile_definitions(${PROJECT_NAME} PRIVATE BS_THREAD_POOL_TEST_IMPORT_MODULE)

Note that for MSVC we must add the /permissive- flag to enforce strict C++ standard conformance (otherwise the test program will not compile), as well as /Zc:__cplusplus (otherwise the test program cannot detect the correct C++ version). This is handled automatically by the if(MSVC) block.

To enable the native extensions, add the line add_compile_definitions(BS_THREAD_POOL_NATIVE_EXTENSIONS). Replace CMAKE_CXX_STANDARD 20 with 23 if you wish to use C++23 features.

Place this file in the root folder of the repository, and then run the following commands:

cmake -B build
cmake --build build
build/BS_thread_pool_test

For MSVC, replace the last command with build/Debug/BS_thread_pool_test. If the module was successfully imported, the test program will print the message:

Thread pool library imported using: import BS.thread_pool (C++20 modules).

Of course, you should add warning, debugging, optimization, and other compiler flags to the configuration above as needed. For more information about using C++20 modules with CMake, please see the official documentation.

You can also instruct CMake to download the library automatically from the GitHub repository, as explained below, either using CPM or FetchContent.

Importing the C++23 Standard Library as a module

Enabling import std

If C++23 features are available, the thread pool library can import the C++ Standard Library as a module using import std. This has the same benefits described above for importing the library as a module, such as faster compilation times. To enable this feature, define the macro BS_THREAD_POOL_IMPORT_STD at compilation time.

At the time of writing, importing the C++ Standard Library as a module is only officially supported by the following combinations of compilers and standard libraries:

It is not supported by GCC, by Clang when used with any standard library other than LLVM libc++ (in particular, GNU libstdc++ does not currently provide a std module), or by any other compiler or standard library.

If BS_THREAD_POOL_IMPORT_STD is defined, you must also import the thread pool library itself as a module. If the library were instead included as a header file, the macro would force any program that includes the header to also import std, which is undesirable and can lead to compilation errors if that program #includes any Standard Library header files.

Defining the macro before importing the module will not work, as modules cannot access macros defined in the program that imported them. Instead, you must define the macro as a compiler flag: -D BS_THREAD_POOL_IMPORT_STD for Clang and GCC or /D BS_THREAD_POOL_IMPORT_STD for MSVC.

The test program will also import the std module if the macro BS_THREAD_POOL_IMPORT_STD is defined at compilation time. In that case, you should also enable the macro BS_THREAD_POOL_TEST_IMPORT_MODULE to import the thread pool library as a module.

The constexpr flag BS::thread_pool_import_std indicates whether the thread pool library was compiled with import std. Note that the flag will be false if BS_THREAD_POOL_IMPORT_STD is defined but the compiler or standard library does not support importing the C++ Standard Library as a module.

At the time of writing, importing the std module requires compiling it first. As explained in the previous section, using the bundled compile_cpp.py script is the easiest way to do this, as we show in the next section. However, for those who wish to compile manually, in the following sections we will explain how to do it with both Clang and MSVC, as well as with CMake. It is assumed that the reader has already read the section about importing the BS.thread_pool library as a module, so we omit some details here.

Compiling with compile_cpp.py using import std

The bundled Python script compile_cpp.py can be used to easily compile any programs that import the C++ Standard Library as a module. The script will automatically figure out the appropriate flags for each compiler, so you do not have to worry about the details. For example, to compile the test program BS_thread_pool_test.cpp and have it import both the BS.thread_pool module and the std module, simply run the following command in the root folder of the repository:

python scripts/compile_cpp.py tests/BS_thread_pool_test.cpp -s=c++23 -i=include -t=release -m="BS.thread_pool=modules/BS.thread_pool.cppm,include/BS_thread_pool.hpp" -o=build/BS_thread_pool_test -d=BS_THREAD_POOL_TEST_IMPORT_MODULE -d=BS_THREAD_POOL_IMPORT_STD -u=auto -v

Please see below for an explanation of the command line arguments. The differences between this command and the one we used for importing the thread pool library as a module are:

To enable the native extensions, you should also add -d=BS_THREAD_POOL_NATIVE_EXTENSIONS to define the required macro. If you now type build/BS_thread_pool_test, the test program will run. If the std module was successfully imported, the test program will print the message:

C++ Standard Library imported using:
* Thread pool library: import std (C++23 std module).
* Test program: import std (C++23 std module).

For further customization, it is recommended to create a compile_cpp.yaml file as explained below.

Compiling with Clang and LLVM libc++ using import std

Note: The following instructions have only been tested using Clang v19.1.6 and LLVM libc++ v19.1.6, the latest versions at the time of writing, and may not work with older versions.

Before compiling the std module, you must find the file std.cppm:

To compile the module file std.cppm with Clang, first create the build folder using mkdir build, and then run the following command in the root folder of the repository:

clang++ "path to std.cppm" --precompile -std=c++23 -o build/std.pcm -Wno-reserved-module-identifier

Of course, you should replace "path to std.cppm" with the actual path. The compiler arguments are explained above. The additional argument -Wno-reserved-module-identifier is needed to silence a false-positive warning.

Next, compile the BS.thread_pool module as above, but with the following additional flags:

clang++ modules/BS.thread_pool.cppm --precompile -fmodule-file="std=build/std.pcm" -std=c++23 -I include -o build/BS.thread_pool.pcm -D BS_THREAD_POOL_IMPORT_STD

Add -D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to enable the native extensions. Once the module is compiled, you can compile the test program as follows:

clang++ tests/BS_thread_pool_test.cpp -fmodule-file="std=build/std.pcm" -fmodule-file="BS.thread_pool=build/BS.thread_pool.pcm" -std=c++23 -o build/BS_thread_pool_test -D BS_THREAD_POOL_TEST_IMPORT_MODULE -D BS_THREAD_POOL_IMPORT_STD

Again, you should add -D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to test the native extensions. If you now type build/BS_thread_pool_test, the test program will run. If the std module was successfully imported, the test program will print the message:

C++ Standard Library imported using:
* Thread pool library: import std (C++23 std module).
* Test program: import std (C++23 std module).

Compiling with MSVC and Microsoft STL using import std

Note: The following instructions have only been tested using MSVC v19.42.34435 and Microsoft STL v143 (202408), the latest versions at the time of writing, and may not work with older versions.

Before compiling the std module, you must find the file std.ixx. It should be located in the folder C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\<MSVC runtime version>\modules. Replace <MSVC runtime version> with the full version number of the MSVC runtime library, e.g. 14.42.34433. If you installed Visual Studio in a different folder, locate std.ixx manually in that folder.

To compile the module file std.ixx with MSVC, first open the Visual Studio Developer PowerShell for the appropriate CPU architecture as explained above. Navigate to the repository folder, create the build folder using mkdir build, and then run the following command in the root folder of the repository:

cl "path to std.ixx" /c /EHsc /nologo /permissive- /std:c++latest /Zc:__cplusplus /ifcOutput build/std.ifc /Fo:build/std.obj

Of course, you should replace "path to std.ixx" with the actual path. The compiler arguments are explained above.

Next, compile the BS.thread_pool module as above, but with the following additional flags:

cl modules/BS.thread_pool.cppm /reference std=build/std.ifc /c /EHsc /interface /nologo /permissive- /std:c++latest /TP /Zc:__cplusplus /I include /ifcOutput build/BS.thread_pool.ifc /Fo:build/BS.thread_pool.obj /D BS_THREAD_POOL_IMPORT_STD

Add /D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to enable the native extensions. Once the module is compiled, you can compile the test program as follows (note that we added build/std.obj to link with the std module):

cl tests/BS_thread_pool_test.cpp build/std.obj build/BS.thread_pool.obj /reference std=build/std.ifc /reference BS.thread_pool=build/BS.thread_pool.ifc /EHsc /nologo /permissive- /std:c++latest /Zc:__cplusplus /Fo:build/BS_thread_pool_test.obj /Fe:build/BS_thread_pool_test.exe /D BS_THREAD_POOL_TEST_IMPORT_MODULE /D BS_THREAD_POOL_IMPORT_STD

Again, you should add /D BS_THREAD_POOL_NATIVE_EXTENSIONS if you wish to test the native extensions. If you now type build/BS_thread_pool_test, the test program will run. If the std module was successfully imported, the test program will print the message:

C++ Standard Library imported using:
* Thread pool library: import std (C++23 std module).
* Test program: import std (C++23 std module).

Compiling with CMake using import std

Note: The following instructions have only been tested using CMake v3.31.2, the latest version at the time of writing, and may not work with older versions. Also, modules are currently only supported by CMake with the Ninja and Visual Studio 17 2022 generators.

If you are using CMake, you can enable CMAKE_EXPERIMENTAL_CXX_IMPORT_STD to automatically compile the std module, provided the compiler and standard library support it. Here is an example of a CMakeLists.txt file that can be used to build the test program, import the thread pool library as a module, and import the C++ Standard Library as a module:

cmake_minimum_required(VERSION 3.31)
project(BS_thread_pool_test LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_EXPERIMENTAL_CXX_IMPORT_STD ON)

add_compile_definitions(BS_THREAD_POOL_IMPORT_STD)

if(MSVC)
    add_compile_options(/permissive- /Zc:__cplusplus)
endif()

add_library(BS_thread_pool)
target_sources(BS_thread_pool PRIVATE FILE_SET CXX_MODULES FILES modules/BS.thread_pool.cppm)
target_include_directories(BS_thread_pool PRIVATE include)

add_executable(${PROJECT_NAME} tests/BS_thread_pool_test.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE BS_thread_pool)
target_compile_definitions(${PROJECT_NAME} PRIVATE BS_THREAD_POOL_TEST_IMPORT_MODULE)

The if(MSVC) block is explained above. To enable the native extensions, add the macro BS_THREAD_POOL_NATIVE_EXTENSIONS to add_compile_definitions().

Place this file in the root folder of the repository, and then run the following commands:

cmake -B build
cmake --build build
build/BS_thread_pool_test

For MSVC, replace the last command with build/Debug/BS_thread_pool_test. If the std module was successfully imported, the test program will print the message:

C++ Standard Library imported using:
* Thread pool library: import std (C++23 std module).
* Test program: import std (C++23 std module).

You can also instruct CMake to download the library automatically from the GitHub repository, as explained below, either using CPM or FetchContent.

Installing the library using package managers

Installing using vcpkg

If you are using the vcpkg C/C++ package manager, you can easily install BS::thread_pool with the following command:

vcpkg install bshoshany-thread-pool

To update the package to the latest version, run:

vcpkg upgrade

Please refer to this package's page on vcpkg.io for more information.

Installing using Conan

If you are using the Conan C/C++ package manager, you can easily integrate BS::thread_pool into your project by adding the following lines to your conanfile.txt:

[requires]
bshoshany-thread-pool/5.0.0

To update the package to the latest version, simply change the version number. Please refer to this package's page on ConanCenter for more information.

Installing using Meson

If you are using the Meson build system, you can install BS::thread_pool from WrapDB. To do so, create a subprojects folder in your project (if it does not already exist) and run the following command:

meson wrap install bshoshany-thread-pool

Then, use dependency('bshoshany-thread-pool') in your meson.build file to include the package. To update the package to the latest version, run:

meson wrap update bshoshany-thread-pool

Installing using CMake with CPM

Note: The following instructions have only been tested using CMake v3.31.2 and CPM v0.40.2, the latest versions at the time of writing, and may not work with older versions.

If you are using CMake, you can install BS::thread_pool most easily with CPM. If CPM is already installed, simply add the following to your project's CMakeLists.txt:

CPMAddPackage(
    NAME BS_thread_pool
    GITHUB_REPOSITORY bshoshany/thread-pool
    VERSION 5.0.0
    EXCLUDE_FROM_ALL
    SYSTEM
)
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${BS_thread_pool_SOURCE_DIR}/include)

This will automatically download the indicated version of the package from the GitHub repository and include it in your project.

A convenient shorthand for GitHub packages also exists, in which case CPMAddPackage() can be called with a single argument of the form "gh:user/name@version". After that, CPM_LAST_PACKAGE_NAME will be set to the name of the package, so we need to use this variable to define the include folder. This results in a more compact configuration:

CPMAddPackage("gh:bshoshany/thread-pool@5.0.0")
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${${CPM_LAST_PACKAGE_NAME}_SOURCE_DIR}/include)

It is also possible to use CPM without installing it first, by adding the following lines to CMakeLists.txt before CPMAddPackage():

set(CPM_DOWNLOAD_LOCATION ${CMAKE_BINARY_DIR}/CPM.cmake)
if(NOT(EXISTS ${CPM_DOWNLOAD_LOCATION}))
    file(DOWNLOAD https://github.com/cpm-cmake/CPM.cmake/releases/latest/download/CPM.cmake ${CPM_DOWNLOAD_LOCATION})
endif()
include(${CPM_DOWNLOAD_LOCATION})

Here is an example of a complete CMakeLists.txt which automatically downloads and compiles the test program BS_thread_pool_test.cpp:

cmake_minimum_required(VERSION 3.31)
project(BS_thread_pool_test LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

if(MSVC)
    add_compile_options(/permissive- /Zc:__cplusplus)
endif()

set(CPM_DOWNLOAD_LOCATION ${CMAKE_BINARY_DIR}/CPM.cmake)
if(NOT(EXISTS ${CPM_DOWNLOAD_LOCATION}))
    file(DOWNLOAD https://github.com/cpm-cmake/CPM.cmake/releases/latest/download/CPM.cmake ${CPM_DOWNLOAD_LOCATION})
endif()
include(${CPM_DOWNLOAD_LOCATION})

CPMAddPackage("gh:bshoshany/thread-pool@5.0.0")
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${${CPM_LAST_PACKAGE_NAME}_SOURCE_DIR}/include)

add_executable(${PROJECT_NAME} ${${CPM_LAST_PACKAGE_NAME}_SOURCE_DIR}/tests/BS_thread_pool_test.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE BS_thread_pool)

Note that for MSVC we must add the /permissive- flag to enforce strict C++ standard conformance (otherwise the test program will not compile), as well as /Zc:__cplusplus (otherwise the test program cannot detect the correct C++ version). This is handled automatically by the if(MSVC) block.

To enable the native extensions, add the line add_compile_definitions(BS_THREAD_POOL_NATIVE_EXTENSIONS). Replace CMAKE_CXX_STANDARD 17 with 20 or 23 if you wish to use C++20 or C++23 features, respectively. Of course, you should add warning, debugging, optimization, and other compiler flags to the configuration above as needed.

With this CMakeLists.txt in an empty folder, type the following commands to build and run the project:

cmake -B build
cmake --build build
build/BS_thread_pool_test

For MSVC, replace the last command with build/Debug/BS_thread_pool_test. Please see here for instructions on how to import the library as a C++20 module with CMake, and here for instructions on how to import the C++ Standard Library as a module with CMake.

Installing using CMake with FetchContent

Note: The following instructions have only been tested using CMake v3.31.2, the latest version at the time of writing, and may not work with older versions.

If you are using CMake but do not wish to use 3rd-party tools, you can also install BS::thread_pool using the built-in FetchContent module. Here is an example of a complete CMakeLists.txt which automatically downloads and compiles the test program, as in the previous section, but this time using FetchContent directly:

cmake_minimum_required(VERSION 3.31)
project(BS_thread_pool_test LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

if(MSVC)
    add_compile_options(/permissive- /Zc:__cplusplus)
endif()

include(FetchContent)
set(FETCHCONTENT_UPDATES_DISCONNECTED ON)
FetchContent_Declare(
    bshoshany_thread_pool
    GIT_REPOSITORY https://github.com/bshoshany/thread-pool.git
    GIT_TAG v5.0.0
    DOWNLOAD_EXTRACT_TIMESTAMP TRUE
    EXCLUDE_FROM_ALL
    SYSTEM
)
FetchContent_MakeAvailable(bshoshany_thread_pool)
add_library(BS_thread_pool INTERFACE)
target_include_directories(BS_thread_pool INTERFACE ${bshoshany_thread_pool_SOURCE_DIR}/include)

add_executable(${PROJECT_NAME} ${bshoshany_thread_pool_SOURCE_DIR}/tests/BS_thread_pool_test.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE BS_thread_pool)

Complete library reference

This section provides a complete reference for all classes and functions available in this library, along with other important information. Functions are given with simplified prototypes (e.g. removing const) for ease of reading. Explanations are kept brief, as the purpose of this section is only to provide a quick reference; for more detailed information and usage examples, please refer to the full documentation above.

Descriptions of each item can also be found in the Doxygen comments embedded in the source code. Any modern IDE, such as Visual Studio Code, can use these Doxygen comments to provide automatic documentation for any class and member function in this library when hovering over code with the mouse or using auto-complete.

The BS::thread_pool class template

BS::thread_pool is the main thread pool class. It is used to create a pool of threads that continuously execute tasks submitted to a queue. It can take template parameters, which enable optional features as described below. The member functions that are available by default, when no template parameters are used, are:

Optional features and the template parameter

The thread pool has several optional features that must be explicitly enabled by passing a template parameter. The template parameter is a bitmask, so you can enable several features at once by combining them with the bitwise OR operator |. The bitmask flags are members of the BS::tp enumeration.

Convenience aliases are defined as follows:

The BS::this_thread class

The class BS::this_thread provides functionality analogous to std::this_thread. It contains the following static member functions:

If the native extensions are enabled, the class will contain additional static member functions. Please see the relevant section for more information.

The native extensions

The native extensions may be enabled by defining the macro BS_THREAD_POOL_NATIVE_EXTENSIONS at compilation time. If including the library as a header file, the macro must be defined before #include "BS_thread_pool.hpp". If importing the library as a C++20 module, the macro must be defined as a compiler flag. The native extensions use the operating system's native API, and are thus not portable; however, they should work on Windows, Linux, and macOS.

The native extensions add the following functions to the BS namespace:

The native extensions also add the following static member functions to BS::this_thread:

Finally, the native extensions add the following member function to BS::thread_pool:

The BS::multi_future class

BS::multi_future<T> is a helper class used to facilitate waiting for and/or getting the results of multiple futures at once. It is defined as a specialization of std::vector<std::future<T>>. This means that all of the member functions that can be used on an std::vector<std::future<T>> can also be used on a BS::multi_future<T>. For example, you may use a range-based for loop with a BS::multi_future<T>, since it has iterators.

In addition to inherited member functions, BS::multi_future<T> has the following specialized member functions (R, P, C, and D are template parameters):

The BS::synced_stream class

BS::synced_stream is a utility class which can be used to synchronize printing to one or more output streams by different threads. It has the following member functions (T is a template parameter pack):

In addition, the class comes with two stream manipulators, which are meant to help the compiler figure out which template specializations to use with the class:

The BS::version class

BS::version is a utility class used to represent a version number. It has public members major, minor, and patch, as well as the following member functions:

In addition, the library defines a constexpr object BS::thread_pool_version of type BS::version, which can be used to check the version of the library at compilation time.

Note that this feature is only available starting with v5.0.0 of the library; previous versions used the macros BS_THREAD_POOL_VERSION_MAJOR, BS_THREAD_POOL_VERSION_MINOR, and BS_THREAD_POOL_VERSION_PATCH, which are still defined for compatibility purposes, but are not accessible if the library is imported as a C++20 module.

Diagnostic variables

The library defines the following constexpr variables:

All names exported by the C++20 module

When the library is imported as a C++20 module using import BS.thread_pool, it exports the following names, in alphabetical order:

If the native extensions are enabled, the following names are also exported:

Development tools

The compile_cpp.py script

The Python script compile_cpp.py, in the scripts folder of the GitHub repository, can be used to compile any C++ source file with different compilers on different platforms. It requires Python 3.12 or later.

The script was written by the author of the library to make it easier to test the library with different combinations of compilers, standards, and platforms using the built-in Visual Studio Code tasks. However, note that this script is not meant to replace CMake or any other full-fledged build system; it is simply a convenient tool for developing single-header libraries such as this one, as well as other small projects.

The compile_cpp.py script also transparently handles C++20 modules and importing the C++ Standard Library as a module in C++23. Therefore, users of this library who wish to import it as a C++20 module may find this script particularly useful.

The compilation parameters can be configured using the command line arguments and/or via an optional YAML configuration file compile_cpp.yaml. The command line arguments are as follows:

The compile_cpp.yaml file includes the following fields:

Please see the compile_cpp.yaml file in the GitHub repository for an example of how to use it.

Other included tools

The scripts folder of the GitHub repository contains two other Python scripts that are used in the development of the library:

In addition, for Visual Studio Code users, the GitHub repository includes three .vscode folders:

Each folder contains appropriate c_cpp_properties.json, launch.json, and tasks.json files that utilize the included Python scripts. Users are welcome to use these files in their own projects, but they may require some modifications to work on specific systems.

About the project

Bug reports and feature requests

This library is under continuous and active development. If you encounter any bugs, or if you would like to request any additional features, please feel free to open a new issue on GitHub and I will look into it as soon as I can.

Contribution and pull request policy

Contributions are always welcome. However, I release my projects in cumulative updates after editing and testing them locally on my system, so my policy is to never accept any pull requests. If you open a pull request, and I decide to incorporate your suggestion into the project, I will first modify your code to comply with the project's coding conventions (formatting, syntax, naming, comments, programming practices, etc.), and perform some tests to ensure that the change doesn't break anything. I will then merge it into the next release of the project, possibly together with some other changes. The new release will also include a note in CHANGELOG.md with a link to your pull request, and modifications to the documentation in README.md as needed.

Starring the repository

If you found this project useful, please consider starring it on GitHub! This allows me to see how many people are using my code, and motivates me to keep working to improve it.

Acknowledgements

Many GitHub users have helped improve this project, directly or indirectly, via issues, pull requests, comments, and/or personal correspondence. Please see CHANGELOG.md for links to specific issues and pull requests that have been the most helpful. Thank you all for your contribution! 😊

Copyright and citing

Copyright (c) 2024 Barak Shoshany. Licensed under the MIT license.

If you use this library in software of any kind, please provide a link to the GitHub repository in the source code and documentation.

If you use this library in published research, please cite it as follows:

You can use the following BibTeX entry:

@article{Shoshany2024_ThreadPool,
    archiveprefix = {arXiv},
    author        = {Barak Shoshany},
    doi           = {10.1016/j.softx.2024.101687},
    eprint        = {2105.00613},
    journal       = {SoftwareX},
    pages         = {101687},
    title         = {{A C++17 Thread Pool for High-Performance Scientific Computing}},
    url           = {https://www.sciencedirect.com/science/article/pii/S235271102400058X},
    volume        = {26},
    year          = {2024}
}

Please note that the papers on SoftwareX and arXiv are not up to date with the latest version of the library. These publications are only intended to facilitate discovery of this library by scientists, and to enable citing it in scientific research. Documentation for the latest version is provided only by the README.md file in the GitHub repository.

About the author

My name is Barak Shoshany and I am a theoretical, mathematical, and computational physicist. I work as an Assistant Professor of Physics at Brock University in Ontario, Canada, and I am also a Sessional Lecturer at McMaster University. My research focuses on the nature of time and causality in general relativity and quantum mechanics, as well as symbolic and high-performance scientific computing. For more about me, please see my personal website.

Learning more about C++

Beginner C++ programmers may be interested in my lecture notes for a graduate-level course taught at McMaster University, which teach modern C and C++ from scratch, including some of the advanced techniques and programming practices used in developing this library. I have been teaching this course every year since 2020, and the notes are continuously updated and improved based on student feedback.

Other projects to check out

If you are a physicist or astronomer, you may be interested in my project OGRe: An Object-Oriented General Relativity Package for Mathematica, or its Python port OGRePy: An Object-Oriented General Relativity Package for Python.