LLVM: llvm::ThreadPoolInterface Class Reference

This defines the abstract base interface for a ThreadPool allowing asynchronous parallel execution on a defined number of threads. More...

#include "llvm/Support/ThreadPool.h"

Public Member Functions
virtual ~ThreadPoolInterface ()
Destroying the pool will drain the pending tasks and wait.
virtual void wait ()=0
Blocking wait for all the threads to complete and the queue to be empty.
virtual void wait (ThreadPoolTaskGroup &Group)=0
Blocking wait until all tasks in the given group have completed.
virtual unsigned getMaxConcurrency () const =0
Returns the maximum number of workers this pool can eventually grow to.
template<typename Function , typename... Args>
auto async (Function &&F, Args &&...ArgList)
Asynchronous submission of a task to the pool.
template<typename Function , typename... Args>
auto async (ThreadPoolTaskGroup &Group, Function &&F, Args &&...ArgList)
Overload, task will be in the given task group.
template<typename Func >
auto async (Func &&F) -> std::shared_future< decltype(F())>
Asynchronous submission of a task to the pool.
template<typename Func >
auto async (ThreadPoolTaskGroup &Group, Func &&F) -> std::shared_future< decltype(F())>

This defines the abstract base interface for a ThreadPool allowing asynchronous parallel execution on a defined number of threads.

It is possible to reuse one thread pool for different groups of tasks by grouping tasks using ThreadPoolTaskGroup. All tasks are processed using the same queue, but it is possible to wait only for a specific group of tasks to finish.

It is also possible for worker threads to submit new tasks and wait for them. Note that this may result in a deadlock in cases such as when a task (directly or indirectly) tries to wait for its own completion, or when all available threads are used up by tasks waiting for a task that has no thread left to run on (this includes waiting on the returned future). It should be generally safe to wait() for a group as long as groups do not form a cycle.

Definition at line 49 of file ThreadPool.h.

~ThreadPoolInterface()

ThreadPoolInterface::~ThreadPoolInterface ( ) virtual default

Destroying the pool will drain the pending tasks and wait.

The current thread may participate in the execution of the pending tasks.

async() [1/4]

template<typename Func >

auto llvm::ThreadPoolInterface::async ( Func && F ) -> std::shared_future<decltype(F())> inline

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

Definition at line 95 of file ThreadPool.h.

References F.

async() [2/4]

template<typename Function , typename... Args>

auto llvm::ThreadPoolInterface::async ( Function && F, Args &&... ArgList ) inline

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

async() [3/4]

template<typename Func >

auto llvm::ThreadPoolInterface::async ( ThreadPoolTaskGroup & Group, Func && F ) -> std::shared_future<decltype(F())> inline

async() [4/4]

template<typename Function , typename... Args>

auto llvm::ThreadPoolInterface::async ( ThreadPoolTaskGroup & Group, Function && F, Args &&... ArgList ) inline

Overload, task will be in the given task group.

Definition at line 86 of file ThreadPool.h.

References async(), and F.

getMaxConcurrency()

virtual unsigned llvm::ThreadPoolInterface::getMaxConcurrency ( ) const pure virtual

Returns the maximum number of workers this pool can eventually grow to.

wait() [1/2]

virtual void llvm::ThreadPoolInterface::wait ( ) pure virtual

Blocking wait for all the threads to complete and the queue to be empty.

wait() [2/2]

virtual void llvm::ThreadPoolInterface::wait ( ThreadPoolTaskGroup & Group ) pure virtual

Blocking wait until all tasks in the given group have completed.

It is possible to wait even inside a task, but waiting (directly or indirectly) on itself will deadlock. If called from a task running on a worker thread, the call may process pending tasks while waiting in order not to waste the thread.

Implemented in llvm::SingleThreadExecutor.


The documentation for this class was generated from the following files: