# P3309R3: atomic and atomic_ref
## Introduction and motivation
This paper proposes marking most of atomic<T>'s member functions and associated free functions constexpr, to allow atomic code to be used without changes in constexpr and consteval code.
The proposed changes will allow implementing other types (std::shared_ptr<T>, persistent data structures with atomic pointers) and algorithms (thread-safe data processing, such as scanning data with an atomic counter) by just sprinkling constexpr over their specifications.
## Changes
- R2 → R3: Added location of library feature test macro, removed atomic<shared_ptr> and atomic<weak_ptr>.
- R1 → R2: Added clarification for behaviour of wait and notify functions.
- R0 → R1: Made wait and notify functions constexpr as requested by SG1. Wording changed accordingly. Updated link to implementation on Compiler Explorer.
## Previous polls
SG1: Forward P3309 to LEWG with the following notes:
- Add constexpr to the wait and notify functions in the next revision of P3309
- atomic<shared_ptr> should be supported in constexpr whenever shared_ptr is supported in constexpr (whichever paper lands second should have this change)
- is_lock_free() should not be made constexpr
| SF | F | N | A | SA |
|---|---|---|---|---|
| 2 | 10 | 4 | 0 | 0 |
## Intention for wording changes
Mark all functions in [atomics] constexpr, excluding all volatile overloads, as all of these can be implemented in the constant expression evaluator or by using if consteval:
```
template<class T>
constexpr T atomic_fetch_add(atomic<T>* target, typename atomic<T>::difference_type diff) noexcept {
    if consteval {
        const auto previous = target->value;
        target->value += diff;
        return previous;
    } else {
        return __c11_atomic_fetch_add(&target->value, diff);
    }
}
```

Synchronization functions and helpers (std::kill_dependency, std::atomic_thread_fence) can be implemented as no-ops. Memory order parameters should simply be ignored, as constant-evaluated code doesn't have multiple threads.
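For example, a fence could be handled the same way; a minimal sketch, assuming a Clang-style builtin for the runtime branch:

```
constexpr void atomic_thread_fence(memory_order order) noexcept {
    if consteval {
        // no-op: constant evaluation is single-threaded, there is nothing to order
    } else {
        __c11_atomic_thread_fence(static_cast<int>(order));
    }
}
```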
An alternative implementation strategy is to allow the atomic builtins to work in the constant evaluator.
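Under that alternative, the library needs no if consteval branch at all; a sketch, assuming the builtin itself is permitted in constant expressions:

```
template<class T>
constexpr T atomic_fetch_add(atomic<T>* target, typename atomic<T>::difference_type diff) noexcept {
    // the constant evaluator interprets the builtin directly
    return __c11_atomic_fetch_add(&target->value, diff);
}
```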
## wait and notify behaviour
Wait and notify operations should work during constant evaluation as expected in a single-threaded environment. Notify will be a no-op, and waiting for the value to change will deadlock, which should result in constant evaluation failure according to [expr.const#5.7] (an expression that would exceed the implementation-defined limits), as every check of the value is considered continuous execution of execution steps ([intro.progress#4]) while waiting for the condition.
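A short sketch of the intended semantics, assuming this paper's constexpr atomic:

```
#include <atomic>

constexpr bool wait_and_notify() {
    std::atomic<int> a{1};
    a.notify_all();  // no-op: there is no other thread to notify
    a.wait(0);       // old value 0 differs from the stored 1, so wait() returns immediately
    return true;
}
static_assert(wait_and_notify());

// a.wait(1) would instead spin forever; during constant evaluation this
// exceeds the implementation-defined limits and the evaluation fails
```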
* Should we make is_lock_free functions also constexpr? No, keep them non-constexpr, as the result can differ depending on the running environment.
* Should we make atomic<shared_ptr<T>> and atomic<weak_ptr<T>> constexpr? (paper's wording contains this change) There is an associated paper P3037R1 making shared_ptr<T> constexpr.
## Example
This example shows how you can easily reuse code between runtime and constant-evaluated contexts without duplication. Without this paper you would need to duplicate multiple functions.
```
constexpr bool process_first_unprocessed(std::atomic<std::size_t> & counter, std::span<int> subject) {
    // BEFORE: compile-time error when you try to evaluate this inside constant evaluated code
    // AFTER: works sequentially in constant-evaluated code
    const std::size_t current = counter.fetch_add(1);

    if (current >= subject.size()) {
        return false;
    }

    process(subject[current]);
    return true;
}

constexpr void process_all(std::span<int> subject, unsigned thread_count = 1) {
    // BEFORE: calling the following function in constant evaluated code will always fail with any number of requested threads
    // AFTER: calling it with argument thread_count == 1 will succeed in constant evaluated code
    std::atomic<std::size_t> counter{0};
    auto threads = std::vector<std::jthread>{};

    assert(thread_count >= 1);

    for (unsigned i = 1; i < thread_count; ++i) {
        threads.emplace_back([&]{
            while (process_first_unprocessed(counter, subject));
        });
    }

    while (process_first_unprocessed(counter, subject));
}
```
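A hypothetical compile-time use of the function above, assuming the int element type shown, a suitable constexpr process(), and the changes proposed in this paper:

```
// thread_count defaults to 1, so the whole pipeline runs sequentially
// and is usable inside a constant expression
constexpr bool smoke_test() {
    std::array<int, 4> data{1, 2, 3, 4};
    process_all(std::span<int>{data});
    return true;
}
static_assert(smoke_test());
```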
## Implementation experience

This was implemented in libc++ and Clang by adding constexpr in the needed places and implementing the atomic builtins in the constant evaluator.
## Impact on existing code

None; currently std::atomic and std::atomic_ref can't be used in constant-evaluated code.
## Proposed changes to wording
## 33 Concurrency support library [thread]
## 33.5 Atomic operations [atomics]
### 33.5.1 General [atomics.general]
Subclause [atomics] describes components for fine-grained atomic access.
This access is provided via operations on atomic objects.
### 33.5.3 Type aliases [atomics.alias]
The type aliases atomic_intN_t, atomic_uintN_t, atomic_intptr_t, and atomic_uintptr_t are defined if and only if intN_t, uintN_t, intptr_t, and uintptr_t are defined, respectively.
The type aliases atomic_signed_lock_free and atomic_unsigned_lock_free name specializations of atomic whose template arguments are integral types, respectively signed and unsigned, and whose is_always_lock_free property is true.
[Note 1:
These aliases are optional in freestanding implementations ([compliance]).
— end note]
Implementations should choose for these aliases the integral specializations of atomic for which the atomic waiting and notifying operations ([atomics.wait]) are most efficient.
### 33.5.4 Order and consistency [atomics.order]
```
namespace std {
  enum class memory_order : unspecified {
    relaxed, consume, acquire, release, acq_rel, seq_cst
  };
}
```
The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering.
Its enumerated values and their meanings are as follows:
* memory_order::relaxed: no operation orders memory.
* memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
* memory_order::consume: a load operation performs a consume operation on the affected memory location. [Note 1: Prefer memory_order::acquire, which provides stronger guarantees than memory_order::consume. Implementations have found it infeasible to provide performance better than that of memory_order::acquire. Specification revisions are under consideration. — end note]
* memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.
[Note 2:
Atomic operations specifying memory_order::relaxed are relaxed with respect to memory ordering.
Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object.
— end note]
An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.
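For illustration (an example, not part of the quoted wording): a release store that is read by an acquire load makes the writer's earlier side effects visible to the reader; a minimal sketch:

```
#include <atomic>
#include <cassert>

int data = 0;
std::atomic<bool> ready{false};

void writer() {   // runs in one thread
    data = 42;
    ready.store(true, std::memory_order::release);      // A: release operation
}

void reader() {   // runs in another thread
    while (!ready.load(std::memory_order::acquire)) {}  // B: acquire, takes its value from A
    assert(data == 42);  // guaranteed: A synchronizes with B
}
```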
An atomic operation A on some atomic object M is coherence-ordered before another atomic operation B on M if

* A is a modification, and B reads the value stored by A, or
* A precedes B in the modification order of M, or
* A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
* there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B.

There is a single total order S on all memory_order::seq_cst operations, including fences, that satisfies the following constraints.
First, if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S.
Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S:

* if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
* if A is a memory_order::seq_cst operation and B happens before a memory_order::seq_cst fence Y, then A precedes Y in S; and
* if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation, then X precedes B in S; and
* if a memory_order::seq_cst fence X happens before A and B happens before a memory_order::seq_cst fence Y, then X precedes Y in S.
[Note 3:
This definition ensures that S is consistent with the modification order of any atomic object M.
It also ensures that a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S.
— end note]
[Note 4:
We do not require that S be consistent with “happens before” ([intro.races]).
This allows more efficient implementation of memory_order::acquire and memory_order::release on some machine architectures.
It can produce surprising results when these are mixed with memory_order::seq_cst accesses.
— end note]
[Note 5:
memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst atomic operations.
Any use of weaker ordering will invalidate this guarantee unless extreme care is used.
In many cases, memory_order::seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread.
— end note]
Implementations should ensure that no “out-of-thin-air” values are computed that circularly depend on their own computation.
[Note 6:
For example, with x and y initially zero,

```
// Thread 1:
r1 = y.load(memory_order::relaxed);
x.store(r1, memory_order::relaxed);

// Thread 2:
r2 = x.load(memory_order::relaxed);
y.store(r2, memory_order::relaxed);
```

this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42.
Note that without this restriction, such an execution is possible.
— end note]
[Note 7:
The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
```
// Thread 1:
r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);

// Thread 2:
r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);
```

— end note]
Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
Recommended practice: The implementation should make atomic stores visible to atomic loads, and atomic loads should observe atomic stores, within a reasonable amount of time.
template<class T> constexpr T kill_dependency(T y) noexcept;
### 33.5.5 Lock-free property [atomics.lockfree]
```
#define ATOMIC_BOOL_LOCK_FREE unspecified
#define ATOMIC_CHAR_LOCK_FREE unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE unspecified
#define ATOMIC_SHORT_LOCK_FREE unspecified
#define ATOMIC_INT_LOCK_FREE unspecified
#define ATOMIC_LONG_LOCK_FREE unspecified
#define ATOMIC_LLONG_LOCK_FREE unspecified
#define ATOMIC_POINTER_LOCK_FREE unspecified
```
The ATOMIC_..._LOCK_FREE macros indicate the lock-free property of the corresponding atomic types, with the signed and unsigned variants grouped together.
The properties also apply to the corresponding (partial) specializations of the atomic template.
A value of 0 indicates that the types are never lock-free.
A value of 1 indicates that the types are sometimes lock-free.
A value of 2 indicates that the types are always lock-free.
On a hosted implementation ([compliance]), at least one signed integral specialization of the atomic template, along with the specialization for the corresponding unsigned type ([basic.fundamental]), is always lock-free.
The functions atomic<T>::is_lock_free and atomic_is_lock_free ([atomics.types.operations]) indicate whether the object is lock-free.
In any given program execution, the result of the lock-free query is the same for all atomic objects of the same type.
Atomic operations that are not lock-free are considered to potentially block ([intro.progress]).
Recommended practice: Operations that are lock-free should also be address-free.
The implementation of these operations should not depend on any per-process state.
[Note 1:
This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes.
— end note]
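For illustration (not part of the quoted wording), the macro, the trait, and the member function answer the same question at different times; a small sketch:

```
#include <atomic>

// a macro value of 2 corresponds to "always lock-free"
#if ATOMIC_INT_LOCK_FREE == 2
static_assert(std::atomic<int>::is_always_lock_free);
#endif

// is_lock_free() answers for the running environment,
// which is why this paper keeps it non-constexpr
bool query(const std::atomic<long long>& x) {
    return x.is_lock_free();
}
```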
### 33.5.6 Waiting and notifying [atomics.wait]
Atomic waiting operations and atomic notifying operations provide a mechanism to wait for the value of an atomic object to change more efficiently than can be achieved with polling.
An atomic waiting operation may block until it is unblocked by an atomic notifying operation, according to each function's effects.
[Note 1:
Programs are not guaranteed to observe transient atomic values, an issue known as the A-B-A problem, resulting in continued blocking if a condition is only temporarily met.
— end note]
[Note 2:
The following functions are atomic waiting operations:
* atomic<T>::wait,
* atomic_flag::wait,
* atomic_wait and atomic_wait_explicit,
* atomic_flag_wait and atomic_flag_wait_explicit, and
* atomic_ref<T>::wait.
— end note]
[Note 3:
The following functions are atomic notifying operations:
* atomic<T>::notify_one and atomic<T>::notify_all,
* atomic_flag::notify_one and atomic_flag::notify_all,
* atomic_notify_one and atomic_notify_all,
* atomic_flag_notify_one and atomic_flag_notify_all, and
* atomic_ref<T>::notify_one and atomic_ref<T>::notify_all.
— end note]
A call to an atomic waiting operation on an atomic object M is eligible to be unblocked by a call to an atomic notifying operation on M if there exist side effects X and Y on M such that:

* the atomic waiting operation has blocked after observing the result of X,
* X precedes Y in the modification order of M, and
* Y happens before the call to the atomic notifying operation.
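A sketch (illustrative only) mapping these three conditions onto a typical wait/notify pairing:

```
#include <atomic>

std::atomic<int> state{0};

void waiter() {
    state.wait(0);       // blocks after observing the result of X (the initialization to 0)
}

void notifier() {
    state.store(1);      // Y: X precedes Y in the modification order of state
    state.notify_one();  // Y happens before this call, so the waiter is eligible to be unblocked
}
```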
### 33.5.7 Class template atomic_ref [atomics.ref.generic]
#### 33.5.7.1 General [atomics.ref.generic.general]
```
namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;
  public:
    using value_type = T;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T operator=(T) const noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T() const noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) const noexcept;

    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```
An atomic_ref object applies atomic operations ([atomics.general]) to the object referenced by *ptr such that, for the lifetime ([basic.life]) of the atomic_ref object, the object referenced by *ptr is an atomic object ([intro.races]).
The program is ill-formed if is_trivially_copyable_v<T> is false.
The lifetime ([basic.life]) of an object referenced by *ptr shall exceed the lifetime of all atomic_refs that reference the object.
While any atomic_ref instances exist that reference the *ptr object, all accesses to that object shall exclusively occur through those atomic_ref instances.
No subobject of the object referenced by atomic_ref shall be concurrently referenced by any other atomic_ref object.
Atomic operations applied to an object through a referencing atomic_ref are atomic with respect to atomic operations applied through any other atomic_ref referencing the same object.
[Note 1:
Atomic operations or the atomic_ref constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object.
— end note]
#### 33.5.7.2 Operations [atomics.ref.ops]
static constexpr size_t required_alignment;
The alignment required for an object to be referenced by an atomic reference, which is at least alignof(T).
[Note 1:
Hardware could require an object referenced by an atomic_ref to have stricter alignment ([basic.align]) than other objects of type T.
Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object.
For example, lock-free operations on std::complex<double> could be supported only if aligned to 2*alignof(double).
— end note]
static constexpr bool is_always_lock_free;
The static data member is_always_lock_free is true if the atomic_ref type's operations are always lock-free, and false otherwise.
bool is_lock_free() const noexcept;
Returns: true if operations on all objects of the type atomic_ref<T> are lock-free, false otherwise.
constexpr atomic_ref(T& obj);
Preconditions: The referenced object is aligned to required_alignment.
Postconditions: *this references obj.
constexpr atomic_ref(const atomic_ref& ref) noexcept;
Postconditions: *this references the object referenced by ref.
constexpr void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value referenced by *ptr with the value of desired.
Memory is affected according to the value of order.
constexpr T operator=(T desired) const noexcept;
Effects: Equivalent to: store(desired); return desired;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value referenced by *ptr.
constexpr operator T() const noexcept;
Effects: Equivalent to: return load();
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with desired.
Memory is affected according to the value of order.
This operation is an atomic read-modify-write operation ([intro.multithread]).
Returns: Atomically returns the value referenced by *ptr immediately before the effects.
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order order = memory_order::seq_cst) const noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected.
It then atomically compares the value representation of the value referenced by *ptr for equality with that previously retrieved from expected, and if true, replaces the value referenced by *ptr with that in desired.
If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure.
When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value read from the value referenced by *ptr during the atomic comparison.
If the operation returns true, these operations are atomic read-modify-write operations ([intro.races]) on the value referenced by *ptr.
Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
Remarks: A weak compare-and-exchange operation may fail spuriously.
That is, even when the contents of memory referred to by expected and ptr are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 2:
This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines.
A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop.
When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms.
When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable.
— end note]
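As the note suggests, the weak form is typically used in a loop; a minimal sketch with atomic_ref:

```
#include <atomic>

void increment(int& obj) {
    std::atomic_ref<int> ref(obj);
    int expected = ref.load();
    // on failure (including spurious failure) expected is refreshed
    // with the current value, so the loop simply retries
    while (!ref.compare_exchange_weak(expected, expected + 1)) {
    }
}
```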
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
* Evaluates load(order) and compares its value representation for equality against that of old.
* If they compare unequal, returns.
* Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: This function is an atomic waiting operation ([atomics.wait]) on atomic object *ptr.
constexpr void notify_one() const noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation on *ptr that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object *ptr.
constexpr void notify_all() const noexcept;
Effects: Unblocks the execution of all atomic waiting operations on *ptr that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]) on atomic object *ptr.
#### 33.5.7.3 Specializations for integral types [atomics.ref.int]
There are specializations of the atomic_ref class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint>.
For each such type integral-type, the specialization atomic_ref<integral-type> provides additional atomic operations appropriate to integral types.
[Note 1:
The specialization atomic_ref<bool> uses the primary template ([atomics.ref.generic]).
— end note]
```
namespace std {
  template<> struct atomic_ref<integral-type> {
  private:
    integral-type* ptr;
  public:
    using value_type = integral-type;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(integral-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type operator=(integral-type) const noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator integral-type() const noexcept;
    constexpr integral-type exchange(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr integral-type operator++(int) const noexcept;
    constexpr integral-type operator--(int) const noexcept;
    constexpr integral-type operator++() const noexcept;
    constexpr integral-type operator--() const noexcept;
    constexpr integral-type operator+=(integral-type) const noexcept;
    constexpr integral-type operator-=(integral-type) const noexcept;
    constexpr integral-type operator&=(integral-type) const noexcept;
    constexpr integral-type operator|=(integral-type) const noexcept;
    constexpr integral-type operator^=(integral-type) const noexcept;

    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations.
The correspondence among key, operator, and computation is specified in Table 148.
constexpr integral-type fetch_key(integral-type operand, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2:
There are no undefined results arising from the computation.
— end note]
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
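A small sketch of the fetch_max/fetch_min semantics described above, assuming an implementation that already provides these C++26 operations:

```
#include <atomic>
#include <cassert>

void example() {
    int value = 3;
    std::atomic_ref<int> ref(value);
    assert(ref.fetch_max(5) == 3);  // stores max(3, 5) == 5, returns the old value
    assert(value == 5);
    assert(ref.fetch_min(2) == 5);  // stores min(5, 2) == 2, returns the old value
    assert(value == 2);
}
```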
constexpr integral-type operator op=(integral-type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
#### 33.5.7.4 Specializations for floating-point types [atomics.ref.float]
There are specializations of the atomic_ref class template for all cv-unqualified floating-point types.
For each such type floating-point-type, the specialization atomic_ref<floating-point-type> provides additional atomic operations appropriate to floating-point types.
```
namespace std {
  template<> struct atomic_ref<floating-point-type> {
  private:
    floating-point-type* ptr;
  public:
    using value_type = floating-point-type;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(floating-point-type&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type operator=(floating-point-type) const noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator floating-point-type() const noexcept;
    constexpr floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr floating-point-type operator+=(floating-point-type) const noexcept;
    constexpr floating-point-type operator-=(floating-point-type) const noexcept;

    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations.
The correspondence among key, operator, and computation is specified in Table 148.
constexpr floating-point-type fetch_key(floating-point-type operand, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
constexpr floating-point-type operator op=(floating-point-type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
#### 33.5.7.5 Partial specialization for pointers [atomics.ref.pointer]
```
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;
  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T*&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* operator=(T*) const noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator T*() const noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) const noexcept;

    constexpr T* operator++(int) const noexcept;
    constexpr T* operator--(int) const noexcept;
    constexpr T* operator++() const noexcept;
    constexpr T* operator--() const noexcept;
    constexpr T* operator+=(difference_type) const noexcept;
    constexpr T* operator-=(difference_type) const noexcept;

    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;
  };
}
```
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations.
The correspondence among key, operator, and computation is specified in Table 149.
constexpr T* fetch_key(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;
Mandates: T is a complete object type.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.races]).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 1:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]).
— end note]
constexpr T* operator op=(difference_type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
#### 33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]
constexpr value_type operator++(int) const noexcept;
Effects: Equivalent to: return fetch_add(1);
constexpr value_type operator--(int) const noexcept;
Effects: Equivalent to: return fetch_sub(1);
constexpr value_type operator++() const noexcept;
Effects: Equivalent to: return fetch_add(1) + 1;
constexpr value_type operator--() const noexcept;
Effects: Equivalent to: return fetch_sub(1) - 1;
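The equivalences above in action (an illustrative sketch, not part of the quoted wording):

```
#include <atomic>
#include <cassert>

void example() {
    std::atomic<int> n{0};
    assert(n++ == 0);  // fetch_add(1): yields the old value; n is now 1
    assert(++n == 2);  // fetch_add(1) + 1: yields the new value
    assert(n-- == 2);  // fetch_sub(1): yields the old value; n is now 1
    assert(--n == 0);  // fetch_sub(1) - 1: yields the new value
}
```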
### 33.5.8 Class template atomic [atomics.types.generic]
#### 33.5.8.1 General [atomics.types.generic.general]
```
namespace std {
  template<class T> struct atomic {
    using value_type = T;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    T load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const volatile noexcept;
    constexpr operator T() const noexcept;
    void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
    T operator=(T) volatile noexcept;
    constexpr T operator=(T) noexcept;

    T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
The template argument for T shall meet the Cpp17CopyConstructible and Cpp17CopyAssignable requirements.
The program is ill-formed if any of
* is_trivially_copyable_v<T>,
* is_copy_constructible_v<T>,
* is_move_constructible_v<T>,
* is_copy_assignable_v<T>, or
* is_move_assignable_v<T>
is false.
[Note 1:
Type arguments that are not also statically initializable can be difficult to use.
— end note]
The specialization atomic<bool> is a standard-layout struct.
It has a trivial destructor.
[Note 2:
The representation of an atomic specialization need not have the same size and alignment requirement as its corresponding argument type.
— end note]
#### 33.5.8.2 Operations on atomic types [atomics.types.operations]
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
Mandates: is_default_constructible_v<T> is true.
Effects: Initializes the atomic object with the value of T().
constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1:
It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread.
This results in undefined behavior.
— end note]
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise.
[Note 2:
The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined.
— end note]
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise.
[Note 3:
The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type.
— end note]
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired.
Memory is affected according to the value of order.
T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to store(desired).
T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.
operator T() const volatile noexcept;
constexpr operator T() const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return load();
T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with desired.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically returns the value pointed to by this immediately before the effects.
bool compare_exchange_weak(T& expected, T desired, memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired, memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: failure is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected.
It then atomically compares the value representation of the value pointed to by this for equality with that previously retrieved from expected, and if true, replaces the value pointed to by this with that in desired.
If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure.
When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value pointed to by this during the atomic comparison.
If the operation returns true, these operations are atomic read-modify-write operations ([intro.multithread]) on the memory pointed to by this.
Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
[Note 4:
For example, the effect of compare_exchange_strong on objects without padding bits ([basic.types.general]) is

```
if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
```
— end note]
[Example 1:
The expected use of the compare-and-exchange operations is as follows.
The compare-and-exchange operations will update expected when another iteration of the loop is needed.
```
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
```

— end example]
[Example 2:
Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work.
For example, list head insertion will act atomically and would not introduce a data race in the following code:

```
do {
  p->next = head;
} while (!head.compare_exchange_weak(p->next, p));
```
— end example]
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously.
That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 5:
This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines.
A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop.
When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms.
When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable.
— end note]
[Note 6:
Under cases where the memcpy and memcmp semantics of the compare-and-exchange operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate representations of the same value.
Notably, on implementations conforming to ISO/IEC/IEEE 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==, and NaNs with the same payload will compare equal with memcmp but will not compare equal with operator==.
— end note]
[Note 7:
Because compare-and-exchange acts on an object's value representation, padding bits that never participate in the object's value representation are ignored.
As a consequence, the following code is guaranteed to avoid spurious failure:

```
struct padded {
  char clank = 0x42;
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};

bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
```
— end note]
[Note 8:
For a union with bits that participate in the value representation of some members but not others, compare-and-exchange might always fail.
This is because such padding bits have an indeterminate value when they do not participate in the value representation of the active member.
As a consequence, the following code is not guaranteed to ever succeed:

```
union pony {
  double celestia = 0.;
  short luna;
};
atomic<pony> princesses = {};

bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
```
— end note]
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
* Evaluates load(order) and compares its value representation for equality against that of old.
* If they compare unequal, returns.
* Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: This function is an atomic waiting operation ([atomics.wait]).
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
#### 33.5.8.3 Specializations for integers [atomics.types.int]
There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint>.
For each such type integral-type, the specialization atomic<integral-type> provides additional atomic operations appropriate to integral types.
[Note 1:
The specialization atomic<bool> uses the primary template ([atomics.types.generic]).
— end note]
```
namespace std {
  template<> struct atomic<integral-type> {
    using value_type = integral-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(integral-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type operator=(integral-type) volatile noexcept;
    constexpr integral-type operator=(integral-type) noexcept;
    integral-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral-type() const volatile noexcept;
    constexpr operator integral-type() const noexcept;

    integral-type exchange(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type exchange(integral-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) noexcept;

    integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) noexcept;

    integral-type operator++(int) volatile noexcept;
    constexpr integral-type operator++(int) noexcept;
    integral-type operator--(int) volatile noexcept;
    constexpr integral-type operator--(int) noexcept;
    integral-type operator++() volatile noexcept;
    constexpr integral-type operator++() noexcept;
    integral-type operator--() volatile noexcept;
    constexpr integral-type operator--() noexcept;
    integral-type operator+=(integral-type) volatile noexcept;
    constexpr integral-type operator+=(integral-type) noexcept;
    integral-type operator-=(integral-type) volatile noexcept;
    constexpr integral-type operator-=(integral-type) noexcept;
    integral-type operator&=(integral-type) volatile noexcept;
    constexpr integral-type operator&=(integral-type) noexcept;
    integral-type operator|=(integral-type) volatile noexcept;
    constexpr integral-type operator|=(integral-type) noexcept;
    integral-type operator^=(integral-type) volatile noexcept;
    constexpr integral-type operator^=(integral-type) noexcept;

    void wait(integral-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
The atomic integral specializations are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations.
The correspondence among key, operator, and computation is specified in Table 148.
Table 148: Atomic arithmetic computations [tab:atomic.types.int.comp]
| key | Op | Computation | key | Op | Computation |
| --- | --- | --- | --- | --- | --- |
| add | + | addition | and | & | bitwise and |
| sub | - | subtraction | or | \| | bitwise inclusive or |
| max | | maximum | xor | ^ | bitwise exclusive or |
| min | | minimum | | | |
T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2:
There are no undefined results arising from the computation.
— end note]
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
T operator op=(T operand) volatile noexcept;
constexpr T operator op=(T operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
#### 33.5.8.4 Specializations for floating-point types [atomics.types.float]
There are specializations of the atomic class template for all cv-unqualified floating-point types.
For each such type floating-point-type, the specialization atomic<floating-point-type> provides additional atomic operations appropriate to floating-point types.
```
namespace std {
  template<> struct atomic<floating-point-type> {
    using value_type = floating-point-type;
    using difference_type = value_type;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator=(floating-point-type) noexcept;
    floating-point-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point-type() const volatile noexcept;
    constexpr operator floating-point-type() const noexcept;

    floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) noexcept;

    floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator+=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator+=(floating-point-type) noexcept;
    floating-point-type operator-=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator-=(floating-point-type) noexcept;

    void wait(floating-point-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
The atomic floating-point specializations are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic addition and subtraction computations.
The correspondence among key, operator, and computation is specified in Table 148.
```
T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
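For illustration only (not proposed wording): a minimal sketch of the constexpr fetch_add and operator+= overloads above in constant evaluation; the function name accumulate_atomically is invented for this example, and the static_assert compiles only with the constexpr additions this paper proposes.

```
#include <atomic>

constexpr double accumulate_atomically() {
    std::atomic<double> total{0.0};
    total.fetch_add(1.5);   // atomic read-modify-write, evaluated sequentially here
    total += 2.5;           // equivalent to fetch_add(2.5) + 2.5
    return total.load();
}

static_assert(accumulate_atomically() == 4.0);  // 1.5 and 2.5 are exact in binary
```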
```
T operator op=(T operand) volatile noexcept;
constexpr T operator op=(T operand) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
Remarks: If the result is not a representable value for its type ([expr.pre]), the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on floating-point-type should conform to the std::numeric_limits<floating-point-type> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
#### 33.5.8.5 Partial specialization for pointers [atomics.types.pointer]
```
namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    constexpr T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    constexpr operator T*() const noexcept;

    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) noexcept;

    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;

    T* operator++(int) volatile noexcept;
    constexpr T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    constexpr T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    constexpr T* operator++() noexcept;
    T* operator--() volatile noexcept;
    constexpr T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    constexpr T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    constexpr T* operator-=(ptrdiff_t) noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
There is a partial specialization of the atomic class template for pointers.
Specializations of this partial specialization are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform pointer arithmetic.
The correspondence among key, operator, and computation is specified in Table 149.
```
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 1:
Pointer arithmetic on void* or function pointers is ill-formed.
— end note]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by the max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 2:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]).
— end note]
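For illustration only (not proposed wording): a sketch of the pointer fetch_add and fetch_max operations above during constant evaluation; probe_pointer_ops is an invented name, both pointers address the same array so the comparison in fetch_max is well-defined, and the code requires the constexpr overloads proposed here.

```
#include <atomic>

constexpr int probe_pointer_ops() {
    int buffer[4]{10, 20, 30, 40};
    std::atomic<int*> cursor{buffer};
    int* before = cursor.fetch_add(2);  // returns buffer, stores buffer + 2
    cursor.fetch_max(buffer + 1);       // max(buffer + 2, buffer + 1): value unchanged
    return *cursor.load() + *before;    // 30 + 10
}

static_assert(probe_pointer_ops() == 40);
```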
```
T* operator op=(ptrdiff_t operand) volatile noexcept;
constexpr T* operator op=(ptrdiff_t operand) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
#### 33.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
```
value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1);
```
value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1);
```
value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1) + 1;
```
value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
```
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1) - 1;
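For illustration only (not proposed wording): the equivalences above, checked during constant evaluation; increment_matches_fetch_add is an invented name and the check needs the proposed constexpr overloads.

```
#include <atomic>

constexpr bool increment_matches_fetch_add() {
    std::atomic<int> a{5};
    std::atomic<int> b{5};
    const int post = a++;  // return fetch_add(1);     yields 5
    const int pre  = ++b;  // return fetch_add(1) + 1; yields 6
    return post == 5 && pre == 6 && a.load() == 6 && b.load() == 6;
}

static_assert(increment_matches_fetch_add());
```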
### 33.5.9 Non-member functions [atomics.nonmembers]
A non-member function template whose name matches the pattern atomic_f or the pattern atomic_f_explicit invokes the member function f, with the value of the first parameter as the object expression and the values of the remaining parameters (if any) as the arguments of the member function call, in order.
An argument for a parameter of type atomic<T>::value_type* is dereferenced when passed to the member function call.
If no such member function exists, the program is ill-formed.
[Note 1:
The non-member functions enable programmers to write code that can be compiled as either C or C++, for example in a shared header file.
— end note]
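For illustration only (not proposed wording): the non-member forms above used in a constant expression; use_nonmember_forms is an invented name.

```
#include <atomic>

constexpr int use_nonmember_forms() {
    std::atomic<int> counter{0};
    std::atomic_store(&counter, 7);                  // invokes counter.store(7)
    std::atomic_fetch_add_explicit(
        &counter, 1, std::memory_order::relaxed);    // invokes counter.fetch_add(1, relaxed)
    return std::atomic_load(&counter);               // invokes counter.load()
}

static_assert(use_nonmember_forms() == 8);
```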
### 33.5.10 Flag type and operations [atomics.flag]
```
namespace std {
  struct atomic_flag {
    constexpr atomic_flag() noexcept;
    atomic_flag(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) volatile = delete;

    bool test(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr bool test(memory_order = memory_order::seq_cst) const noexcept;
    bool test_and_set(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool test_and_set(memory_order = memory_order::seq_cst) noexcept;
    void clear(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void clear(memory_order = memory_order::seq_cst) noexcept;

    void wait(bool, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(bool, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
```
The atomic_flag type provides the classic test-and-set functionality.
It has two states, set and clear.
Operations on an object of type atomic_flag shall be lock-free.
The operations should also be address-free.
The atomic_flag type is a standard-layout struct.
It has a trivial destructor.
```
constexpr atomic_flag::atomic_flag() noexcept;
```
Effects: Initializes *this to the clear state.
```
bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_explicit(const atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
```
For atomic_flag_test, let order be memory_order::seq_cst.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by object or this.
```
bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
```
Effects: Atomically sets the value pointed to by object or by this to true.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value of the object immediately before the effects.
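For illustration only (not proposed wording): a round trip through the atomic_flag operations above in constant evaluation; flag_round_trip is an invented name.

```
#include <atomic>

constexpr bool flag_round_trip() {
    std::atomic_flag f;                     // default-constructed: clear state
    const bool was_set = f.test_and_set();  // clear -> set, returns previous state
    const bool now_set = f.test();
    f.clear();
    return !was_set && now_set && !f.test();
}

static_assert(flag_round_trip());
```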
```
void atomic_flag_clear(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
```
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically sets the value pointed to by object or by this to false.
Memory is affected according to the value of order.
```
void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
constexpr void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object, bool old, memory_order order) noexcept;
constexpr void atomic_flag_wait_explicit(const atomic_flag* object, bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void atomic_flag::wait(bool old, memory_order order = memory_order::seq_cst) const noexcept;
```
For atomic_flag_wait, let order be memory_order::seq_cst.
Let flag be object for the non-member functions and this for the member functions.
Preconditions: order is memory_order::relaxed, memory_order::consume, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
* Evaluates flag->test(order) != old.
* If the result of that evaluation is true, returns.
* Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: This function is an atomic waiting operation ([atomics.wait]).
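For illustration only (not proposed wording): how the loop above plays out in a single-threaded constant evaluation, per the wait/notify discussion earlier in this paper; wait_returns_when_value_differs is an invented name.

```
#include <atomic>

constexpr bool wait_returns_when_value_differs() {
    std::atomic_flag f;   // clear state, so f.test() == false
    f.wait(true);         // test() != true on the first check, returns immediately
    // f.wait(false) here would never observe a different value: with a single
    // thread the loop cannot terminate, so constant evaluation would fail.
    f.notify_all();       // nothing is waiting; a no-op during constant evaluation
    return true;
}

static_assert(wait_returns_when_value_differs());
```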
```
void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
constexpr void atomic_flag::notify_one() noexcept;
```
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
```
void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
constexpr void atomic_flag::notify_all() noexcept;
```
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
```
#define ATOMIC_FLAG_INIT see below
```
Remarks: The macro ATOMIC_FLAG_INIT is defined in such a way that it can be used to initialize an object of type atomic_flag to the clear state.
The macro can be used in the form: atomic_flag guard = ATOMIC_FLAG_INIT;
It is unspecified whether the macro can be used in other initialization contexts.
For a complete static-duration object, that initialization shall be static.
### 33.5.11 Fences [atomics.fences]
This subclause introduces synchronization primitives called fences.
Fences can have acquire semantics, release semantics, or both.
A fence with acquire semantics is called an acquire fence.
A fence with release semantics is called a release fence.
A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A.
extern "C" constexpr void atomicthreadfence(memoryorder order) noexcept;
Effects: Depending on the value of order, this operation:
* has no effects, if order == memory_order::relaxed;
* is an acquire fence, if order == memory_order::acquire or order == memory_order::consume;
* is a release fence, if order == memory_order::release;
* is both an acquire fence and a release fence, if order == memory_order::acq_rel;
* is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst.
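For illustration only (not proposed wording): fences compile during constant evaluation but establish no additional ordering, since there is only one thread; fenced_store_load is an invented name.

```
#include <atomic>

constexpr int fenced_store_load() {
    std::atomic<int> value{0};
    value.store(1, std::memory_order::relaxed);
    std::atomic_thread_fence(std::memory_order::release);  // accepted, no effect here
    std::atomic_signal_fence(std::memory_order::acquire);  // likewise
    return value.load(std::memory_order::relaxed);
}

static_assert(fenced_store_load() == 1);
```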
extern "C" constexpr void atomicsignalfence(memoryorder order) noexcept;
Effects: Equivalent to atomic_thread_fence(order), except that the resulting ordering constraints are established only between a thread and a signal handler executed in the same thread.
[Note 1:
atomic_signal_fence can be used to specify the order in which actions performed by the thread become visible to the signal handler.
Compiler optimizations and reorderings of loads and stores are inhibited in the same way as with atomic_thread_fence, but the hardware fence instructions that atomic_thread_fence would have inserted are not emitted.
— end note]
### 33.5.12 C compatibility [stdatomic.h.syn]
The header `<stdatomic.h>` provides the following definitions:
```
template<class T>
  using std-atomic = std::atomic<T>;

#define _Atomic(T) std-atomic<T>

#define ATOMIC_BOOL_LOCK_FREE see below
#define ATOMIC_CHAR_LOCK_FREE see below
#define ATOMIC_CHAR16_T_LOCK_FREE see below
#define ATOMIC_CHAR32_T_LOCK_FREE see below
#define ATOMIC_WCHAR_T_LOCK_FREE see below
#define ATOMIC_SHORT_LOCK_FREE see below
#define ATOMIC_INT_LOCK_FREE see below
#define ATOMIC_LONG_LOCK_FREE see below
#define ATOMIC_LLONG_LOCK_FREE see below
#define ATOMIC_POINTER_LOCK_FREE see below

using std::memory_order;
using std::memory_order_relaxed;
using std::memory_order_consume;
using std::memory_order_acquire;
using std::memory_order_release;
using std::memory_order_acq_rel;
using std::memory_order_seq_cst;

using std::atomic_flag;

using std::atomic_bool;
using std::atomic_char;
using std::atomic_schar;
using std::atomic_uchar;
using std::atomic_short;
using std::atomic_ushort;
using std::atomic_int;
using std::atomic_uint;
using std::atomic_long;
using std::atomic_ulong;
using std::atomic_llong;
using std::atomic_ullong;
using std::atomic_char8_t;
using std::atomic_char16_t;
using std::atomic_char32_t;
using std::atomic_wchar_t;
using std::atomic_int8_t;
using std::atomic_uint8_t;
using std::atomic_int16_t;
using std::atomic_uint16_t;
using std::atomic_int32_t;
using std::atomic_uint32_t;
using std::atomic_int64_t;
using std::atomic_uint64_t;
using std::atomic_int_least8_t;
using std::atomic_uint_least8_t;
using std::atomic_int_least16_t;
using std::atomic_uint_least16_t;
using std::atomic_int_least32_t;
using std::atomic_uint_least32_t;
using std::atomic_int_least64_t;
using std::atomic_uint_least64_t;
using std::atomic_int_fast8_t;
using std::atomic_uint_fast8_t;
using std::atomic_int_fast16_t;
using std::atomic_uint_fast16_t;
using std::atomic_int_fast32_t;
using std::atomic_uint_fast32_t;
using std::atomic_int_fast64_t;
using std::atomic_uint_fast64_t;
using std::atomic_intptr_t;
using std::atomic_uintptr_t;
using std::atomic_size_t;
using std::atomic_ptrdiff_t;
using std::atomic_intmax_t;
using std::atomic_uintmax_t;

using std::atomic_is_lock_free;
using std::atomic_load;
using std::atomic_load_explicit;
using std::atomic_store;
using std::atomic_store_explicit;
using std::atomic_exchange;
using std::atomic_exchange_explicit;
using std::atomic_compare_exchange_strong;
using std::atomic_compare_exchange_strong_explicit;
using std::atomic_compare_exchange_weak;
using std::atomic_compare_exchange_weak_explicit;
using std::atomic_fetch_add;
using std::atomic_fetch_add_explicit;
using std::atomic_fetch_sub;
using std::atomic_fetch_sub_explicit;
using std::atomic_fetch_and;
using std::atomic_fetch_and_explicit;
using std::atomic_fetch_or;
using std::atomic_fetch_or_explicit;
using std::atomic_fetch_xor;
using std::atomic_fetch_xor_explicit;
using std::atomic_flag_test_and_set;
using std::atomic_flag_test_and_set_explicit;
using std::atomic_flag_clear;
using std::atomic_flag_clear_explicit;

#define ATOMIC_FLAG_INIT see below

using std::atomic_thread_fence;
using std::atomic_signal_fence;
```
Each using-declaration for some name A in the synopsis above makes available the same entity as std::A declared in `<atomic>`.
Each macro listed above other than _Atomic(T) is defined as in `<atomic>`.
It is unspecified whether `<stdatomic.h>` makes available any declarations in namespace std.
Each of the using-declarations for intN_t, uintN_t, intptr_t, and uintptr_t listed above is defined if and only if the implementation defines the corresponding typedef-name in [atomics.syn].
Neither the _Atomic macro, nor any of the non-macro global namespace declarations, are provided by any C++ standard library header other than `<stdatomic.h>`.
Recommended practice: Implementations should ensure that C and C++ representations of atomic objects are compatible, so that the same object can be accessed as both an _Atomic(T) from C code and an atomic<T> from C++ code.
The representations should be the same, and the mechanisms used to ensure atomicity and memory ordering should be compatible.
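For illustration only (not proposed wording): the kind of shared header this recommended practice enables; counter.h, counter_t, and counter_bump are invented names, and the sketch assumes an implementation where C's atomic_fetch_add generic function and std::atomic_fetch_add (made constexpr by this paper) are compatible, as recommended above.

```
/* counter.h -- intended to compile as both C and C++ */
#include <stdatomic.h>

typedef struct {
    _Atomic(int) hits;  /* same object usable as std::atomic<int> from C++ */
} counter_t;

static inline int counter_bump(counter_t* c) {
    /* resolves to the C generic function in C, and to
       std::atomic_fetch_add in C++ */
    return atomic_fetch_add(&c->hits, 1);
}
```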