[atomics.types.generic]
32 Concurrency support library [thread]
32.5 Atomic operations [atomics]
32.5.8 Class template atomic [atomics.types.generic]
32.5.8.1 General [atomics.types.generic.general]
32.5.8.2 Operations on atomic types [atomics.types.operations]
32.5.8.3 Specializations for integers [atomics.types.int]
32.5.8.4 Specializations for floating-point types [atomics.types.float]
32.5.8.5 Partial specialization for pointers [atomics.types.pointer]
32.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
32.5.8.7 Partial specializations for smart pointers [util.smartptr.atomic]
32.5.8.7.1 General [util.smartptr.atomic.general]
32.5.8.7.2 Partial specialization for shared_ptr [util.smartptr.atomic.shared]
32.5.8.7.3 Partial specialization for weak_ptr [util.smartptr.atomic.weak]
32.5.8.1 General [atomics.types.generic.general]
namespace std {
  template<class T> struct atomic {
    using value_type = T;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    T load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T load(memory_order = memory_order::seq_cst) const noexcept;
    operator T() const volatile noexcept;
    constexpr operator T() const noexcept;
    void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
    T operator=(T) volatile noexcept;
    constexpr T operator=(T) noexcept;

    T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;

    void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The program is ill-formed if any of
- is_trivially_copyable_v<T>,
- is_copy_constructible_v<T>,
- is_move_constructible_v<T>,
- is_copy_assignable_v<T>,
- is_move_assignable_v<T>, or
- same_as<T, remove_cv_t<T>>,
is false.
[Note 1:
Type arguments that are not also statically initializable can be difficult to use.
— _end note_]
The specialization atomic<bool> is a standard-layout struct.
It has a trivial destructor.
[Note 2:
The representation of an atomic specialization need not have the same size and alignment requirement as its corresponding argument type.
— _end note_]
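The following non-normative sketch (not part of the standard text) illustrates a user-defined type that satisfies the requirements above (trivially copyable, copy/move constructible and assignable, cv-unqualified), so atomic<Point> uses the primary template; all names are illustrative only.
#include <atomic>
#include <type_traits>

struct Point { int x; int y; };
static_assert(std::is_trivially_copyable_v<Point>);

std::atomic<Point> p{};                 // value-initialized Point

void move_right() {
  Point expected = p.load();
  Point desired;
  do {
    desired = expected;
    desired.x += 1;                     // compute the new value from the old one
  } while (!p.compare_exchange_weak(expected, desired));
}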
32.5.8.2 Operations on atomic types [atomics.types.operations]
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
Constraints: is_default_constructible_v<T> is true.
Effects: Initializes the atomic object with the value of T().
constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1:
It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread.
This results in undefined behavior.
— _end note_]
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise.
[Note 2:
The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined.
— _end note_]
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise.
[Note 3:
The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type.
— _end note_]
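The following non-normative sketch illustrates querying lock-freedom both at compile time (is_always_lock_free) and at run time (is_lock_free); whether either reports true is implementation-defined, and the messages are illustrative only.
#include <atomic>
#include <cstdio>

std::atomic<long long> counter{0};

void report() {
  if constexpr (std::atomic<long long>::is_always_lock_free)
    std::puts("always lock-free on this implementation");
  else if (counter.is_lock_free())
    std::puts("this particular object happens to be lock-free");
  else
    std::puts("operations on this object use a lock");
}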
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired.
Memory is affected according to the value of order.
T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to store(desired).
T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.
operator T() const volatile noexcept;
constexpr operator T() const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return load();
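The following non-normative sketch illustrates a store with memory_order::release paired with a load using memory_order::acquire to publish an ordinary write from one thread to another; variable and function names are illustrative only.
#include <atomic>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
  data = 42;                                      // ordinary write
  ready.store(true, std::memory_order::release);  // publishes the write to data
}

void consumer() {
  while (!ready.load(std::memory_order::acquire)) {
    // spin until the store is observed
  }
  // The acquire load synchronizes with the release store, so data == 42 here.
  int observed = data;
  (void)observed;
}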
T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with desired.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically returns the value pointed to by this immediately before the effects.
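The following non-normative sketch uses exchange to build a minimal test-and-set spinlock; because exchange is a read-modify-write operation, exactly one thread observes the transition from false to true. The class name is illustrative only.
#include <atomic>

class spinlock {
  std::atomic<bool> locked{false};
public:
  void lock() {
    // Returns the previous value; loop until the previous value was false.
    while (locked.exchange(true, std::memory_order::acquire)) {
      // busy-wait until the current holder stores false
    }
  }
  void unlock() {
    locked.store(false, std::memory_order::release);
  }
};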
bool compare_exchange_weak(T& expected, T desired, memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired, memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired, memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Retrieves the value in expected.
It then atomically compares the value representation of the value pointed to by this for equality with that previously retrieved from expected, and if true, replaces the value pointed to by this with that in desired.
If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure.
When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value pointed to by this during the atomic comparison.
If the operation returns true, these operations are atomic read-modify-write operations ([intro.multithread]) on the memory pointed to by this.
Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
[Note 4:
For example, the effect of compare_exchange_strong on objects without padding bits ([basic.types.general]) is
if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
— _end note_]
[Example 1:
The expected use of the compare-and-exchange operations is as follows.
The compare-and-exchange operations will update expected when another iteration of the loop is needed.
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
— _end example_]
[Example 2:
Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work.
For example, list head insertion will act atomically and would not introduce a data race in the following code:
do {
  p->next = head;
} while (!head.compare_exchange_weak(p->next, p));
— _end example_]
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously.
That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 5:
This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines.
A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop.
When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms.
When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable.
— _end note_]
[Note 6:
Under cases where the memcpy and memcmp semantics of the compare-and-exchange operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate representations of the same value.
Notably, on implementations conforming to ISO/IEC 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==, and NaNs with the same payload will compare equal with memcmp but will not compare equal with operator==.
— _end note_]
[Note 7:
Because compare-and-exchange acts on an object's value representation, padding bits that never participate in the object's value representation are ignored.
As a consequence, the following code is guaranteed to avoid spurious failure:
struct padded { char clank = 0x42; unsigned biff = 0xC0DEFEFE; };
atomic<padded> pad = {};
bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
— _end note_]
[Note 8:
For a union with bits that participate in the value representation of some members but not others, compare-and-exchange might always fail.
This is because such padding bits have an indeterminate value when they do not participate in the value representation of the active member.
As a consequence, the following code is not guaranteed to ever succeed:
union pony {
  double celestia = 0.;
  short luna;
};
atomic<pony> princesses = {};
bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
— _end note_]
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
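The following non-normative sketch illustrates the wait/notify members: wait(0) blocks while the loaded value compares equal (by value representation) to 0, and notify_one unblocks a thread blocked in such a wait. The functions are assumed to run on different threads; names are illustrative only.
#include <atomic>

std::atomic<int> stage{0};

void waiter() {
  stage.wait(0);        // blocks while stage still compares equal to 0
  // stage now holds a value other than 0
}

void signaller() {
  stage.store(1);
  stage.notify_one();   // unblocks a thread blocked in stage.wait(0), if any
}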
32.5.8.3 Specializations for integers [atomics.types.int]
There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint>.
For each such type integral-type, the specialization atomic<_integral-type_> provides additional atomic operations appropriate to integral types.
[Note 1:
The specialization atomic<bool> uses the primary template ([atomics.types.generic]).
— _end note_]
namespace std {
  template<> struct atomic<integral-type> {
    using value_type = integral-type;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(integral-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type operator=(integral-type) volatile noexcept;
    constexpr integral-type operator=(integral-type) noexcept;
    integral-type load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr integral-type load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral-type() const volatile noexcept;
    constexpr operator integral-type() const noexcept;

    integral-type exchange(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type exchange(integral-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(integral-type&, integral-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(integral-type&, integral-type, memory_order = memory_order::seq_cst) noexcept;

    integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_add(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_sub(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_and(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_or(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_xor(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_max(integral-type, memory_order = memory_order::seq_cst) noexcept;
    integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr integral-type fetch_min(integral-type, memory_order = memory_order::seq_cst) noexcept;

    void store_add(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_sub(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_and(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_and(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_or(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_or(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_xor(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_xor(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_max(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(integral-type, memory_order = memory_order::seq_cst) noexcept;
    void store_min(integral-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(integral-type, memory_order = memory_order::seq_cst) noexcept;

    integral-type operator++(int) volatile noexcept;
    constexpr integral-type operator++(int) noexcept;
    integral-type operator--(int) volatile noexcept;
    constexpr integral-type operator--(int) noexcept;
    integral-type operator++() volatile noexcept;
    constexpr integral-type operator++() noexcept;
    integral-type operator--() volatile noexcept;
    constexpr integral-type operator--() noexcept;
    integral-type operator+=(integral-type) volatile noexcept;
    constexpr integral-type operator+=(integral-type) noexcept;
    integral-type operator-=(integral-type) volatile noexcept;
    constexpr integral-type operator-=(integral-type) noexcept;
    integral-type operator&=(integral-type) volatile noexcept;
    constexpr integral-type operator&=(integral-type) noexcept;
    integral-type operator|=(integral-type) volatile noexcept;
    constexpr integral-type operator|=(integral-type) noexcept;
    integral-type operator^=(integral-type) volatile noexcept;
    constexpr integral-type operator^=(integral-type) noexcept;

    void wait(integral-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(integral-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic integral specializations are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic computations.
The correspondence among key, operator, and computation is specified in Table 155.
Table 155 — Atomic arithmetic computations [tab:atomic.types.int.comp]
| key | Op | Computation | key | Op | Computation |
|---|---|---|---|---|---|
| add | + | addition | and | & | bitwise and |
| sub | - | subtraction | or | \| | bitwise inclusive or |
| max |  | maximum | xor | ^ | bitwise exclusive or |
| min |  | minimum |  |  |  |
_integral-type_ fetch_key(_integral-type_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr _integral-type_ fetch_key(_integral-type_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: Except for fetch_max and fetch_min, for signed integer types the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2:
There are no undefined results arising from the computation.
— _end note_]
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
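The following non-normative sketch illustrates the fetch_key operations described above: fetch_add returns the previous value, and for signed types the computation wraps (it is performed as if in the corresponding unsigned type), so no undefined behavior arises; fetch_max keeps the larger of the stored value and the operand. Variable names are illustrative only.
#include <atomic>
#include <climits>

std::atomic<int> n{INT_MAX};

void demo() {
  int before = n.fetch_add(1);   // before == INT_MAX; the stored value wraps to INT_MIN
  int prev   = n.fetch_max(7);   // stores max(INT_MIN, 7) == 7 and returns INT_MIN
  (void)before; (void)prev;
}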
void store_key(_integral-type_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(_integral-type_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic modify-write operations ([atomics.order]).
Remarks: Except for store_max and store_min, for signed integer types, the result is as if the value pointed to by this and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 3:
There are no undefined results arising from the computation.
— _end note_]
For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the value pointed to by this and the first parameter as the arguments.
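The following non-normative sketch illustrates a store_key member declared above: store_add applies the addition atomically as a modify-write operation but, unlike fetch_add, does not return the previous value, which may permit cheaper code when the old value is not needed. The names are illustrative only, and implementations may not yet provide these members.
#include <atomic>

std::atomic<unsigned> hits{0};

void record_hit() {
  hits.store_add(1, std::memory_order::relaxed);  // pure increment, no result read back
}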
_integral-type_ operator op=(_integral-type_ operand) volatile noexcept;
constexpr _integral-type_ operator op=(_integral-type_ operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
32.5.8.4 Specializations for floating-point types [atomics.types.float]
There are specializations of the atomicclass template for all cv-unqualified floating-point types.
For each such type floating-point-type, the specialization atomic<_floating-point-type_> provides additional atomic operations appropriate to floating-point types.
namespace std {
  template<> struct atomic<floating-point-type> {
    using value_type = floating-point-type;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point-type) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type operator=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator=(floating-point-type) noexcept;
    floating-point-type load(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type load(memory_order = memory_order::seq_cst) noexcept;
    operator floating-point-type() volatile noexcept;
    constexpr operator floating-point-type() noexcept;

    floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type exchange(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(floating-point-type&, floating-point-type, memory_order = memory_order::seq_cst) noexcept;

    floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_add(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_sub(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_max(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_max(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_min(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_min(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fminimum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fminimum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    floating-point-type fetch_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr floating-point-type fetch_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;

    void store_add(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_sub(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_max(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_min(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fmaximum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;
    void store_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_fminimum_num(floating-point-type, memory_order = memory_order::seq_cst) noexcept;

    floating-point-type operator+=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator+=(floating-point-type) noexcept;
    floating-point-type operator-=(floating-point-type) volatile noexcept;
    constexpr floating-point-type operator-=(floating-point-type) noexcept;

    void wait(floating-point-type, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(floating-point-type, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
The atomic floating-point specializations are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform arithmetic addition and subtraction computations.
The correspondence among key, operator, and computation is specified in Table 155, except for the keys max, min, fmaximum, fminimum, fmaximum_num, and fminimum_num, which are specified below.
_floating-point-type_ fetch_key(_floating-point-type_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr _floating-point-type_ fetch_key(_floating-point-type_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on _floating-point-type_ should conform to the std::numeric_limits<_floating-point-type_> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
- For fetch_fmaximum and fetch_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with the value pointed to by this and the first parameter as the arguments.
- For fetch_fmaximum_num and fetch_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments.
- For fetch_max and fetch_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments, except that:
- If both arguments are NaN, an unspecified NaN value replaces the value pointed to by this.
- If exactly one argument is a NaN, either the other argument or an unspecified NaN value replaces the value pointed to by this; it is unspecified which.
- If the arguments are differently signed zeros, which of these values replaces the value pointed to by this is unspecified.
Recommended practice: The implementation of fetch_max and fetch_min should treat negative zero as smaller than positive zero.
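The following non-normative sketch illustrates atomic accumulation into a double with fetch_add, following the remarks above (an unrepresentable result is unspecified rather than undefined, and the floating-point environment used may differ from the calling thread's). Names are illustrative only.
#include <atomic>

std::atomic<double> total{0.0};

void accumulate(double x) {
  total.fetch_add(x, std::memory_order::relaxed);  // atomic read-modify-write
}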
void store_key(_floating-point-type_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(_floating-point-type_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic modify-write operations ([atomics.order]).
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on _floating-point-type_ should conform to the numeric_limits<_floating-point-type_> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on _floating-point-type_ may be different than the calling thread's floating-point environment.
The arithmetic rules of floating-point atomic modify-write operations may be different from operations on floating-point types or atomic floating-point types.
[Note 1:
Tree reductions are permitted for atomic modify-write operations.
— _end note_]
- For store_fmaximum and store_fminimum, the maximum and minimum computation is performed as if by fmaximum and fminimum, respectively, with the value pointed to by this and the first parameter as the arguments.
- For store_fmaximum_num and store_fminimum_num, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments.
- For store_max and store_min, the maximum and minimum computation is performed as if by fmaximum_num and fminimum_num, respectively, with the value pointed to by this and the first parameter as the arguments, except that:
- If both arguments are NaN, an unspecified NaN value replaces the value pointed to by this.
- If exactly one argument is a NaN, either the other argument or an unspecified NaN value replaces the value pointed to by this; it is unspecified which.
- If the arguments are differently signed zeros, which of these values replaces the value pointed to by this is unspecified.
Recommended practice: The implementation of store_max and store_min should treat negative zero as smaller than positive zero.
_floating-point-type_ operator op=(_floating-point-type_ operand) volatile noexcept;
constexpr _floating-point-type_ operator op=(_floating-point-type_ operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
Remarks: If the result is not a representable value for its type ([expr.pre]) the result is unspecified, but the operations otherwise have no undefined behavior.
Atomic arithmetic operations on _floating-point-type_ should conform to the std::numeric_limits<_floating-point-type_> traits associated with the floating-point type ([limits.syn]).
The floating-point environment ([cfenv]) for atomic arithmetic operations on floating-point-type may be different than the calling thread's floating-point environment.
32.5.8.5 Partial specialization for pointers [atomics.types.pointer]
namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    constexpr T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    constexpr operator T*() const noexcept;

    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) noexcept;

    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;

    void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(T*, memory_order = memory_order::seq_cst) noexcept;
    void store_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(T*, memory_order = memory_order::seq_cst) noexcept;

    T* operator++(int) volatile noexcept;
    constexpr T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    constexpr T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    constexpr T* operator++() noexcept;
    T* operator--() volatile noexcept;
    constexpr T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    constexpr T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    constexpr T* operator-=(ptrdiff_t) noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
There is a partial specialization of the atomic class template for pointers.
Specializations of this partial specialization are standard-layout structs.
They each have a trivial destructor.
Descriptions are provided below only for members that differ from the primary template.
The following operations perform pointer arithmetic.
The correspondence among key, operator, and computation is specified in Table 156.
T* fetch_key(_see above_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_key(_see above_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 1:
Pointer arithmetic on void* or function pointers is ill-formed.
— _end note_]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
For fetch_max and fetch_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the object value and the first parameter as the arguments.
[Note 2:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]).
— _end note_]
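The following non-normative sketch illustrates fetch_add on an atomic pointer, handing out consecutive elements of an array to multiple threads; T must be a complete object type, as required above. The names are illustrative only, and callers are assumed to stop before the array is exhausted.
#include <atomic>

int buffer[1024];
std::atomic<int*> next{buffer};

int* claim_slot() {
  // Returns the previous pointer and atomically advances it by one element.
  return next.fetch_add(1);
}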
void store_key(_see above_ operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(_see above_ operand, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Mandates: T is a complete object type.
[Note 3:
Pointer arithmetic on void* or function pointers is ill-formed.
— _end note_]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand.
Memory is affected according to the value of order.
These operations are atomic modify-write operations ([atomics.order]).
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
For store_max and store_min, the maximum and minimum computation is performed as if by max and min algorithms ([alg.min.max]), respectively, with the value pointed to by this and the first parameter as the arguments.
[Note 4:
If the pointers point to different complete objects (or subobjects thereof), the < operator does not establish a strict weak ordering (Table 29, [expr.rel]).
— _end note_]
T* operator op=(ptrdiff_t operand) volatile noexcept;
constexpr T* operator op=(ptrdiff_t operand) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_key(operand) op operand;
32.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1);
value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1);
value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_add(1) + 1;
value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return fetch_sub(1) - 1;
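The following non-normative sketch illustrates the increment and decrement operators defined above in terms of fetch_add and fetch_sub; names are illustrative only.
#include <atomic>

std::atomic<int> refs{1};

void demo() {
  int after  = ++refs;     // equivalent to refs.fetch_add(1) + 1
  int before = refs--;     // equivalent to refs.fetch_sub(1)
  (void)after; (void)before;
}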
32.5.8.7 Partial specializations for smart pointers [util.smartptr.atomic]
32.5.8.7.1 General [util.smartptr.atomic.general]
The library provides partial specializations of the atomic template for shared-ownership smart pointers ([util.sharedptr]).
[Note 1:
The partial specializations are declared in header <memory>.
— _end note_]
The behavior of all operations is as specified in [atomics.types.generic], unless specified otherwise.
The template parameter T of these partial specializations may be an incomplete type.
All changes to an atomic smart pointer in [util.smartptr.atomic], and all associated use_count increments, are guaranteed to be performed atomically.
Associated use_count decrements are sequenced after the atomic operation, but are not required to be part of it.
Any associated deletion and deallocation are sequenced after the atomic update step and are not part of the atomic operation.
[Note 2:
If the atomic operation uses locks, locks acquired by the implementation will be held when any use_count adjustments are performed, and will not be held when any destruction or deallocation resulting from this is performed.
— _end note_]
[Example 1:
template<typename T> class atomic_list {
  struct node {
    T t;
    shared_ptr<node> next;
  };
  atomic<shared_ptr<node>> head;

public:
  shared_ptr<node> find(T t) const {
    auto p = head.load();
    while (p && p->t != t)
      p = p->next;
    return p;
  }

  void push_front(T t) {
    auto p = make_shared<node>();
    p->t = t;
    p->next = head;
    while (!head.compare_exchange_weak(p->next, p)) {}
  }
};
— _end example_]
32.5.8.7.2 Partial specialization for shared_ptr [util.smartptr.atomic.shared]
namespace std {
  template<class T> struct atomic<shared_ptr<T>> {
    using value_type = shared_ptr<T>;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(nullptr_t) noexcept : atomic() { }
    constexpr atomic(shared_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;

    constexpr shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    constexpr operator shared_ptr<T>() const noexcept;

    constexpr void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    constexpr void operator=(shared_ptr<T> desired) noexcept;
    constexpr void operator=(nullptr_t) noexcept;

    constexpr shared_ptr<T> exchange(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;

    constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired, memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired, memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;

    constexpr void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() noexcept;
    constexpr void notify_all() noexcept;

  private:
    shared_ptr<T> p;
  };
}
32.5.8.7.3 Partial specialization for weak_ptr [util.smartptr.atomic.weak]
namespace std {
  template<class T> struct atomic<weak_ptr<T>> {
    using value_type = weak_ptr<T>;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(weak_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;

    constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    constexpr operator weak_ptr<T>() const noexcept;

    constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    constexpr void operator=(weak_ptr<T> desired) noexcept;

    constexpr weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;

    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order success, memory_order failure) noexcept;
    constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;

    constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() noexcept;
    constexpr void notify_all() noexcept;

  private:
    weak_ptr<T> p;
  };
}
constexpr atomic() noexcept;
Effects: Value-initializes p.
constexpr atomic(weak_ptr<T> desired) noexcept;
Effects: Initializes the object with the value desired.
[Note 1:
It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread.
This results in undefined behavior.
— _end note_]
constexpr void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Preconditions: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst.
Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired).
Memory is affected according to the value of order.
constexpr void operator=(weak_ptr<T> desired) noexcept;
Effects: Equivalent to store(desired).
constexpr weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns p.
constexpr operator weak_ptr<T>() const noexcept;
Effects: Equivalent to: return load();
constexpr weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Effects: Atomically replaces p with desired as if by p.swap(desired).
Memory is affected according to the value of order.
This is an atomic read-modify-write operation ([intro.races]).
Returns: Atomically returns the value of p immediately before the effects.
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order success, memory_order failure) noexcept;
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order success, memory_order failure) noexcept;
Preconditions: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
Returns: true if p was equivalent to expected, false otherwise.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty.
The weak form may fail spuriously.
If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation ([intro.multithread]) on the memory pointed to by this.
Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update.
The use_count update corresponding to the write to expected is part of the atomic operation.
The write to expected itself is not required to be part of the atomic operation.
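The following non-normative sketch illustrates compare_exchange on atomic<weak_ptr<T>>: the exchange succeeds only if the stored weak_ptr and the expected value are equivalent (same stored pointer and either shared ownership or both empty), and on failure the expected value is updated. All names are illustrative only.
#include <atomic>
#include <memory>

std::atomic<std::weak_ptr<int>> cache;

bool refresh(std::weak_ptr<int> seen, const std::shared_ptr<int>& fresh) {
  // On failure, 'seen' receives the value read from 'cache'.
  return cache.compare_exchange_strong(seen, std::weak_ptr<int>(fresh));
}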
constexpr bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to: return compare_exchange_weak(expected, desired, order, fail_order); where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
Effects: Equivalent to: return compare_exchange_strong(expected, desired, order, fail_order); where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
constexpr void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares it to old.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: Two weak_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty.
This function is an atomic waiting operation ([atomics.wait]).
constexpr void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked ([atomics.wait]) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation ([atomics.wait]).
constexpr void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked ([atomics.wait]) by this call.
Remarks: This function is an atomic notifying operation ([atomics.wait]).