gpucoder.atomicSub
Atomically subtract a specified value from a variable in global or shared memory
Since R2021b
Syntax

[A,oldA] = gpucoder.atomicSub(A,B)

Description

[A,oldA] = gpucoder.atomicSub(A,B) subtracts B from the value of A in global or shared memory and writes the result back into A. The operation is atomic in the sense that the entire read-modify-write operation is guaranteed to be performed without interference from other threads. The order of the input and output arguments must match the syntax provided.
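The second output is useful when a thread needs the value that A held immediately before its own subtraction. The following sketch is illustrative only and is not part of this reference page: it mirrors the element-wise loop pattern used in the example below, but also records the pre-subtraction values. It assumes a and b are int32, and the function and variable names are hypothetical.

```matlab
% Illustrative sketch: subtract b from every element of a and keep the
% value each element held before the subtraction (second output).
% Assumes a and b are int32; names are hypothetical.
function [a, olds] = mySubWithOld(a, b)
coder.gpu.kernelfun;
olds = coder.nullcopy(a);   % preallocate output without initialization
for i = 1:numel(a)
    [a(i), olds(i)] = gpucoder.atomicSub(a(i), b);
end
end
```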
Examples
Perform a simple atomic subtraction operation by using the gpucoder.atomicSub function and generate CUDA® code that calls the corresponding CUDA atomicSub() APIs.

In one file, write an entry-point function myAtomicSub that accepts matrix inputs a and b.
```matlab
function a = myAtomicSub(a,b)
coder.gpu.kernelfun;
for i = 1:numel(a)
    [a(i),~] = gpucoder.atomicSub(a(i),b);
end
end
```
To create types for a matrix and a scalar of type int32 for use in code generation, use the coder.newtype function.

```matlab
A = coder.newtype('int32', [1 30], [0 1]);
B = coder.newtype('int32', [1 1], [0 0]);
inputArgs = {A,B};
```
To generate a CUDA library, use the codegen function.

```matlab
cfg = coder.gpuConfig('lib');
cfg.GenerateReport = true;
codegen -config cfg -args inputArgs myAtomicSub -d myAtomicSub
```
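Before generating code, you can optionally sanity-check the entry-point function in plain MATLAB. The snippet below is a hypothetical check with arbitrary values; it is not part of the shipped example.

```matlab
% Quick MATLAB-level check with arbitrary int32 data (illustrative only).
a = int32(10:10:50);
b = int32(3);
out = myAtomicSub(a, b)   % expected: 7 17 27 37 47
```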
The generated CUDA code contains the myAtomicSub_kernel1 kernel with calls to the atomicSub() CUDA APIs.
```cpp
//
// File: myAtomicSub.cu
//
...
static __global__ __launch_bounds__(1024, 1) void myAtomicSub_kernel1(
    const int32_T b, const int32_T i, int32_T a_data[])
{
  uint64_T loopEnd;
  uint64_T threadId;
  ...
  loopEnd = static_cast<uint64_T>(i - 1);
  for (uint64_T idx{threadId}; idx <= loopEnd; idx += threadStride) {
    int32_T b_i;
    b_i = static_cast<int32_T>(idx);
    atomicSub(&a_data[b_i], b);
  }
}
...
void myAtomicSub(int32_T a_data[], int32_T a_size[2], int32_T b)
{
  dim3 block;
  dim3 grid;
  ...
    cudaMemcpy(gpu_a_data, a_data, a_size[1] * sizeof(int32_T),
               cudaMemcpyHostToDevice);
    myAtomicSub_kernel1<<<grid, block>>>(b, i, gpu_a_data);
    cudaMemcpy(a_data, gpu_a_data, a_size[1] * sizeof(int32_T),
               cudaMemcpyDeviceToHost);
    ...
  }
  ...
}
```
Input Arguments
Operands, specified as scalars, vectors, matrices, or multidimensional arrays. Inputs A and B must satisfy the following requirements:

- Have the same data type.
- Have the same size or have sizes that are compatible. For example, A is an M-by-N matrix and B is a scalar or 1-by-N row vector, as in the sketch following the data types below.

Data Types: int32 | uint32
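The compatible-size rule can also be exercised with array operands passed directly to gpucoder.atomicSub. The entry-point function below is an illustrative sketch only, not part of this reference page; it assumes the code generator accepts a direct call with an M-by-N matrix and a 1-by-N row vector of the same integer type, and the function and variable names are hypothetical.

```matlab
% Illustrative sketch: A is M-by-N and B is 1-by-N, which are compatible
% sizes, so B is applied across every row of A. Names are hypothetical.
function A = subRowwise(A, B)
coder.gpu.kernelfun;
[A, ~] = gpucoder.atomicSub(A, B);
end
```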
Version History
Introduced in R2021b
See Also
Functions
- gpucoder.atomicAdd | gpucoder.atomicCAS | gpucoder.atomicDec | gpucoder.atomicExch | gpucoder.atomicInc | gpucoder.atomicMax | gpucoder.atomicMin