gpucoder.atomicXor - Atomically perform bit-wise XOR between a specified value and a variable in global or shared memory - MATLAB ([original](https://in.mathworks.com/help/gpucoder/ref/gpucoder.atomicxor.html))
Atomically perform bit-wise XOR between a specified value and a variable in global or shared memory
Since R2021b
Syntax
[A,oldA] = gpucoder.atomicXor(A,B)
Description
[A,oldA] = gpucoder.atomicXor(A,B) performs a bit-wise XOR between B and the value of A in global or shared memory and writes the result back into A. The original value of A is returned in oldA. The operation is atomic in the sense that the entire read-modify-write operation is guaranteed to be performed without interference from other threads. The order of the input and output arguments must match the syntax provided.
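Conceptually, each updated element behaves like the following minimal, non-atomic MATLAB sketch (illustrative only; it assumes oldA holds the value of A before the update, mirroring the value returned by the underlying CUDA atomicXor() API):

```matlab
% Non-atomic sketch of the read-modify-write that gpucoder.atomicXor
% performs; on the GPU these steps execute as a single atomic operation.
function [A, oldA] = atomicXorSketch(A, B)
oldA = A;          % read the current value of A
A = bitxor(A, B);  % modify: bit-wise XOR with B, then write back into A
end
```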
Examples
Perform a simple atomic XOR operation by using the gpucoder.atomicXor function and generate CUDA® code that calls the appropriate CUDA atomicXor() APIs.
In one file, write an entry-point function myAtomicXor that accepts matrix inputs a and b.
function a = myAtomicXor(a,b)
coder.gpu.kernelfun;
for i = 1:numel(a)
    [a(i),~] = gpucoder.atomicXor(a(i), b);
end
end
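For a quick sanity check, the expected output of myAtomicXor can be computed with the built-in bitxor function. A minimal sketch with illustrative example values:

```matlab
% Reference result: each element of a is XORed with the scalar b.
a = uint32([5 12 7 255]);   % example data (illustrative)
b = uint32(9);
expected = bitxor(a, b);    % what myAtomicXor is expected to produce for a
```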
To create types for the input arguments for use in code generation, use the coder.newtype function.
A = coder.newtype('uint32', [1 30], [0 1]);
B = coder.newtype('uint32', [1 1], [0 0]);
inputArgs = {A,B};
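Here A is declared as a variable-size 1-by-:30 uint32 row vector and B as a uint32 scalar. As an alternative sketch, example values of the same class can be passed to codegen instead of coder types; the sizes then become fixed, and the names below are illustrative:

```matlab
% Alternative to coder.newtype: use example values of the desired class.
% With these arguments, the generated code accepts a fixed 1-by-30 input.
exampleA = zeros(1, 30, 'uint32');
exampleB = uint32(0);
inputArgs = {exampleA, exampleB};
```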
To generate a CUDA library, use the codegen function.
cfg = coder.gpuConfig('lib');
cfg.GenerateReport = true;
codegen -config cfg -args inputArgs myAtomicXor -d myAtomicXor
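To run the generated CUDA code from MATLAB and compare it against bitxor, a MEX target can be generated instead of a static library. A minimal sketch, assuming a CUDA-capable GPU and a supported compiler are available (the output name myAtomicXor_mex is illustrative):

```matlab
% Generate a CUDA MEX function and verify it against bitxor.
cfgMex = coder.gpuConfig('mex');
codegen -config cfgMex -args inputArgs myAtomicXor -o myAtomicXor_mex

a = uint32(randi(1000, 1, 30));
b = uint32(21);
isequal(myAtomicXor_mex(a, b), bitxor(a, b))   % expected: true
```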
The generated CUDA code contains the myAtomicXor_kernel1 kernel with calls to the atomicXor() CUDA API.
//
// File: myAtomicXor.cu
//
...
static __global__ __launch_bounds__(1024, 1) void myAtomicXor_kernel1(
    const uint32_T b, const int32_T i, uint32_T a_data[])
{
  uint64_T loopEnd;
  uint64_T threadId;
  ...
  for (uint64_T idx{threadId}; idx <= loopEnd; idx += threadStride) {
    int32_T b_i;
    b_i = static_cast<int32_T>(idx);
    atomicXor(&a_data[b_i], b);
  }
}
...
void myAtomicXor(uint32_T a_data[], int32_T a_size[2], uint32_T b)
{
  dim3 block;
  dim3 grid;
  ...
  cudaMemcpy(gpu_a_data, a_data, a_size[1] * sizeof(uint32_T),
             cudaMemcpyHostToDevice);
  myAtomicXor_kernel1<<<grid, block>>>(b, i, gpu_a_data);
  cudaMemcpy(a_data, gpu_a_data, a_size[1] * sizeof(uint32_T),
             cudaMemcpyDeviceToHost);
  ...
}
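The atomic operation matters when several GPU threads update the same memory location. The entry-point sketch below is illustrative (myAtomicXorReduce is not part of the shipped example): it XORs every element of vals into a single accumulator element, so concurrent iterations contend for the same address.

```matlab
% Every parallel iteration updates acc(1), so the read-modify-write
% must be atomic to avoid lost updates (illustrative sketch).
function acc = myAtomicXorReduce(acc, vals)
coder.gpu.kernelfun;
for i = 1:numel(vals)
    [acc(1), ~] = gpucoder.atomicXor(acc(1), vals(i));
end
end
```

Without the atomic guarantee, two threads could read the same old value of acc(1) and one of the updates would be lost.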
Input Arguments
Operands, specified as scalars, vectors, matrices, or multidimensional arrays. Inputs A and B must satisfy the following requirements:
- Have the same data type.
- Have the same size, or have sizes that are compatible. For example, A is an M-by-N matrix and B is a scalar or 1-by-N row vector (see the sketch below).
Data Types: int32 | uint32 | uint64
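As a sketch of compatible sizes (the function and variable names are illustrative), B can be a 1-by-N row vector that is applied to every row of an M-by-N matrix A:

```matlab
% A is M-by-N and B is 1-by-N; the row vector B is XORed into each row
% of A (illustrative sketch of compatible, implicitly expanded sizes).
function A = xorRows(A, B)
coder.gpu.kernelfun;
[A, ~] = gpucoder.atomicXor(A, B);
end
```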
Version History
Introduced in R2021b