coder.GpuCodeConfig

coder.gpuConfig

Configuration parameters for CUDA code generation from MATLAB code by using GPU Coder

Description

The coder.GpuCodeConfig or coder.gpuConfig object contains the configuration parameters that codegen uses for generating CUDA® MEX, a static library, a dynamically linked library, or an executable program with GPU Coder™. Pass the object to the codegen function by using the -config option.

Creation

Syntax

Description

cfg = coder.gpuConfig(build_type) creates a code generation configuration object for the specified build type, which can be CUDA MEX, a static library, a dynamically linked library, or an executable program. If the Embedded Coder® product is installed, it creates a coder.EmbeddedCodeConfig object for static library, dynamic library, or executable build types.


cfg = coder.gpuConfig(build_type,'ecoder',false) creates a code generation configuration object to generate CUDA 'lib', 'dll', or 'exe' output even if the Embedded Coder product is installed.

cfg = coder.gpuConfig(build_type,'ecoder',true) creates a coder.EmbeddedCodeConfig configuration object even if the Embedded Coder product is not installed. However, code generation using a coder.EmbeddedCodeConfig object requires an Embedded Coder license.
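For illustration, this sketch, which assumes GPU Coder is installed, contrasts the three creation syntaxes. The variable names are arbitrary.

cfgMex = coder.gpuConfig('mex');                  % configuration for a CUDA MEX build
cfgLib = coder.gpuConfig('lib','ecoder',false);   % coder.CodeConfig object even if Embedded Coder is installed
cfgExe = coder.gpuConfig('exe','ecoder',true);    % coder.EmbeddedCodeConfig object; code generation with it requires an Embedded Coder license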

Input Arguments


Output to build from generated CUDA code, specified as one of the values in this table.

Value Description
'MEX' CUDA MEX
'LIB' Static library
'DLL' Dynamically linked library
'EXE' Executable program

Properties


coder.GpuCodeConfig contains only the GPU-specific configuration parameters of the code configuration object. To see the other properties of the code configuration object, see coder.CodeConfig and coder.EmbeddedCodeConfig.
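To see which GPU-specific parameters are available in your installation, you can display the GpuConfig property of a configuration object, for example:

cfg = coder.gpuConfig('lib');
disp(cfg.GpuConfig)   % lists Enabled, MallocMode, EnableCUBLAS, and the other GPU-specific parameters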

Control GPU code generation, specified as true or false. For more information, see Generate GPU Code.

Example: cfg.GpuConfig.Enabled = true

Memory allocation (malloc) mode to be used in the generated CUDA code, specified as 'discrete' or 'unified'. For more information, see Malloc mode.

Example: cfg.GpuConfig.MallocMode = 'discrete'
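As a sketch of how this parameter is typically set: discrete memory allocation suits NVIDIA GPUs in a host development computer, while unified memory is intended for NVIDIA embedded platforms (see the note on unified memory in the Version History below).

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.MallocMode = 'discrete';    % separate host and device allocations
% cfg.GpuConfig.MallocMode = 'unified';   % unified memory (cudaMallocManaged), intended for embedded targets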

Custom kernel name prefix, specified as a character vector. For more information, see Kernel name prefix.

Example: cfg.GpuConfig.KernelNamePrefix = 'myKernel'

Replacement of math function calls with NVIDIA® cuBLAS library calls, specified as true or false. For more information, see Enable cuBLAS.

Example: cfg.GpuConfig.EnableCUBLAS = true

Replacement of math function calls with NVIDIA cuSOLVER library calls, specified as true or false. For more information, see Enable cuSOLVER.

Example: cfg.GpuConfig.EnableCUSOLVER = true

Replacement of fft function calls with NVIDIA cuFFT library calls, specified as true or false. For more information, see Enable cuFFT.

Example: cfg.GpuConfig.EnableCUFFT = true
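The three library-replacement switches are independent. A minimal sketch that turns all of them on for a MEX build might look like this:

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.EnableCUBLAS = true;     % replace matrix-math calls with cuBLAS calls
cfg.GpuConfig.EnableCUSOLVER = true;   % replace linear-algebra solver calls with cuSOLVER calls
cfg.GpuConfig.EnableCUFFT = true;      % replace fft calls with cuFFT calls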

Add benchmarking code to the generated CUDA code, specified as true or false. For more information, see Benchmarking.

Example: cfg.GpuConfig.Benchmarking = true

Check for errors in CUDA API calls and kernel launches, specified as true or false. For more information, see Safe build.

Example: cfg.GpuConfig.SafeBuild = true

Minimum compute capability of an NVIDIA GPU device for which CUDA code is generated, specified as 'Auto' or as a character vector containing the compute capability value, for example, '6.1'.

For more information, see Minimum compute capability.

Example: cfg.GpuConfig.ComputeCapability = '6.1'

Name of the NVIDIA virtual GPU architecture for which the CUDA input files must be compiled, specified as a character vector. For more information, see Custom compute capability.

Example: cfg.GpuConfig.CustomComputeCapability = '-arch=compute_50'
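ComputeCapability and CustomComputeCapability address the same question at different levels of control. A hedged sketch (the architecture flag is only an example value) follows:

cfg = coder.gpuConfig('dll');
cfg.GpuConfig.ComputeCapability = '6.1';    % require at least compute capability 6.1
% Alternatively, pass an explicit virtual architecture flag to the GPU compiler:
% cfg.GpuConfig.CustomComputeCapability = '-arch=compute_50';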

Additional flags passed to the GPU compiler, specified as a character vector. For more information, see Compiler flags.

Example: cfg.GpuConfig.CompilerFlags = '--fmad=false'

Stack limit in bytes per GPU thread, specified as an integer. For more information, see Stack limit.

Example: cfg.GpuConfig.StackLimitPerThread = 1024

Threshold in bytes above which private variables are allocated on the heap instead of the stack, specified as an integer. For more information, see Malloc threshold.

Example: cfg.GpuConfig.MallocThreshold = 256

Maximum number of blocks created during a kernel launch, specified as an integer. For more information, see Maximum blocks per kernel.

Example: cfg.GpuConfig.MaximumBlocksPerKernel = 1024
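The stack, heap, and block-count limits are plain integer properties. The values in this sketch are illustrative only, not recommendations:

cfg = coder.gpuConfig('lib');
cfg.GpuConfig.StackLimitPerThread = 1024;      % bytes of stack per GPU thread
cfg.GpuConfig.MallocThreshold = 256;           % private variables larger than this go on the heap
cfg.GpuConfig.MaximumBlocksPerKernel = 1024;   % cap on blocks created per kernel launch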

Whether to use the GPU memory manager, specified as true or false. For more information, see Enable GPU memory manager.

Example: cfg.GpuConfig.EnableMemoryManager = true

CUDA device selection, specified as the numeric value of the device ID. For more information, see GPU device ID.

Example: cfg.GpuConfig.SelectCudaDevice = <DeviceID>
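If Parallel Computing Toolbox is available, gpuDeviceCount and gpuDevice can help identify the devices in a multi-GPU machine before you pick an ID. The device ID used below is only a placeholder:

gpuDeviceCount    % number of CUDA devices visible to MATLAB
gpuDevice         % properties of the currently selected device

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.SelectCudaDevice = 1;   % placeholder; use the ID of the GPU you want to target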

Examples


Generate a CUDA MEX function from a MATLAB® function that is suitable for GPU code generation. Also, enable a code generation report.

Write a MATLAB function, VecAdd, that performs vector addition of the inputs A and B.

function [C] = VecAdd(A,B) %#codegen
C = coder.nullcopy(zeros(size(A)));
coder.gpu.kernelfun();
C = A + B;
end

To generate a MEX function, create a code generation configuration object.

cfg = coder.gpuConfig('mex');

Enable the cuBLAS library replacement and the code generation report.

cfg.GpuConfig.EnableCUBLAS = true;
cfg.GenerateReport = true;

Generate a MEX function in the current folder, specifying the configuration object by using the -config option.

% Generate a MEX function and code generation report
codegen -config cfg -args {zeros(512,512,'double'),zeros(512,512,'double')} VecAdd
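By default, codegen names the generated MEX function VecAdd_mex. Assuming the build succeeds, you can call it like the original MATLAB function and compare the results:

A = rand(512,512);
B = rand(512,512);
C = VecAdd_mex(A,B);            % run the generated CUDA MEX
max(abs(C(:) - (A(:) + B(:))))  % expected to be 0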

Limitations

Version History

Introduced in R2017b


In previous releases, the default value of the EnableMemoryManager property was false. Now, the default value has changed to true. Therefore, when you generate CUDA code, the GPU memory manager is enabled by default.

Because of this change, once you generate a CUDA MEX with the default configuration setting, you cannot run this MEX on a different GPU. If you want to run the generated MEX on a different GPU, set the EnableMemoryManager property to false before you generate code.
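For example, to build a MEX function that you intend to run on a GPU other than the one used for code generation, a sketch of this setting, reusing the VecAdd function from the example above, is:

cfg = coder.gpuConfig('mex');
cfg.GpuConfig.EnableMemoryManager = false;   % do not tie the generated MEX to the GPU used for code generation
codegen -config cfg -args {zeros(512,512,'double'),zeros(512,512,'double')} VecAdd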

In previous releases, the GPU memory manager provided code configuration parameters to manage the allocation and deallocation of memory blocks in the GPU memory pools. These properties have now been removed.


The default value of the ComputeCapability property is now 'Auto' instead of '3.5'. When compute capability is set to 'Auto', the code generator detects and uses the compute capability of the GPU device that you have selected for GPU code generation. If no GPU device is available or if the code generator is unable to detect a GPU device, the code generator uses a compute capability value of '5.0'.

For Simulink® Coder™, the default compute capability value is now '5.0' instead of '3.5'. To change this default value, modify the Compute capability parameter in the Configuration Parameters dialog box. For more information, see Compute capability (Simulink Coder).

In a future release, the unified memory allocation (cudaMallocManaged) mode will be removed when targeting NVIDIA GPU devices on the host development computer. You can continue to use unified memory allocation mode when targeting NVIDIA embedded platforms.

When generating CUDA code for the host from MATLAB, set the MallocMode property of the coder.gpuConfig code configuration object to 'discrete'.