NVIDIA 2D Image and Signal Processing Performance Primitives (NPP)
What is NPP?
NVIDIA NPP is a library of functions for performing CUDA-accelerated 2D image and signal processing. The primary set of functionality in the library focuses on image processing and is widely applicable for developers in these areas. NPP will evolve over time to encompass more of the compute-heavy tasks in a variety of problem domains. The NPP library is written to maximize flexibility while maintaining high performance.
NPP can be used in one of two ways:
- A stand-alone library for adding GPU acceleration to an application with minimal effort. Using this route allows developers to add GPU acceleration to their applications in a matter of hours.
- A cooperative library for interoperating with a developer's GPU code efficiently.
Either route allows developers to harness the massive compute resources of NVIDIA GPUs while simultaneously reducing development time. After reading this Main Page, it is recommended that you read the General API Conventions page below, and then either the Image-Processing Specific API Conventions page or the Signal-Processing Specific API Conventions page, depending on the kind of processing you expect to do. Finally, the Modules tab at the top of this page lists the kinds of functions available for the NPP operations that support your needs.
Documentation
- General API Conventions
- Image-Processing Specific API Conventions
- Signal-Processing Specific API Conventions
Files
The NPP API is defined in the following files:
Header Files
All of these header files are located in the CUDA Toolkit's include directory:
/include/
Library Files
NPP's functionality is split into three distinct library groups:
- A core library (NPPC) containing basic functionality from the npp.h header file as well as common functionality used by the other two libraries.
- The image processing library NPPI. Any function from the nppi.h header file or the various header files named "nppi_xxx.h" is bundled into the NPPI library.
- The signal processing library NPPS. Any function from the npps.h header file or the various header files named "npps_xxx.h" is bundled into the NPPS library.
On the Windows platform the NPP stub libraries are found in the CUDA Toolkit's library directory:
/lib/nppc.lib
/lib/nppial.lib
/lib/nppicc.lib
/lib/nppidei.lib
/lib/nppif.lib
/lib/nppig.lib
/lib/nppim.lib
/lib/nppist.lib
/lib/nppisu.lib
/lib/nppitc.lib
/lib/npps.lib
The matching DLLs are located in the CUDA Toolkit's binary directory. For example:
- /bin/nppial64_110_.dll // Dynamic image-processing library for 64-bit Windows.
On Linux and Mac platforms, the dynamic libraries are located in the lib directory, and their names include major and minor version numbers along with build numbers:
- /lib/libnppc.so.11.0. // NPP dynamic core library for Linux
- /lib/libnpps.11.0.dylib // NPP dynamic signal processing library for Mac
Library Organization
Note: The static NPP libraries depend on a common thread abstraction layer library called cuLIBOS (libculibos.a) that is now distributed as part of the toolkit. Consequently, cuLIBOS must be provided to the linker when linking against the static libraries. To minimize library loading and CUDA runtime startup times, it is recommended to use the static libraries whenever possible.

To improve loading and runtime performance when using dynamic libraries, NPP provides a full set of NPPI sub-libraries. Linking only to the sub-libraries that contain the functions your application uses can significantly improve load time and runtime startup performance. Some NPPI functions make calls to other NPPI and/or NPPS functions internally, so you may need to link to a few extra libraries depending on which functions your application calls. The NPPI sub-libraries are split into sections corresponding to the way the NPPI header files are split. The list of sub-libraries is as follows:
- NPPC, the NPP core library, which MUST be linked when building any NPP application; its functions are listed in nppCore.h,
- NPPIAL, arithmetic and logical operation functions in nppi_arithmetic_and_logical_operations.h,
- NPPICC, color conversion and sampling functions in nppi_color_conversion.h,
- NPPIDEI, data exchange and initialization functions in nppi_data_exchange_and_initialization.h,
- NPPIF, filtering and computer vision functions in nppi_filtering_functions.h,
- NPPIG, geometry transformation functions found in nppi_geometry_transforms.h,
- NPPIM, morphological operation functions found in nppi_morphological_operations.h,
- NPPIST, statistics and linear transform functions in nppi_statistics_functions.h and nppi_linear_transforms.h,
- NPPISU, memory support functions in nppi_support_functions.h,
- NPPITC, threshold and compare operation functions in nppi_threshold_and_compare_operations.h,
For example, on Linux, to compile a small color conversion application foo using NPP against the dynamic libraries, the following command can be used:
* nvcc foo.c -lnppc -lnppicc -o foo
Whereas to compile against the static NPP libraries, the following command must be used:
* nvcc foo.c -lnppc_static -lnppicc_static -lculibos -o foo
It is also possible to use the native host C++ compiler. Depending on the host operating system, additional libraries such as pthread or dl might be needed on the link line. On Linux, the following command is suggested:
* g++ foo.c -lnppc_static -lnppicc_static -lculibos -lcudart_static -lpthread -ldl -I /include -L /lib64 -o foo
NPP is a stateless API. As of NPP 6.5, the ONLY state that NPP remembers between function calls is the current stream ID (i.e., the stream ID set by the most recent nppSetStream() call) and a few bits of device-specific information about that stream. The default stream ID is 0. If an application intends to use NPP with multiple streams, it is the responsibility of the application either to use the fully stateless, application managed stream context interface described below or to call nppSetStream() whenever it wishes to change stream IDs. Any NPP function call that does not take an application managed stream context will use the stream set by the most recent call to nppSetStream(); nppGetStream() and other "nppGet" style calls that do not take an application managed stream context parameter will also always use that stream.
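The legacy stream-switching pattern described above might look like the following minimal sketch (requires the CUDA Toolkit and NPP headers; error checking omitted for brevity):

```cpp
#include <cuda_runtime.h>
#include <npp.h>

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    nppSetStream(stream);   // all subsequent non-_Ctx NPP calls run on this stream
    // ... NPP calls here execute on 'stream' ...

    nppSetStream(0);        // revert to the default stream
    cudaStreamDestroy(stream);
    return 0;
}
```

Note that nppSetStream() itself may synchronize while it gathers device information about the new stream, which is one reason the application managed stream context interface below is preferred for frequent stream switching.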
All NPP functions should be thread safe.
Note: New in NPP 10.1 is support for the fp16 (__half) data type in some NPP image processing functions on GPU architectures Volta and beyond. NPP image functions that support pixels of the __half data type have function names containing 16f, and pointers to pixels of that data type must be passed to NPP as the NPP data type Npp16f. Here is an example of how to pass image pointers of type __half to an NPP 16f function; it should work on all compilers, including Armv7.
* nppiAdd_16f_C3R(reinterpret_cast<const Npp16f *>((const void *)(pSrc1Data)), nSrc1Pitch,
* reinterpret_cast<const Npp16f *>((const void *)(pSrc2Data)), nSrc2Pitch,
* reinterpret_cast<Npp16f *>((void *)(pDstData)), nDstPitch,
* oDstROI);
Application Managed Stream Context
Note: Also new in NPP 10.1 is support for application managed stream contexts. Application managed stream contexts make NPP truly stateless internally, allowing rapid, zero-overhead stream context switching. While it is recommended that all new NPP application code use application managed stream contexts, existing application code can continue to use nppSetStream() and nppGetStream() to manage streams (also with no overhead now), but over time NPP will likely deprecate the older, non-application-managed stream context API. The new and old stream management techniques can be intermixed in an application, but any NPP call using the old API will use the stream set by the most recent call to nppSetStream(), and nppGetStream() calls will also return that stream ID. All NPP functions whose names end in _Ctx expect an application managed stream context to be passed as a parameter. The new NppStreamContext application managed stream context structure is defined in nppdefs.h and should be initialized by the application with the CUDA device ID and the values associated with a particular stream. Applications can use multiple fixed stream contexts or change the values in a particular stream context on the fly whenever a different stream is to be used.
Note: NPP 10.2 and beyond contain an additional element in the NppStreamContext structure named nStreamFlags which MUST also be initialized by the application. Failure to do so could unnecessarily reduce NPP performance in some functions.
Note: NPP does not support non-blocking streams on Windows for devices working in WDDM mode.
Note that some of the "GetBufferSize" style functions now take application managed stream contexts and should be called with the same stream context that the associated application managed stream context NPP function will use.
Note that NPP does minimal checking of the parameters in an application managed stream context structure so it is up to the application to assure that they are correct and valid when passed to NPP functions.
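Initializing an NppStreamContext might look like the following sketch. The field names follow the NppStreamContext definition in nppdefs.h; the helper function name is illustrative only, and error checking is omitted for brevity:

```cpp
#include <cuda_runtime.h>
#include <npp.h>

// Illustrative helper (not part of NPP): fill an NppStreamContext
// for a user-created stream, querying the current device's attributes.
NppStreamContext makeStreamContext(cudaStream_t hStream)
{
    NppStreamContext ctx = {};
    ctx.hStream = hStream;
    cudaGetDevice(&ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nMultiProcessorCount,
                           cudaDevAttrMultiProcessorCount, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nMaxThreadsPerMultiProcessor,
                           cudaDevAttrMaxThreadsPerMultiProcessor, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nMaxThreadsPerBlock,
                           cudaDevAttrMaxThreadsPerBlock, ctx.nCudaDeviceId);
    int sharedMemPerBlock = 0;
    cudaDeviceGetAttribute(&sharedMemPerBlock,
                           cudaDevAttrMaxSharedMemoryPerBlock, ctx.nCudaDeviceId);
    ctx.nSharedMemPerBlock = sharedMemPerBlock;
    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMajor,
                           cudaDevAttrComputeCapabilityMajor, ctx.nCudaDeviceId);
    cudaDeviceGetAttribute(&ctx.nCudaDevAttrComputeCapabilityMinor,
                           cudaDevAttrComputeCapabilityMinor, ctx.nCudaDeviceId);
    cudaStreamGetFlags(hStream, &ctx.nStreamFlags); // MUST be set (NPP 10.2 and beyond)
    return ctx;
}
```

A context built this way can then be passed as the final argument to any _Ctx function, and the call will execute on the stream stored in the context.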
Note that NPP has deprecated the nppicom JPEG compression library as of NPP 11.0; use the NVJPEG library instead.
Supported NVIDIA Hardware
NPP runs on all CUDA-capable NVIDIA hardware. For details, please see http://www.nvidia.com/object/cuda_learn_products.html