DeepStream-3D Sensor Fusion Multi-Modal Application and Framework — DeepStream documentation

The deepstream-3d-lidar-sensor-fusion sample application showcases multi-modal sensor fusion pipelines for LiDAR and camera data using the DS3D framework. With the DS3D framework, this application can set up different LiDAR/RADAR/camera sensor fusion models and late-fusion inference pipelines with several key features.

The deepstream-3d-lidar-sensor-fusion sample application and its source code are located at app/sample_apps/deepstream-3d-lidar-sensor-fusion/ for your reference.

There are two multi-modal sensor fusion pipelines for LiDAR and camera data, enabling 3D detections.

Example 1. BEVFusion Multi-Modal with 6-Camera Plus 1-LiDAR Data Fusion Pipeline#

Refer to DS3D BEVFusion Setup with Triton for setup instructions.

Example 2. V2XFusion Multi-Modal Batched 4-Camera and 4-LiDAR Inference Pipeline#

Refer to DS3D V2XFusion Setup for setup instructions.

DS-3D Lidar-Camera V2XFusion pipeline overview

Quick Start#

Run the following commands outside of the container:

$ export DISPLAY=:0.0 # set the correct display number if DISPLAY is not exported
$ xhost +
$ cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion

Create directories for the dataset and model repository on the host; they will be mounted into the container for the BEVFusion tests:

$ sudo chmod -R a+rw . # Grant read/write permission for all of the files in this folder
$ mkdir -p data bevfusion/model_root
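The directory step above can be sanity-checked with a short sketch (not part of the sample); it recreates the same two paths and confirms they are writable before they are bind-mounted into the container:

```shell
# Recreate the host directories from the step above and confirm they are
# writable before bind-mounting them into the container.
mkdir -p data bevfusion/model_root
for d in data bevfusion/model_root; do
  if [ -w "$d" ]; then
    echo "$d: writable"
  else
    echo "$d: not writable; fix permissions, e.g. sudo chmod -R a+rw $d" >&2
  fi
done
```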

If any scripts are run outside of the container, or if you encounter file read/write permission errors, run the commands with sudo -E.

Note

Run the following commands in every terminal outside of the container; otherwise you may see errors such as xhost: unable to open display.

$ export DISPLAY=:0.0 # set the correct display number if DISPLAY is not exported
$ xhost +
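As a convenience, a small check along the following lines (a sketch, not part of the sample) can warn before xhost fails:

```shell
# Warn early if DISPLAY is unset, instead of hitting
# "xhost: unable to open display" later.
if [ -z "${DISPLAY:-}" ]; then
  display_status="unset"
  echo "DISPLAY is not set; export it first, e.g. export DISPLAY=:0.0" >&2
else
  display_status="set"
  echo "DISPLAY=$DISPLAY"
fi
```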

BEVFusion Pipeline Demo Setup#

V2XFusion Pipeline Demo Setup#

Refer to the detailed instructions in DS3D V2XFusion Setup.

Run the following commands outside of the container:

$ xhost +

$ export DOCKER_GPU_ARG="--runtime nvidia --privileged" # for Jetson Orin

$ export DOCKER_GPU_ARG="--gpus all" # for x86
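The two exports above can also be selected automatically; the sketch below assumes Jetson boards expose /etc/nv_tegra_release, which is a common but unofficial heuristic:

```shell
# Pick DOCKER_GPU_ARG based on the platform. Assumption: Jetson L4T
# systems ship /etc/nv_tegra_release; everything else is treated as x86.
if [ -f /etc/nv_tegra_release ]; then
  export DOCKER_GPU_ARG="--runtime nvidia --privileged" # Jetson Orin
else
  export DOCKER_GPU_ARG="--gpus all" # x86
fi
echo "DOCKER_GPU_ARG=$DOCKER_GPU_ARG"
```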

Start the container interactively:

$ docker run $DOCKER_GPU_ARG -it --rm --ipc=host --net=host -v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
-w /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion \
nvcr.io/nvidia/deepstream:{xx.xx.xx}-triton-multiarch

{xx.xx.xx} is the DeepStream SDK version number.

After starting the container with docker run, run the following V2XFusion setup instructions inside it. If this step is skipped on Jetson, run the following instructions directly on the host.
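If it is unclear whether a given terminal is inside the container, a quick heuristic check can help (Docker creates /.dockerenv at the container root; this is a convention, not a guarantee):

```shell
# Heuristic: Docker creates /.dockerenv at the container root.
if [ -f /.dockerenv ]; then
  runtime_env="container"
else
  runtime_env="host"
fi
echo "running on: $runtime_env"
```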

Run the following commands inside the deepstream-triton container:

$ cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion
$ deepstream-3d-lidar-sensor-fusion -c ds3d_lidar_video_sensor_v2x_fusion.yml

Build application From Source#

$ cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-lidar-sensor-fusion
$ make
$ sudo make install # sudo not required in the case of docker containers

Note

To compile the sources, run make with sudo -E or root permission.

DS3D Components used in this sample application#

This section describes the DS3D components used in the deepstream-3d-lidar-sensor-fusion pipeline.

LiDAR Data Loading#

Video Data Bridging Into DS3D ds3d::datamap#

Data Mixing#

LiDAR/Camera Data Alignment/Calibration Filtering#

LiDAR Data V2X Preprocess Filtering#

LiDAR/Camera Data GLES Rendering#

Data Inference Filtering#

Custom Post-Processing for LiDAR Detection#

BEVFusion Model Inference with Triton-LMM#

DS3D Custom Components Configuration Specifications#

See more details in the DS_3D supported custom components specifications section in the DeepStream-3D Custom Apps and Libs Tutorials.