[Jetson Orin NX + DeepStream 7.1] Failed in mem copy after running for some time

Hi all,

I’m working on a DeepStream pipeline on a Jetson Orin NX (16GB), but the application keeps crashing with memory copy errors after running for a while (sometimes after 1 minute, sometimes after 10). I’m hoping someone can help me identify what might be going wrong.

Environment

• Hardware Platform: Jetson Orin NX (16GB)
• DeepStream Version: 7.1
• JetPack Version: 6.2
• TensorRT Version: 10.3.0.30
• Issue Type: bug

Pipeline:

v4l2src → capsfilter → nvv4l2decoder → nvvideoconvert → capsfilter → nvstreammux → nvinfer → nvtracker → appsink
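
For context, here is a rough Python (parse-launch) equivalent of that pipeline. The caps, device path, and config/library paths below are placeholders for illustration (assuming an MJPEG camera), not my actual values:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Rough parse-launch equivalent of the pipeline above; all paths,
# caps, and sizes are placeholders, not the values from my app.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! "
    "image/jpeg,width=1920,height=1080,framerate=30/1 ! "
    "nvv4l2decoder mjpeg=1 ! "
    "nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=config_infer_yolov7.txt ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! "
    "appsink name=sink emit-signals=true sync=false"
)
```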

Model:

Self-trained YOLOv7 model

Issue

After running my DeepStream app for a while, it crashes with the following errors:

cudaMemcpy2DAsync failed with error cudaErrorIllegalAddress
nvbufsurftransform_copy.cpp:438 => Failed in mem copy
get_converted_buffer failed
gst_nvinfer_process_full_frame: error: Buffer conversion failed

Sometimes, I also see:

PyCUDA ERROR: The context stack was not empty upon module cleanup.

What I’ve Tried:

I’ve followed suggestions from other threads and tried adjusting nvbuf-memory-type and compute-hw in the pipeline, but the crashes continue.
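
For reference, this is roughly what those tweaks look like when set from Python (the element variable names are placeholders, and the exact values I tried varied between runs):

```python
# `streammux` and `conv` stand in for the nvstreammux and nvvideoconvert
# elements; the values below are examples, not the only ones tried.
streammux.set_property("nvbuf-memory-type", 0)  # 0 = platform default
conv.set_property("nvbuf-memory-type", 0)
conv.set_property("compute-hw", 1)  # 0 = default, 1 = GPU, 2 = VIC
```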

Please let me know if additional logs or config files would be helpful. Thanks in advance!

Hi,

My guess is that the buffer’s memory is being accessed after it has been released by GStreamer. I’m not sure how you’re obtaining the buffer, but you might try using gst_buffer_ref to increase the buffer’s reference count. This ensures the buffer isn’t prematurely freed. Just make sure to call gst_buffer_unref once you’re done with it to avoid memory leaks.
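
If your app is Python (the PyCUDA error suggests it is), PyGObject manages the refcount for you via the GstSample, so the equivalent safeguard is to keep the sample alive and copy the data out while the buffer is mapped. A minimal sketch of that pattern in an appsink callback:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def on_new_sample(sink):
    # Pulling the sample holds a reference to the underlying GstBuffer
    # for as long as `sample` is alive.
    sample = sink.emit("pull-sample")
    if sample is None:
        return Gst.FlowReturn.ERROR
    buf = sample.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if not ok:
        return Gst.FlowReturn.ERROR
    try:
        frame = bytes(mapinfo.data)  # copy the data out while mapped
    finally:
        buf.unmap(mapinfo)
    # process `frame` here; never keep raw pointers into the buffer
    return Gst.FlowReturn.OK
```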

Some other things you can try:

• Replace appsink with fakesink to check whether the illegal access originates downstream of the inference elements.
• Run the application under cuda-memcheck (cuda-memcheck ./app) to pinpoint the exact source of the illegal address.
• Set nvbuf-memory-type=0 on the memory-handling elements to rule out a buffer memory-type mismatch.

yuweiw June 13, 2025, 2:06am 4

Could you try to just set the copy-hw=2 for the nvvideoconvert plugin?
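
If you set properties from Python, that would look roughly like this (the element name is a placeholder):

```python
conv = pipeline.get_by_name("conv")  # placeholder element name
# 2 should select the VIC copy path (assumption based on compute-hw's
# enum; confirm with gst-inspect-1.0 nvvideoconvert)
conv.set_property("copy-hw", 2)
```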

zxc8913 June 13, 2025, 6:23am 5

@miguel.taylor
Hi, thanks for the detailed suggestions!

I’ll definitely try replacing appsink with fakesink to check whether the illegal access is happening downstream. Also planning to run cuda-memcheck ./app and will report back with any specific findings.

As for gst_buffer_ref(), I’m not manually accessing buffers in the pipeline — it’s all handled through GStreamer elements — but I’ll inspect any interaction around appsink just in case.

For nvbuf-memory-type=0, I did test it and enabled GST_DEBUG=3. Unfortunately, I still got the following errors:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438 => Failed in mem copy  
cudaErrorIllegalAddress  
cuDCFFrameTransformTexture.cu 693  
gstnvtracker: Low-level tracker lib returned error 1  
gst_nvinfer_process_full_frame: Buffer conversion failed

I have attached a more detailed log for your convenience.
nvbuf-memory-type = 0.txt (1.5 KB)

Appreciate the help — will update again after running the fakesink and cuda-memcheck tests.

zxc8913 June 13, 2025, 6:25am 6

@yuweiw

Hi yuweiw, thanks for the tip!

I tested with copy-hw=2 on the nvvideoconvert plugin and enabled GST_DEBUG=3.

Unfortunately, I still encountered the same memory copy failures and illegal access errors.

I’ve attached the full log file in case there’s anything you spot.

copy-hw = 2.txt (2.6 KB)

Let me know if you think I should adjust other parameters.

Thanks again!

yuweiw June 13, 2025, 6:58am 7

  1. Did you set copy-hw=2 only on the nvvideoconvert before nvinfer, or have you set memory-related parameters on other plugins as well?
  2. Could you try to narrow down the pipeline by removing plugins one by one to locate which element triggers the issue?

zxc8913 June 17, 2025, 3:04am 8

Hi yuweiw,

  1. Yes, I’ve only set the copy-hw=2 parameter on the nvvideoconvert element before nvinfer. There are no other memory-related parameters set for any other plugins in the pipeline.
  2. I’ve tried narrowing down the pipeline by removing plugins. Through this process, it seems the issue might be related to the appsink. Interestingly, even when I replace appsink with autovideosink, the problem still persists.

yuweiw June 17, 2025, 6:29am 9

How did you confirm that? What if you use the nv3dsink?
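
For a quick check, something like this (sketch; the variable names are placeholders for your elements):

```python
# Swap in nv3dsink, the recommended display sink on Jetson, to rule
# the sink element out.
sink = Gst.ElementFactory.make("nv3dsink", "display-sink")
sink.set_property("sync", False)
pipeline.add(sink)
tracker.link(sink)  # `tracker` = your nvtracker element
```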