GPU memory issues (leak?)
I am running a loop where I initialize a new model in each iteration and train it. I am using NVIDIA Apex for mixed precision training. My problem is that there seem to be unwanted memory allocations that persist across iterations: GPU memory accumulates, and after a few iterations CUDA runs out of memory.
I have debugged this thoroughly, monitored memory usage, and deleted every object I possibly can between iterations. Only after removing Apex does memory allocation stay consistent. I am doing nothing more than adding the three lines of code from the tutorial for initialization and the backward pass.
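For reference, a minimal sketch of what my loop looks like (names like `make_model` and `train_one_epoch` are placeholders for my actual code; the `amp` lines are the ones from the Apex tutorial):

```python
import gc
import torch
from apex import amp

for run in range(num_runs):  # num_runs: placeholder for my sweep length
    model = make_model().cuda()          # fresh model each iteration
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # the three tutorial lines: initialize + scaled backward
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    for inputs, targets in train_one_epoch():  # placeholder data loop
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()

    # explicit cleanup I already tried between iterations
    del model, optimizer
    gc.collect()
    torch.cuda.empty_cache()
```

I am aware the Apex docs say `amp.initialize` should be called once per process, so I suspect repeated calls in the loop might be holding on to internal state, but I have not confirmed this.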
Any ideas?