Unexpected Memory Leak When Using np.concatenate with Large Arrays in NumPy 1.25
I'm running into a significant memory leak when using `np.concatenate` with large arrays in NumPy 1.25. When I concatenate two large arrays, system memory usage spikes dramatically, and the memory doesn't appear to be released after the operation. Here's a simplified version of what I'm doing:

```python
import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Attempt to concatenate
result = np.concatenate((a, b))
```

After running the above code in a loop to repeat the concatenation multiple times, memory usage increases continuously until the kernel crashes. I've monitored the memory with tools like `memory_profiler`, and it shows that usage does not drop back down after each iteration.

I tried using `np.array` instead of `np.concatenate`, and while that seems to reduce the spike, my goal is to combine these arrays efficiently. I've also made sure that no references to the old arrays remain after concatenation, yet the problem persists.

Is there an issue with how `np.concatenate` handles memory in this version, or am I missing something in my implementation? Any help would be appreciated!
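
In case it helps, here is roughly how I'm driving the loop and sampling memory. The iteration count, array sizes, and sampling parameters below are placeholders for illustration, not my exact script, but the structure is the same:

```python
import numpy as np
from memory_profiler import memory_usage  # pip install memory_profiler

def concat_once():
    # Same shapes as in the snippet above; sizes are illustrative
    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    result = np.concatenate((a, b))
    return result.nbytes

for i in range(100):
    concat_once()
    # Sample the current process's resident memory (in MiB) for ~0.1 s
    samples = memory_usage(-1, interval=0.1, timeout=0.1)
    print(f"iteration {i}: ~{max(samples):.1f} MiB")
```

With this kind of loop, the reported memory keeps climbing from one iteration to the next instead of staying flat.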