Noncropped Percepts

I changed the percepUnit class so that percepts are stored at a fixed resolution: the full resolution of the input frames. This way OpenCV does not try to reallocate any memory when merging percepts, and indeed my leak is gone. That is the good news. The bad news is that, because of the size of the input frames (1920×1080) and the fact that all percepts segmented from a single frame hold their own copy of the same data, memory usage is extreme. Before, I could probably hold 3000+ percepts in memory; now I can fit only about 300. This makes sense, since there are about 100 percepts per input frame. Following are the percepts after a 25,000 frame test generating 200 percepts:


Note the long-exposure effect I was expecting. The image is also much more cohesive, since the percepts are locked into place. I'm going to run a test of this code on the whole ~250,000 frame sequence and see what the results are like. I also need to think about how to deal with the memory usage problem. I could add a second class that holds each frame, and all the regions segmented from that frame would hold a reference to it. This could decrease memory usage roughly 100-fold, but it's unclear how the clustering process would work in this context… Another option is simply to fix the percept size to some arbitrary resolution that suits the number of clusters I want, say 500×500 pixels. Larger percepts would then be more pixelated, and smaller percepts would be stored at an unnecessarily high resolution. Following is the debug plot from the test; note the stable memory usage: