This second test went well, although the per-frame processing time shows what looks like it could be a superlinear, possibly exponential, increase. I presume this is due to comparing each newly segmented percept against the set of all existing percepts:
This is not proper profiling, just the differences between the timestamps on the render images the system writes out. That means the system runs slower than it would without writing files to disk, and these numbers can't be compared to the trends in the following plots, nor do they show which functions use the most CPU time. According to this data, the minimum time to process one frame is 2 seconds and the maximum is 9 seconds, with the time increasing as more frames are processed. Processing time grows even though the number of background percepts is stable; the growing set of foreground percepts could be the cause, since those increase throughout this run. Note that no optimization has been done at this point. I was hoping to have dumps of the final percepts, but another bug in the code prevented that; it should be resolved for the next test.
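For reference, the makeshift timing described above can be reproduced by diffing the timestamps of the render images on disk. This is a minimal sketch under assumed conventions (the directory layout and `*.png` naming are hypothetical, not the system's actual output format):

```python
import os
from pathlib import Path

def frame_times(render_dir):
    """Estimate per-frame processing time from the modification
    timestamps of render images written by the system.
    NOTE: the glob pattern and directory layout are assumptions."""
    paths = sorted(Path(render_dir).glob("*.png"), key=os.path.getmtime)
    stamps = [os.path.getmtime(p) for p in paths]
    # The gap between consecutive writes approximates the time
    # spent processing one frame (plus file-I/O overhead).
    return [b - a for a, b in zip(stamps, stamps[1:])]
```

As noted, this includes disk-write overhead, so it overestimates the pure processing time per frame.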
The following plots are consistent with the previous test. After processing 20,000 frames, ~2000 background and 1124 foreground percepts were constructed.
Here are more screen grabs from this run: