Over the weekend I ran a 30,000 frame test, the longest run so far of the predictor together with the integrated segmentation and clustering system. The temporal instability has led to many percepts ending up extremely ephemeral. Following is an image that shows all percepts after 30,000 frames rendered on top of one another against a white background:
The sky, lit parts of the asphalt, and parts of the trees are well recognized, but the rest of the image is occupied by highly transparent, weak clusters. This test was run over a particularly challenging part of the input data where the sun sets behind the trees: the bright sun flares the lens, appears and disappears rapidly, and is bright enough to make the reflection of the camera visible in the glass. Following is the last captured frame for reference, along with a histogram of the mean mask value for each percept, which is what I have been using as a measure of how ephemeral a percept is.
Following are renderings of the 100 percepts whose masks have the greatest and least means, respectively:
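The measure and ranking above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual code: the array shapes, names, and random placeholder data are all assumptions; only the idea of averaging each percept's soft mask over pixels and frames, then sorting by that mean, comes from the text.

```python
import numpy as np

# Placeholder dimensions and data; the real system's masks would
# come from the segmentation/clustering pipeline, not a RNG.
rng = np.random.default_rng(0)
n_percepts, n_frames, h, w = 16, 100, 8, 8

# masks[p, t] is percept p's soft mask (values in 0..1) on frame t.
masks = rng.random((n_percepts, n_frames, h, w))

# Mean mask value per percept, averaged over frames and pixels.
# Low mean = highly transparent, ephemeral percept;
# high mean = stable, well-recognized percept.
mean_mask = masks.mean(axis=(1, 2, 3))

# Select the k percepts with the greatest and least means
# (k = 100 in the renderings above; smaller here for illustration).
k = 4
order = np.argsort(mean_mask)
weakest = order[:k]            # least means: most ephemeral
strongest = order[-k:][::-1]   # greatest means: most stable
```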
One idea for dealing with this is to filter percepts by how ephemeral they are and remove the weakest ones. This would reduce the number of percepts in the system and allow new ones to take their place. Unfortunately, this is not feasible because it would change the meaning of the slots in memory, which would interfere with predictor training: the predictor requires a fixed number of percepts whose content changes only gradually.