Clustering Test 3 (Foreground)

Posted: March 22, 2013 at 3:24 pm

Following is a test of foreground clustering. In this case the only features being used are colour (mean L, u and v values), and the threshold of similarity is somewhat arbitrary. The first thing to notice is that the clusters are quite poor compared to the background clusters. This is because foreground objects change a lot in area, position, aspect, size, etc., which is why those features are not used to cluster them. Background objects are much more stable because they don’t move around much with the static camera. The frames in the test stream were captured once per second, so foreground objects close to the camera move a lot between frames. The resulting clusters then appear quite strained: the constituent patterns don’t reinforce but conflict, because there is such a high likelihood of drastic changes in the same moving object between frames. Combine that with the limited number of frames in which they are present, and we end up with a video like the following, where sometimes only two frames are merged. Even if the threshold allowed more merges, they would likely be even more strained. Again, this video is extremely high-resolution, and the percepts presented don’t get cleared when they disappear from the input, so they accumulate in the image.
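To make the colour-only similarity test concrete, here is a minimal sketch in Python. The function names (`mean_luv`, `should_merge`) and the threshold value are my own placeholders, not the system’s actual code; it only illustrates comparing percepts by their mean L, u, v values against a somewhat arbitrary threshold.

```python
import numpy as np

# Assumed value; in the actual system the threshold is somewhat arbitrary
# and tuned by eye.
SIMILARITY_THRESHOLD = 15.0


def mean_luv(patch_luv):
    """Mean L, u, v of a foreground percept given as an (N, 3) array of Luv pixels."""
    return patch_luv.reshape(-1, 3).mean(axis=0)


def colour_distance(a, b):
    """Euclidean distance between two mean-Luv feature vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))


def should_merge(percept_a, percept_b, threshold=SIMILARITY_THRESHOLD):
    """True if two percepts are similar enough in colour to be clustered together."""
    return colour_distance(mean_luv(percept_a), mean_luv(percept_b)) < threshold
```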

I’m inclined to leave foreground clustering as is for two reasons: (1) The aesthetic that results from these “strained” clusters was very prominent in the NFF exhibition, and it was repeatedly commented that the subtle opacity appeared like water-colour, and how successful that was from an image-making perspective. (2) These examples show the current worst-case scenario, where foreground objects move quickly in front of the camera, so close that they occupy a significant portion of the frame. Additionally, my mixed aesthetic response to these strained accumulations is partially due to the hard edges at the borders of the frame, which will have to be solved with some blending method in any case.

Only a long-term test will show how foreground objects do (or don’t) converge. Such a long-term test requires an adjustment of the clustering algorithm. Currently, once the maximum number of clusters has been reached, the system simply stops learning, which is not appropriate for a long-term continuously generative work. The idea is that once that maximum number of clusters has been reached, new percepts will be merged with the nearest existing cluster, even if it’s not that close. This way previous learning will be obliterated by new learning, but if there are sufficient clusters, variation should be maintained. It seems this needs to be the next step before moving on to RL.
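Purely as an illustration of that proposed change, here is a sketch of merging into the nearest existing cluster once the cap is reached. The `Cluster` class, the running-mean update, the cap of 200 and the threshold are all assumptions for the sake of the example, not the system’s actual implementation.

```python
import numpy as np

MAX_CLUSTERS = 200           # assumed cap; the real maximum is system-specific
SIMILARITY_THRESHOLD = 15.0  # same arbitrary colour threshold as above


class Cluster:
    """A cluster summarised by a running mean of its percepts' Luv features (assumed representation)."""

    def __init__(self, feature):
        self.mean = np.asarray(feature, dtype=float)
        self.count = 1

    def merge(self, feature):
        self.count += 1
        self.mean += (np.asarray(feature, dtype=float) - self.mean) / self.count


def assign_percept(clusters, feature):
    """Assign a new percept's feature vector to a cluster.

    Below the cap, a dissimilar percept starts a new cluster (current behaviour).
    At the cap, it is merged with the nearest cluster even when the distance
    exceeds the threshold, so new learning gradually overwrites old learning
    (proposed behaviour).
    """
    feature = np.asarray(feature, dtype=float)
    if clusters:
        distances = [np.linalg.norm(c.mean - feature) for c in clusters]
        nearest = int(np.argmin(distances))
        if distances[nearest] < SIMILARITY_THRESHOLD or len(clusters) >= MAX_CLUSTERS:
            clusters[nearest].merge(feature)
            return nearest
    clusters.append(Cluster(feature))
    return len(clusters) - 1
```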