Clustering Test 3 (Foreground)
Posted: March 22, 2013 at 3:24 pm
Following is a test of foreground clustering. In this case the only features used are colour (mean L, u and v values), and the similarity threshold is somewhat arbitrary. The first thing to notice is that the clusters are quite poor compared to the background. This is because foreground objects change a lot in area, position, aspect, size, etc., which is why those features are not used to cluster them. Background objects are much more stable because they don't move around much with the static camera. The frames in the test stream were captured once per second, so foreground objects close to the camera move a lot between frames. The resulting clusters then appear quite strained: the constituent patterns conflict rather than reinforce, because drastic changes in the same moving object between frames are so likely. Combine that with the limited number of frames in which they are present, and we end up with video like the following, where sometimes only two frames are merged. Even if the threshold allowed more merges, they would likely be even more strained. Again, this video is extremely high-resolution, and the percepts presented are not cleared when they disappear from the input, so they accumulate in the image.
Clustering Test 2
Posted: March 21, 2013 at 8:33 am
Following is a new video constructed from the same frames as the previous clustering test. The issue with the fine horizontal lines in the masks was due to a bug introduced into the segmentation code during some optimization changes I made while writing the paper for Creativity and Cognition. That has been fixed, and I have also switched to extracting pixels from the background model, rather than the current frame, so no foreground percepts are included in the background. There are still hard edges around some masks, which will eventually need to be dealt with.
Clustering and Aesthetics
Posted: March 20, 2013 at 1:38 pm
The clustering code is working pretty well for background percepts. Following is a video that shows the raw frames on the left (in 720p) and the resulting clustered output on the right (also in 720p) through ~300 consecutive frames. Note the video is quite high resolution (2560×720), so the best performance is likely attained by downloading it (right click and “save video as”) and using a native video player. For each new frame, all regions in the previous frame are compared and clustered: if a pair is sufficiently similar, the corresponding regions in both frames are merged, by averaging, into a single percept.
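The per-frame merge step described above could be sketched roughly as follows. This is my own illustration, not the project's code: the percepts are plain feature vectors (a hypothetical layout would be x, y, width, height, area, mean L, u, v), the function name is invented, and the threshold is whatever value the similarity test uses.

```python
import math

def merge_frames(prev_percepts, new_percepts, threshold):
    """Compare each new region against the percepts carried over from
    the previous frame; a sufficiently similar pair is merged by
    averaging into a single percept, otherwise the new region is
    kept as a fresh percept. (Illustrative sketch only.)"""
    merged = list(prev_percepts)
    for new in new_percepts:
        # Euclidean distance from the new region to each existing percept
        dists = [math.dist(new, old) for old in merged] if merged else []
        if dists and min(dists) <= threshold:
            i = dists.index(min(dists))
            # merge by averaging the two feature vectors
            merged[i] = tuple((a + b) / 2 for a, b in zip(merged[i], new))
        else:
            merged.append(tuple(new))
    return merged
```

Averaging means a long-lived percept drifts toward each newly merged frame, which matches the "strained" look when the underlying object changes drastically between captures.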
Posted: March 15, 2013 at 4:57 pm
I have integrated the existing segmentation code into openframeworks and also implemented a first version of the new clustering algorithm (based on BSAS). This clustering algorithm is quite simple, and I'm currently using all features provided by segmentation — position (x, y), size (width, height, area), colour (mean of CIE L, u, v channels) — with similarity measured as Euclidean distance over all of those dimensions. I have only tested thus far with background percepts, with the following results:
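BSAS (the Basic Sequential Algorithmic Scheme) is simple enough to sketch in a few lines. This is a generic textbook version, not the project's implementation: each pattern is assigned to the nearest existing cluster if its Euclidean distance to that cluster's mean is within a dissimilarity threshold, and otherwise seeds a new cluster. The feature vectors here would be the segmentation features listed above; names and the optional cluster cap are my own.

```python
import math

def bsas(patterns, threshold, max_clusters=None):
    """Basic Sequential Algorithmic Scheme: one pass over the patterns,
    clusters represented by the running mean of their members."""
    means = []    # one running-mean feature vector per cluster
    counts = []   # number of members per cluster
    labels = []   # cluster index assigned to each pattern
    for p in patterns:
        if means:
            dists = [math.dist(p, m) for m in means]
            best = min(range(len(means)), key=lambda i: dists[i])
        if not means or (dists[best] > threshold and
                         (max_clusters is None or len(means) < max_clusters)):
            # too dissimilar to every existing cluster: start a new one
            means.append(list(p))
            counts.append(1)
            labels.append(len(means) - 1)
        else:
            # assign to nearest cluster and update its mean incrementally
            counts[best] += 1
            means[best] = [m + (x - m) / counts[best]
                           for m, x in zip(means[best], p)]
            labels.append(best)
    return labels, means
```

Because BSAS is sequential, the result depends on presentation order, which suits a stream of percepts arriving frame by frame. Note that when features with very different ranges (pixel positions vs. Luv channels) are mixed in one Euclidean distance, the larger-range features dominate unless the vectors are normalized.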
Free Energy, Prediction and MDP
Posted: March 5, 2013 at 10:31 am
I just finished reading a new Hobson paper (“Waking and dreaming consciousness: Neurobiological and functional considerations”), which updates Hobson's AIM model by integrating Friston's “free energy formulation”. The key points are that we can consider waking perception as a learning process in which the difference between what happens and what is expected drives more accurate predictions. REM sleep and waking are contiguous processes, and the lack of external stimulus during REM means there are no prediction errors, which triggers dream experiences. In my reading, Hobson proposes that visual images in dreams are the result of the ocular movements themselves (REMs) predicting visual percepts. He also proposes that dreams are of functional use because they manifest an optimization process: “one can improve models by removing redundant parameters to optimize prior beliefs. In our context, this corresponds to pruning redundant synaptic connections.” In short, dreaming improves the quality of the predictive model of the world, in the absence of sensory information, by pruning.