
Density of Dreams, Arousal and Training Samples

Posted: March 13, 2014 at 6:55 pm

Now that all the system components have been implemented, I’ve finally had a chance to get a proper look at the system’s behaviour. Following are three images that show the display during perception, mind-wandering and dreaming:

[Images: the display during perception, mind-wandering and dreaming]

My first impressions are that (1) the dreams tend to be too sparse, and their sparseness does not seem to change as training samples accumulate; and (2) dreaming and mind-wandering result in some pretty boring image sequences. Part of the reason for this is that dreaming and mind-wandering are initiated with the current state of activation, and since that state reflects the image as previously seen by the system, they tend to continue the visual pattern currently seen. In mind-wandering this means the images tend to look the same as the previous perceptual image; in dreaming they tend to be dark, due to the low light occurring before sleep. The theory was that the predictor would start at the current stimulus and diverge rapidly as feedback in the predictor leads to more and more tenuously related percepts. This does not seem to be the case; dreams and mind-wandering appear to just repeat a visual scene. Looking at the state changing over time shows that there are changing patterns of activation, but they seem to involve only the same small subset of percepts.
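
To make the intended mechanism concrete, here is a minimal sketch of that feedback loop, assuming a predictor wrapper with a predict() method over the vector of percept activations; the names and interface are illustrative, not the actual implementation:

    import numpy as np

    def free_run(predictor, activation, steps=100):
        # Closed-loop prediction: feed each predicted state of activation
        # back in as the next input. Dreaming and mind-wandering are seeded
        # with the current state of activation, i.e. the activation produced
        # by the last perceived frame.
        states = [activation]
        for _ in range(steps):
            activation = predictor.predict(activation)  # hypothetical MLP wrapper
            states.append(activation)
        return np.array(states)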

As the program was spending a very small amount of time perceiving (compared to dreaming and mind-wandering), I lowered the arousal threshold for triggering mind-wandering. It looks like this change has actually caused dreams to look more sparse, despite the network having been exposed to more training samples. As we found early on, it is quite easy for the network to learn stability rather than change when there is insufficient difference between training samples. Thus, the periods of low arousal (mind-wandering) are actually functional in the sense that they break up the slow and gradual changes of activation and focus perception on only large changes in the visual stimulus. So less perception actually seems to lead to better (or at least less sparse) dreaming and mind-wandering.
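
As a rough illustration of why those low-arousal periods help, the training pairs fed to the predictor could be gated on the amount of change between consecutive states, something like the following (the threshold and data layout are assumptions, not the actual code):

    import numpy as np

    def select_training_pairs(states, change_threshold):
        # Keep only consecutive activation states that differ enough.
        # When successive training samples are nearly identical, the
        # network can minimize error simply by predicting its input,
        # i.e. it learns stability rather than change.
        pairs = []
        prev = states[0]
        for state in states[1:]:
            if np.abs(state - prev).sum() > change_threshold:
                pairs.append((prev, state))  # (input, target) for the MLP
                prev = state
        return pairs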

There is a big open issue with arousal as currently implemented. I initially designed arousal such that it would reflect the amount of change in a visual scene, and so I annotated a portion of the input video to validate measures (does this peak in arousal correspond to this change in the visual scene?). I ended up with a very simple measure: the sum of the absolute differences in luminosity between subsequent frames. The more objects move between frames, and the larger they are, the larger the sum. Now that we have changed to a static frame for percepts (to solve the memory-leak problem), foreground objects tend to be lost in the merging process, due to the long-exposure effect of clusters changing over time. Presumably percepts could hold foreground objects for longer periods if there were a sufficient number of percepts, which is not the case (I'm currently working with 1000 percepts at 1280×720 pixels). Thus, we are using a measure of arousal dependent on moving objects that are themselves very poorly represented in percepts. This is clearly a problematic disconnect. Since the shift to fixed-resolution percepts, I have considered the possibility that foreground objects would simply not be well represented, and therefore would not appear in dreams. If this is the case, what is an appropriate measure of arousal at the scale of slow changes in the background, where the short-term movements of foreground objects are ignored?
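
For reference, the frame-difference measure amounts to something like this (a sketch using OpenCV and numpy; the real implementation may differ in details such as colour conversion and normalization):

    import cv2
    import numpy as np

    def frame_difference_arousal(prev_frame, frame):
        # Sum of absolute differences in luminosity between subsequent
        # frames: the more (and the larger) the objects that move between
        # frames, the larger the sum.
        prev_lum = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lum = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        return float(np.abs(lum - prev_lum).sum())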

One option is to keep the current measure but pass it through a running-average filter with a very large window. This would ignore the small changes caused by foreground objects, but it would also not be very useful for controlling mind-wandering and perception, as it would change state very rarely. Actually, I have that wrong, because we can use a different measure of arousal for training than we do for triggering perception / mind-wandering. The question of what criteria to use to determine when to feed the current state of activation to the MLP remains. Another idea is to use the difference between the mean luminosity of subsequent frames, with a small amount of smoothing; this could give a better indication of large changes in the scene with less impact from moving foreground objects. Of course, this also means specifying another threshold. Yet another idea is to use the MSE from the predictor to determine the onset of mind-wandering: trigger mind-wandering when the MSE decreases by a certain amount. Of course, this would not be useful for determining the onset of perception, as by definition the MSE would not change during mind-wandering. I would like to get away from specifying these thresholds and allow some kind of self-organizing mechanism to determine when day and night happen, and the switch between perception (waking) and mind-wandering.
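
To compare the first two alternatives, here is a sketch of both the heavily smoothed running average of the current measure and the smoothed difference in mean luminosity; the window size and smoothing factor are placeholders I have not tested:

    from collections import deque

    class BackgroundArousal:
        def __init__(self, window=1000, alpha=0.05):
            self.diffs = deque(maxlen=window)  # running-average variant
            self.alpha = alpha                 # smoothing for mean-luminosity variant
            self.smoothed = 0.0
            self.prev_mean = None

        def running_average(self, frame_diff_arousal):
            # Option 1: average the frame-difference arousal over a very
            # large window, ignoring short-term foreground movement.
            self.diffs.append(frame_diff_arousal)
            return sum(self.diffs) / len(self.diffs)

        def mean_luminosity_change(self, lum_frame):
            # Option 2: exponentially smoothed difference between the mean
            # luminosities of subsequent frames; large scene-wide changes
            # dominate, while small moving objects barely shift the mean.
            mean = float(lum_frame.mean())
            if self.prev_mean is not None:
                diff = abs(mean - self.prev_mean)
                self.smoothed += self.alpha * (diff - self.smoothed)
            self.prev_mean = mean
            return self.smoothed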