
Data Stability Over Time

Posted: October 31, 2013 at 6:01 pm

After meeting with Philippe, we decided that the static dreams could be due to the variation over time in the data-set being “seen” as noise by the ANN, which is attempting to generalize over time. This would explain why the dreams are static: they reflect the stable structure and ignore the noise.

The first step in examining this issue was to look at the stability of my data, which turns out to be highly stable over time. The following plot shows the amount of change (in %) between subsequent frames in the foreground and background data-sets:

[Figure: state.delta — per-frame change (%) in the foreground and background data-sets]

What this tells us is that there is very little change between frames (except early in the background data), which is not contributing to the training process. So I tried filtering out all the data that did not have at least a 5% change between frames, which reduces the data-set to ~400 data-points. The results of learning from this smaller data-set are as follows: the left pane is the input, the middle pane is the raw output, and the right pane is the output with threshold applied. (The left-most column of input results in the left-most column of output.)
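The frame-to-frame change measure and the 5% filter described above can be sketched as follows. This is a minimal sketch, assuming the data-set is a matrix of binary cluster states (one row per frame); the frame count, cluster count, and random data here are hypothetical stand-ins, not the real data-set.

```python
import numpy as np

# Hypothetical stand-in for the real data-set: each row is one frame,
# each column a cluster's binary state (1 = active, 0 = inactive).
rng = np.random.default_rng(0)
frames = rng.integers(0, 2, size=(1000, 100))

# Percent of clusters that change state between subsequent frames
# (this is the quantity plotted in the delta figure).
delta = np.abs(np.diff(frames, axis=0)).mean(axis=1) * 100

# Keep only frames where at least 5% of clusters changed since the
# previous frame; the first frame has no predecessor, so it is dropped.
keep = delta >= 5.0
filtered = frames[1:][keep]
```

On the real, highly stable data this filter shrinks the set drastically (to ~400 points); on the random data above it would keep nearly everything, since random frames change constantly.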

[Figure: fann_results — input (left), raw output (middle), thresholded output (right)]

There is clearly more density, but the network does not behave any differently when run in a feedback loop to generate a dream:

[Figure: learn_sequence_epoch-filtered-50.dream.rawOutput-crop — dream generated from the 5%-filtered training set]

The network stabilizes to a near-static pattern after the first iteration. Since this training set contains only ~400 data-points, that may not be enough for any learning, so to strike a balance I filtered the data such that only inputs where 10 or more clusters change state remain, which results in a subset of 8096 elements. Following are the training results and the resulting dream (click on the image for the full-resolution version):

[Figure: learn_sequence_epoch-filtered-10.results-thumb-crop — training results on the 10-cluster-change subset]

For the first time we have a significant dream of 236 iterations before the network stabilizes on a static output:

[Figure: learn_sequence_epoch-filtered-10.dream-crop — 236-iteration dream before stabilization]
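The dream-generation feedback loop used above can be sketched as follows. This is a minimal sketch: the single random weight layer and sigmoid stand in for the trained MLP (the real network was trained with FANN), and the sizes and threshold are hypothetical. The loop feeds each thresholded output back in as the next input and stops when the output stops changing, which is how the iteration counts quoted in the post are obtained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the trained network: one random weight
# matrix with a sigmoid activation.
W = rng.normal(scale=0.1, size=(100, 100))

def mlp(x):
    return 1.0 / (1.0 + np.exp(-(x @ W)))

# Feedback loop: the thresholded output of one iteration becomes the
# input of the next ("dreaming"), up to a fixed iteration budget.
state = rng.integers(0, 2, size=100).astype(float)
dream = [state]
for _ in range(1000):
    state = (mlp(state) > 0.5).astype(float)  # apply threshold
    if np.array_equal(state, dream[-1]):      # output has gone static
        break
    dream.append(state)

print(f"dream length: {len(dream)} iterations")
```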

There is a lot of work to be done, but this is the first time we’ve seen a dream that has not stabilized in under 10 iterations, and all this is still with a non-recurrent MLP. The relation between static images and dynamic (moving) images is interesting because children’s dreams tend to be more static, presumably due to the lack of development of their predictive systems. I wonder if the dreams of this system are less likely to be static with more training samples…