Data Stability Over Time

Posted: October 31, 2013 at 6:01 pm

After meeting with Philippe, we decided that the static dreams could be caused by the ANN treating variation over time in the data-set as noise: since the network attempts to generalize over time, it would reproduce the stable component of the data and ignore the variation, which would explain why the dreams are static.

The first step in examining this issue was to look at the stability of my data, which turns out to be highly stable over time. The following plot shows the amount of change (in %) between subsequent frames in the foreground and background data-sets:

[Plot: state.delta — % change between subsequent frames in the foreground and background data-sets]
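For reference, here is a minimal sketch of one plausible change metric, assuming "change" means the fraction of Boolean state elements that differ between consecutive frames (a normalized Hamming distance); the function name and types are mine, not the project's code:

```cpp
#include <vector>
#include <cstddef>

// Percent change between two consecutive Boolean state vectors,
// measured as the fraction of elements that differ.
double percentChange(const std::vector<bool> &prev,
                     const std::vector<bool> &cur)
{
    std::size_t differing = 0;
    for (std::size_t i = 0; i < prev.size(); ++i)
        if (prev[i] != cur[i])
            ++differing;
    return 100.0 * differing / prev.size();
}
```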



Dream Simulation (failure)

Posted: October 24, 2013 at 10:40 am

After training the MLP, I thought I should try running it in feedback mode to see if it would actually be predictive. It is not. There are some shifts in input patterns at the start of the dream, but it takes only 8 iterations to settle into reproducing a stable pattern. The following image shows the results of the first ~1000 iterations of the dream (truncated from 10,000 iterations). The left image is the raw output while the right is the thresholded version. Note the slight changes in output patterns early in the sequence (far left edge of both panels).

[Image: backgroundState_sequential_dream.results — raw (left) and thresholded (right) dream output]
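To make "feedback mode" concrete, here is a minimal sketch assuming the FANN C API: each output vector is copied back in as the next input, so the network runs without external stimulus. The function and its thresholding comment are my assumptions, not the project's exact code.

```cpp
#include <fann.h>

// Run a trained MLP in feedback mode: each output becomes the
// next input, producing a "dream" sequence.
void dream(struct fann *ann, fann_type *state, int iterations)
{
    const unsigned n = fann_get_num_output(ann);
    for (int i = 0; i < iterations; ++i) {
        fann_type *out = fann_run(ann, state); // points to internal buffer
        for (unsigned j = 0; j < n; ++j) {
            state[j] = out[j];                 // raw feedback
            // thresholded variant: state[j] = (out[j] > 0.0f) ? 1.0f : -1.0f;
        }
        // log or render `state` here to build the dream sequence
    }
}
```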


Machine Learning Progress? (non-recurrent MLP)

Posted: October 21, 2013 at 4:59 pm

I’ve put my memory-leak problems aside for now to continue working on the project; the next phase is the ML stuff. I’m now using FANN: although OpenNN was nicer, more complete, and more actively developed, it does not provide the online / sequential learning functions this project needs.

This is my second attempt to train an MLP on plausible data produced by the system. The input is a set of 41,887 state vectors (representing the presence of clusters at each moment in time) produced by a previous run of the segmentation and clustering system. Each element in a vector is a Boolean value, one per perceptual cluster: 0 when the cluster is absent from the frame and 1 when it is present. For training, these 0/1 values are scaled to -1 to +1. The previous attempt appeared to work because the output resembled the input, but after running prototype feedback (dreaming) code I realized the network had been trained merely to reproduce the current input pattern, not to predict the next one.
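A minimal sketch of how next-state training pairs could be assembled under that scaling, assuming FANN 2.2's fann_create_train; the makeNextStateData name and the states array are hypothetical stand-ins for the project's actual data structures:

```cpp
#include <fann.h>

// Build (state_t -> state_t+1) training pairs from Boolean state
// vectors, scaling {0,1} to {-1,+1}.
struct fann_train_data *makeNextStateData(bool **states,
                                          unsigned numStates,
                                          unsigned numClusters)
{
    struct fann_train_data *data =
        fann_create_train(numStates - 1, numClusters, numClusters);
    for (unsigned t = 0; t + 1 < numStates; ++t) {
        for (unsigned j = 0; j < numClusters; ++j) {
            data->input[t][j]  = states[t][j]     ? 1.0f : -1.0f;
            data->output[t][j] = states[t + 1][j] ? 1.0f : -1.0f;
        }
    }
    return data;
}
```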

The MLP here serves as a canonical case against which future sequential-learning results can be compared. It contains three layers (1026 input, 103 hidden, and 1026 output units) and was presented the whole input set over 50 epochs. The network was presented a single state at each iteration, not a window of states over time. The code is a modified version of FANN's xor_example.cpp and uses the RPROP learning algorithm, with weights initialized to random values between -1 and +1.
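A sketch of what that FANN setup might look like, assuming symmetric sigmoid activations (consistent with the -1 to +1 scaling); the file names and report interval are placeholders, not the project's actual values:

```cpp
#include <fann.h>

int main(void)
{
    // Three-layer MLP: 1026 input, 103 hidden, 1026 output units.
    struct fann *ann = fann_create_standard(3, 1026, 103, 1026);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);
    fann_randomize_weights(ann, -1.0f, 1.0f); // weights in [-1, +1]

    // Train on the full 41,887-state set for 50 epochs.
    struct fann_train_data *data =
        fann_read_train_from_file("stateVectors.data"); // placeholder name
    fann_train_on_data(ann, data, 50, 5, 0.0001f);

    fann_save(ann, "stateMLP.net"); // placeholder name
    fann_destroy_train(data);
    fann_destroy(ann);
    return 0;
}
```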


An Integrative Theory of Visual Mentation and Spontaneous Creativity

PDF Document (Submitted for Review)
PDF Document (Version of Record)

[B. D. R. Bogart, P. Pasquier, and S. J. Barnes. An integrative theory of visual mentation and spontaneous creativity. In C&C ’13: Proceedings of the 9th ACM Conference on Creativity and Cognition, pages 264–274. ACM, 2013.]

It has been suggested that creativity can be functionally segregated into two processes: spontaneous and deliberate. In this paper, we propose that the spontaneous aspect of creativity is enabled by the same neural simulation mechanisms that have been implicated in visual mentation (e.g. visual perception, mental imagery, mind-wandering and dreaming). This proposal is developed into an Integrative Theory that serves as the foundation for a computational model of dreaming and site-specific artwork: A Machine that Dreams.