The Readability of Propagation

Posted: January 16, 2013 at 5:17 pm

I have not posted in some time due to working on a paper for Creativity and Cognition (now submitted), which I will post at some point. The paper attempts to integrate the theories of dreaming and mental imagery that have been in play in this project with creativity and the default mode network (in relation to dreaming and mind-wandering).

I’ve also been working on a “system specification” which has turned into a submission for The International Conference on Computational Creativity. As part of that process my supervisor has asked me to prototype the way activation is propagated. This aspect of the project is the least rooted in theory; its mechanism comes directly from previous work on Dreaming Machine #2. The gist is that we sort percepts by similarity, and activation then propagates between similar (neighbouring) percepts, decaying to the degree that they are dissimilar. An initial stimulus therefore activates very similar percepts first, but as the activation propagates, the presented percepts diverge further and further from the initial stimulus.
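As a concrete illustration, a minimal sketch of this decay rule along a single sorted feature dimension might look like the following (the function name, the linear decay, and the threshold are my assumptions, not the prototype’s actual code):

```python
def propagate(values, start, initial=1.0, floor=0.05):
    """Spread activation outward from values[start] in a similarity-sorted list.

    At each step the signal decays in proportion to the dissimilarity
    (distance) between neighbouring feature values; propagation stops
    once the signal falls below `floor`. All parameters are hypothetical.
    """
    activation = [0.0] * len(values)
    activation[start] = initial
    # propagate leftward and rightward independently
    for step in (-1, 1):
        signal, i = initial, start
        while 0 <= i + step < len(values):
            # decay scales with the dissimilarity between neighbours
            dissimilarity = abs(values[i + step] - values[i])
            signal *= max(0.0, 1.0 - dissimilarity)
            if signal < floor:
                break
            i += step
            activation[i] = signal
    return activation

# e.g. percepts sorted along one feature dimension (say, hue in [0, 1])
acts = propagate([0.1, 0.15, 0.2, 0.5, 0.9], start=2)
```

Close neighbours of the stimulus end up strongly activated, while percepts further away along the dimension receive progressively less, which is the divergence described above.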

The idea for this iteration, as described in the proposal, is that each feature of a percept (colour, position, area, etc.) has an independent network in which percepts are ordinally sorted to reflect their similarity along that dimension. This was inspired by biological neurons, which can have thousands of connections. One problem with having activation propagate through all dimensions is the potential for over-activation. An early idea was to limit propagation to one network at a time: a signal would propagate along a single network until it looped on itself or reached an edge, at which point it would change the dimension of propagation. The new dimension is chosen as the “prominent feature” of the last activated percept, the feature that makes that percept stand out most from the crowd. This is the propagation scheme that was implemented for the New Forms Festival prototype of the system.
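A hedged sketch of this dimension-switching rule follows. Everything here is an assumption on my part: the function names, the interpretation of “prominent feature” as the dimension with the largest z-score deviation, and the rule for picking the next neighbour.

```python
from statistics import mean, pstdev

def prominent_feature(percepts, idx):
    """Dimension along which percept `idx` deviates most from the crowd
    (largest z-score); one guess at what "prominent feature" could mean."""
    best, best_z = None, -1.0
    for d in percepts[idx]:
        vals = [p[d] for p in percepts]
        sd = pstdev(vals) or 1.0
        z = abs(percepts[idx][d] - mean(vals)) / sd
        if z > best_z:
            best, best_z = d, z
    return best

def walk(percepts, start, max_steps=8):
    """Trace one propagation path: step to a neighbour in the current
    dimension's sorted order; at an edge or an already-visited percept
    (a loop), switch to the last percept's prominent feature."""
    path, visited = [start], {start}
    dim = prominent_feature(percepts, start)
    for _ in range(max_steps):
        order = sorted(range(len(percepts)), key=lambda i: percepts[i][dim])
        pos = order.index(path[-1])
        # try the neighbour on either side (direction choice is an assumption)
        nxt = next((order[pos + s] for s in (1, -1)
                    if 0 <= pos + s < len(order) and order[pos + s] not in visited),
                   None)
        if nxt is None:
            # edge or loop: change the dimension of propagation
            new_dim = prominent_feature(percepts, path[-1])
            if new_dim == dim:
                break  # nowhere left to go in this sketch
            dim = new_dim
            continue
        visited.add(nxt)
        path.append(nxt)
    return path

# toy percepts with two feature dimensions (hypothetical values)
percepts = [
    {"time": 0, "hue": 0.5},
    {"time": 1, "hue": 0.1},
    {"time": 2, "hue": 0.9},
    {"time": 3, "hue": 0.2},
    {"time": 4, "hue": 0.8},
]
path = walk(percepts, start=2)
```

Even in this toy, the resulting path hops around the index order, which hints at why the sequences are hard to read once several dimensions are in play.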

The motivation for limiting activation is a shortcoming of previous work, where the sequence of images in the dream was unintelligible. One aim of limiting propagation in this system is to make the sequence more readable, so that a viewer could see that the presented percepts are sequenced in terms of similar features. The problem is that with 7 dimensions, and activation jumping between features, the readability of the propagation is not readily apparent. Temporal sequences (time is considered just another feature) are the most easily read, and time is the dimension through which most propagation happens in the prototype (due to unique frame numbers).

Since I’ve been visualizing the propagations, it seems clear to me that they are hardly readable at all; even the pattern of activation in the visualization is quite complex, due to the jumps between dimensions. I’m considering dropping the prominent-feature mechanism, since the propagations are not readable anyhow. Perhaps propagations should be even more constrained to particular networks, but it is unclear how this constraint should be determined. Additionally, the NFF prototype did not include habituation (the more often a percept is activated, the less activation is possible in the future, recovering slowly over time), which already does a good job of limiting activation. One thought is to have activation propagate in all dimensions at once (which is much more biologically plausible); each feature would then have its own degree of activation, and the percept’s activation would be the sum of the activations of its constituent features. It’s unclear how to proceed, but Philippe has suggested some work on signal propagation I could look at, in particular Shultz and Lepper.

One of the central artistic questions is whether individual propagations should be readable at all. Are we aware of what associations between elements of dreams cause their juxtaposition? Why one image leads to another, or why one context may be combined with a particular character?
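Habituation combined with summing activation over all dimensions might look like the sketch below. The class name, the fatigue and recovery constants, and the linear update rules are all my assumptions, not anything from the prototype:

```python
class Percept:
    """Sketch of a percept whose total activation is the sum over its
    feature dimensions, capped by accumulated habituation."""

    def __init__(self):
        self.habituation = 0.0  # grows with use, reduces future activation

    def activate(self, per_dim_activation):
        """Sum the per-dimension activations, scaled down by habituation,
        then increase habituation (hypothetical linear fatigue)."""
        total = sum(per_dim_activation.values())
        effective = total * (1.0 - self.habituation)
        self.habituation = min(1.0, self.habituation + 0.1)
        return effective

    def rest(self, dt=1.0, recovery=0.02):
        """Habituation recovers slowly over time (hypothetical linear rate)."""
        self.habituation = max(0.0, self.habituation - recovery * dt)
```

With something like this, a frequently activated percept admits less and less activation regardless of which networks the signal arrives through, which is the limiting effect described above.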