In preparation for dreaming and mind-wandering states I was looking at measures of system dynamics that could be used to trigger mind-wandering. At first I tried counting the number of clusters that changed state (present or not present in the current frame), to see whether periods of high activity produced more state changes. There did seem to be some link between activity and the number of clusters changing state, but the measure also produced lots of spurious signals and showed activity even when images appeared static. Since this is quite abstract information, I tried going down a level and tracking the sum of the distances between clusters and newly segmented pixel regions. After some examination, that seems an even worse indicator of change in the input frames. So I'm going down another level and will simply sum the pixel values of the absolute difference between subsequent frames.
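This lowest-level measure can be sketched as a sum of per-pixel absolute differences. The following is a minimal NumPy illustration, not the actual DM3 code; the function name, greyscale frames, and dtypes are my assumptions:

```python
import numpy as np

def frame_activity(prev_frame, curr_frame):
    """Sum of per-pixel absolute differences between subsequent frames.

    Frames are assumed to be 8-bit greyscale arrays of equal shape;
    casting to a wider integer type avoids uint8 wrap-around.
    """
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return int(diff.sum())

# A static scene yields zero activity; any change yields a proportional sum.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 10  # one pixel brightens by 10
print(frame_activity(a, a))  # 0
print(frame_activity(a, b))  # 10
```

In an OpenCV pipeline the same quantity would come from `cv::absdiff` followed by a sum; the signal could then be thresholded to trigger the mind-wandering state.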
I was looking at the upcoming Prix Ars application process and realized that there was not really a place for my (generative) work. I looked over at the Interactive Art section to see how broadly it was defined, and found this:
“Jurors are looking forward to encountering innovative technological concepts blended with superbly effective design (usability).”
Now that it looks like segmentation and clustering are working, I'm starting to implement the system dynamics that will generate images in mind-wandering and dreaming. As a first step in this process I wanted to run the longest contiguous set of frames I have. Despite the persisting memory leak, the system was able to process the full set. Following is the debug output of the run:
I realized that after so much work I have not been able to see how clusters behave over time. So I finally took a day to write a first crack at a rudimentary OpenGL renderer for DM3. Up to this point I was just using OpenCV functions to dump percepts and state data to disk and then reconstructing images. In the final work I'll need to do this rendering anyhow, and it does seem to be working (i.e. working with the threaded application!). So I ran a 16,000 frame test where a perceptual frame was rendered for each input frame. The video runs at 30fps, but frames were captured at 1fps, so objects move quite quickly.
The increasing jitter is due to the clustering process, in which a finite number of clusters must reflect continuously shifting sense data, and to the temporal instability of segmented region edges. The weights for clustering are such that new clusters are 25% new stimulus and 75% previous stimulus. Looking at this video, it seems they should be even softer; next I'll try 15% new and 85% existing. In this test the max number of clusters is 2000. The perceptual rendering is the unprocessed stimulus image in the background, with the perceptual clusters drawn on top at 75% opacity. Due to the jitter, this seems a bit too strong (too much emphasis on constructive perception), and could be reduced to 50%.
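Both the weighted cluster update and the rendering overlay are convex combinations of old and new values. Here is a minimal sketch of that arithmetic; the function names and the use of NumPy are my assumptions, not the DM3 implementation:

```python
import numpy as np

def update_cluster(prev, stimulus, new_weight=0.25):
    """Blend new sense data into an existing cluster.

    With new_weight=0.25 a cluster is 25% new stimulus and 75% previous
    state; lowering it to 0.15 makes clusters softer (more stable over time).
    """
    return (1.0 - new_weight) * prev + new_weight * stimulus

def composite(background, percept, opacity=0.75):
    """Draw the perceptual layer over the raw stimulus at a given opacity;
    reducing opacity to 0.5 de-emphasizes the constructive layer."""
    return (1.0 - opacity) * background + opacity * percept

prev = np.array([100.0])
stim = np.array([200.0])
print(update_cluster(prev, stim))        # [125.]
print(update_cluster(prev, stim, 0.15))  # [115.]
```

The same blend applied per frame behaves like an exponential moving average, which is why softer weights should visibly damp the jitter.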