Final Steps Towards Norway

Posted: November 27, 2008 at 11:31 am

I’m finally happy with the look and behaviour of Dreaming Machine. I managed to make the dream representation much more interesting and dynamic by including each activated unit’s nearby neighbours. The result is a much more complex representation. I’ve also used the Gaussian alpha channel from the montages in this representation. One thing is clear: I need to move to a higher-resolution image when working with stills like this. Standard-definition video is just not good enough. This means getting a better camera (Alpha 350?) and a pan/tilt tripod head that will work with any camera.
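As a rough sketch of how such a Gaussian alpha channel can be built and applied with PIL and numpy (the function names and the sigma_frac falloff parameter here are my own assumptions, not the project’s code):

```python
# Sketch: give a memory image a Gaussian alpha falloff before pasting
# it into the dream representation. Assumes PIL and numpy; the names
# make_gaussian_alpha and paste_memory are hypothetical.
import numpy as np
from PIL import Image

def make_gaussian_alpha(width, height, sigma_frac=0.35):
    """Build an alpha channel that fades out from the image centre."""
    y, x = np.mgrid[0:height, 0:width]
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    sigma_x, sigma_y = width * sigma_frac, height * sigma_frac
    g = np.exp(-(((x - cx) ** 2) / (2 * sigma_x ** 2) +
                 ((y - cy) ** 2) / (2 * sigma_y ** 2)))
    return Image.fromarray((g * 255).astype(np.uint8), mode="L")

def paste_memory(canvas, memory, position):
    """Composite a memory onto the dream canvas with a Gaussian falloff."""
    memory = memory.convert("RGBA")
    memory.putalpha(make_gaussian_alpha(*memory.size))
    canvas.paste(memory, position, memory)  # third argument = alpha mask
```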

Here is a video of a single dream:

The corresponding memory field:

Dream montage

The visualization of the dream’s path:

dream.png


More Dreams

Posted: November 17, 2008 at 11:38 am

Here is a long (134-memory) dream from the system as it currently stands:

dream1.jpg

This is the trajectory of this particular dream: (Note that the trajectory folds back on itself several times, causing a few complex loops.)

dream1_plot.png

This is the SOM (memory field) at the time of this dream: (Note the region of tree memories in the lower left, corresponding to the start of the dream.)

montage1.jpg


Dreaming / Free-Association System

Posted: November 14, 2008 at 4:35 pm

I’ve started work on replacing MAM’s over-complex dynamic patching with a Python (pyx) external. The CA-like aspect of MAM is really better suited to an OOP system than to doing the work in PD. Below, after a rough sketch of the external’s structure, are the first of the system’s dreams.
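Assuming “pyx” refers to Thomas Grill’s py/pyext (which embeds Python in PD), the skeleton of such an external might look roughly like this; the stepping logic here is a placeholder, not the project’s actual code:

```python
# Skeleton of a PD external written in Python, assuming Thomas Grill's
# py/pyext. Loadable from a PD patch as [pyext dreamer dreamer].
import pyext

class dreamer(pyext._class):
    _inlets = 1   # bangs trigger one free-association step
    _outlets = 1  # emits the index of the next activated memory

    def __init__(self):
        self.current = 0  # currently activated memory unit

    def bang_1(self):
        # Placeholder step; the real version propagates activation to
        # the most similar memory, as described further below.
        self.current = (self.current + 1) % 900
        self._outlet(1, self.current)
```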

The first is an early version where loops were very likely (where the same memories are retrieved over and over again):

dream4-arrange-scale.jpg

Here is a dream generated by the current version of the system, which avoids these endless loops. This particular dream is quite long (30 activations) due to the similarity between (dark) memories. In this version the histograms of the stored images are analysed for similarity (using the same method as the SOM) in order to propagate activations to the most similar memory. The signal is degraded in proportion to the distance between memories. The more similar the images, the less the signal degrades and the longer the dream. Signal degradation is not visualized in these images.

dream5.jpg
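A minimal sketch of that propagation step, assuming the histograms are stored as rows of a numpy array; the Euclidean distance and the visited-set guard against revisiting memories are assumptions, not necessarily the project’s exact method:

```python
# Sketch of the free-association step described above: activation
# jumps to the most similar stored histogram, and the signal decays
# with the distance covered.
import numpy as np

def dream(histograms, start, signal=1.0, decay=5.0, threshold=0.05):
    """Return the sequence of memory indices visited until the signal dies."""
    path, visited, current = [start], {start}, start
    while signal > threshold:
        # Distance from the current memory's histogram to all others.
        dists = np.linalg.norm(histograms - histograms[current], axis=1)
        dists[list(visited)] = np.inf      # guard against endless loops
        nearest = int(np.argmin(dists))
        if not np.isfinite(dists[nearest]):
            break                          # every memory already visited
        signal -= decay * dists[nearest]   # degrade with distance
        path.append(nearest)
        visited.add(nearest)
        current = nearest
    return path
```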

This is the SOM used to generate these dreams. It was a test run over three days. Due to a glitch, the camera recorded when I was not expecting it to, which resulted in many dark images.

montage.jpg


High-Resolution Montage

Posted: November 6, 2008 at 10:20 am

I’ve written some Python code (using PIL) to create a high-res montage (in the style of the Memory Association Machine visualization) from the stored images. The 30×30 SOM produces a nearly 10,000 × 7,000 pixel image. Below are a crop showing the level of detail in the full-resolution version and a lower-res version of the whole map.

montage-composition1.jpg

montage-scale.jpg
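The montage step itself can be sketched like this; the 320×240 cell size is an assumption that matches the “nearly 10,000 × 7,000 pixel” figure above (30 × 320 = 9600, 30 × 240 = 7200):

```python
# Sketch of the high-res montage: paste each stored image into a
# 30x30 grid. The 320x240 frame size is an assumption.
from PIL import Image

GRID, CELL_W, CELL_H = 30, 320, 240

def build_montage(image_paths):
    """image_paths: dict mapping (row, col) -> filename of a stored image."""
    montage = Image.new("RGB", (GRID * CELL_W, GRID * CELL_H))
    for (row, col), path in image_paths.items():
        cell = Image.open(path).resize((CELL_W, CELL_H))
        montage.paste(cell, (col * CELL_W, row * CELL_H))
    return montage
```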


Tracing the Camera’s Path

Posted: November 4, 2008 at 9:14 am

ptz.png

This plot illustrates the path the camera takes to examine the visual context. The colours represent time: purple is older, yellow is newer. The zoom level is represented by the thickness of the lines: thicker means more zoomed in. The pan/tilt/zoom values are in Elmo camera units. Age is given as the iteration number at that moment.
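A plot like this can be sketched with matplotlib’s LineCollection; the column layout of the ptz array (pan, tilt, zoom per row) and the viridis colormap (purple to yellow) are assumptions:

```python
# Sketch of the path plot: one line segment per camera move, coloured
# by age, with line width proportional to zoom level.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

def plot_path(ptz):
    """ptz: (N, 3) array of (pan, tilt, zoom) rows, one per iteration."""
    points = ptz[:, :2].reshape(-1, 1, 2)
    segments = np.concatenate([points[:-1], points[1:]], axis=1)
    zoom = ptz[:-1, 2]
    lc = LineCollection(segments, cmap="viridis",
                        linewidths=1 + 4 * zoom / zoom.max())
    lc.set_array(np.arange(len(segments)))  # purple = old, yellow = new
    fig, ax = plt.subplots()
    ax.add_collection(lc)
    ax.autoscale()
    ax.set_xlabel("pan (camera units)")
    ax.set_ylabel("tilt (camera units)")
    plt.show()
```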


900 Units

Posted: November 3, 2008 at 1:27 pm

Now that I have switched to the moving camera, the project seems to be steadily improving. The weather was very interesting this weekend, a mix of sun and rain, which made for some beautiful subject matter for the moving camera. I increased the size of the network to 30×30. This is now too big a SOM to display using the MAM visualization method (the image below was generated with ‘montage’ from ImageMagick). The image does appear organized, but is still highly complex. I wonder how big a network would be needed for a non-folded SOM of this subject matter. I’ll have to make a U-matrix visualization to see how folded the feature maps actually are.
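The U-matrix itself is straightforward: for each unit, the mean distance between its weight vector and those of its grid neighbours, with high values marking cluster boundaries, i.e. folds. A sketch, assuming the weights are held in a 30×30×D numpy array:

```python
# Sketch of a U-matrix computation over a rectangular SOM grid.
import numpy as np

def u_matrix(weights):
    """weights: (rows, cols, D) array of unit weight vectors."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # Mean distance to the 4-connected grid neighbours.
            dists = [np.linalg.norm(weights[r, c] - weights[nr, nc])
                     for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                     if 0 <= nr < rows and 0 <= nc < cols]
            umat[r, c] = np.mean(dists)
    return umat
```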

I’m much happier with the aesthetic now. I’ve gone with a random-walk style of camera movement to give some continuity between subsequent frames. Here is the memory map from the system running over the weekend. This image results from constant neighbourhood and learning functions (1 and 10) and represents approximately 8 hours of stimulus. One memory slot, marked with a red ‘X’, was not associated with an image.

sunrain-montage.jpg
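For reference, a SOM update with constant parameters rather than the usual decaying schedules might look like this; reading “(1 and 10)” as a learning rate of 1 and a neighbourhood radius of 10 is an assumption, since the post doesn’t say which value is which:

```python
# Sketch of one SOM update step with a constant learning rate and a
# constant Gaussian neighbourhood radius (values assumed, see above).
import numpy as np

LEARNING_RATE, RADIUS = 1.0, 10.0

def som_update(weights, sample):
    """One update of a (rows, cols, D) weight array toward `sample`."""
    rows, cols, _ = weights.shape
    # Best-matching unit: the memory most similar to the new stimulus.
    dists = np.linalg.norm(weights - sample, axis=2)
    br, bc = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the BMU, with a constant radius.
    r, c = np.mgrid[0:rows, 0:cols]
    h = np.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2 * RADIUS ** 2))
    weights += LEARNING_RATE * h[..., None] * (sample - weights)
```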