Piksel Real Code

Here is the memory field of the installation (as of this morning) after running for two days.

piksel08-1-montage.jpg

This context appears extreme in terms of the SOM’s ability to organize it; it could use more units in the SOM. While talking to Wendy Mansilla I had the idea that it would be nice to generate these montage images during an exhibition so that they can be seen by the audience: generate them at full resolution, have them printed, and hang them in the gallery around the installation as the exhibition continues.

This is the longest dream path so far in the Piksel exhibition. It contains 45 memories. It is not the longest dream (that is a loop of 8 highly similar memories), but it activates the most unique memories since the installation started. It will be interesting to see how this changes as the installation continues.

long_dream.png

Another idea I had, while talking to Alex Norman, is a way of making the camera move without random operations. The camera will break the image up into five regions: the lower, upper, left and right edges, plus a larger centre region. For each of these regions a histogram is taken, and the edge histograms are compared to the centre histogram using the usual method. The edge with the largest difference from the centre selects the direction of movement. Just as the dreams now follow the structure of the memories, the camera will follow the structure of the visual context.
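Here is a minimal Python sketch of how that region comparison might look, assuming greyscale frames as numpy arrays; the edge fraction, bin count, and Euclidean distance are placeholders for whatever the patch will actually use:

```python
import numpy as np

def region_histogram(region, bins=64):
    """Normalized grey-level histogram of one region of the frame."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def choose_direction(frame, edge=0.25):
    """Pick the pan/tilt direction whose edge region differs most,
    by histogram distance, from the (larger) centre region."""
    h, w = frame.shape[:2]
    eh, ew = int(h * edge), int(w * edge)
    edges = {
        'up':    frame[:eh, :],
        'down':  frame[-eh:, :],
        'left':  frame[:, :ew],
        'right': frame[:, -ew:],
    }
    centre_hist = region_histogram(frame[eh:-eh, ew:-ew])
    distances = {name: np.linalg.norm(region_histogram(r) - centre_hist)
                 for name, r in edges.items()}
    return max(distances, key=distances.get)
```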

Piksel Installation

Here are some shots of the progress of installing “Dreaming Machine #1 (prototype)”. The installation went well. The folks at Lydgalleriet were great and this will be the first time I’ll get to see the system projected! Tomorrow will be just for fine-tuning and the opening will be at 7pm.

img_0131.JPG

img_0135.JPG

Final Steps Towards Norway

I’m finally happy with the look and behaviour of Dreaming Machine. I managed to make the dream representation much more interesting and dynamic by including each activated unit’s nearby neighbours. The result is a much more complex representation. I’ve also used the gaussian alpha channel from the montages in this representation. One thing that is clear is that I need to move to a higher resolution image when working with stills like this. Standard-definition video is just not good enough. This means getting a better camera (Alpha 350?) and a pan/tilt tripod head that will work with any camera.
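As a rough illustration of the gaussian alpha blending, here is a minimal PIL/numpy sketch; the radial mask shape and the sigma value are my assumptions, not the project’s actual compositing code:

```python
import numpy as np
from PIL import Image

def gaussian_alpha(size, sigma=0.4):
    """Radial gaussian alpha mask: opaque in the middle, fading at the edges."""
    w, h = size
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    d2 = ((x - cx) / (sigma * w)) ** 2 + ((y - cy) / (sigma * h)) ** 2
    return Image.fromarray((np.exp(-d2) * 255).astype('uint8'), 'L')

def paste_memory(canvas, memory_img, position):
    """Blend one memory image into the dream canvas through the gaussian mask."""
    canvas.paste(memory_img, position, gaussian_alpha(memory_img.size))
```

The activated unit and each of its neighbours would be pasted this way at their respective positions, so overlapping memories blend into one another.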

Here is a video of a single dream:

The corresponding memory field:

Dream montage

The visualization of the dream’s path:

dream.png

More Dreams

Here is a long (134-memory) dream from the system as it currently stands:

dream1.jpg

This is the trajectory of this particular dream: (Note that the trajectory folds back on itself a few times, causing a few complex loops.)

dream1_plot.png

This is the SOM (memory field) at the time of this dream: (Note the region of tree memories in the lower left corresponding to the start of the dream.)

montage1.jpg

Dreaming / Free-Association System

I’ve started work on replacing MAM’s over-complex dynamic patching with a python (pyx) external. The CA-like aspect of MAM is really better suited to an OOP system than doing the work in PD. Here are the first of the system’s dreams.

The first is an early version where loops were very likely (where the same memories are retrieved over and over again):

dream4-arrange-scale.jpg

Here is a dream generated by the current version of the system, which avoids these endless loops. This particular dream is quite long (30 activations), which is due to the similarity between (dark) memories. In this version the histograms of the stored images are analysed for similarity (using the same method as the SOM) in order to propagate activations to the most similar memory. The signal is degraded in proportion to the distance between memories. The more similar the images are, the longer the dream, due to less signal degradation. Signal degradation is not visualized in these images.

dream5.jpg
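For illustration, here is a minimal Python sketch of that kind of propagation. The stopping threshold, the visited-set loop avoidance, and the plain Euclidean histogram distance are my assumptions rather than the system’s exact behaviour:

```python
import numpy as np

def free_associate(memories, start, threshold=0.05):
    """Follow a dream path: repeatedly jump to the most similar unvisited
    memory, attenuating the signal by the histogram distance at each step.
    `memories` is a list of normalized histograms, one per stored image."""
    path, visited = [start], {start}
    signal, current = 1.0, start
    while signal > threshold:
        # Distance from the current memory to every other stored memory.
        dists = [np.linalg.norm(memories[current] - m) if i not in visited
                 else np.inf for i, m in enumerate(memories)]
        nearest = int(np.argmin(dists))
        if not np.isfinite(dists[nearest]):
            break                        # every memory has been visited
        signal -= dists[nearest]         # degrade proportional to distance
        current = nearest
        path.append(current)
        visited.add(current)
    return path
```

Similar (e.g. uniformly dark) memories sit close together in histogram space, so the signal degrades slowly and the dream runs long, which matches the behaviour described above.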

This is the SOM used to generate these dreams. It was a test run over three days. Due to a glitch, the camera recorded while I was not expecting it to, which resulted in many dark images.

montage.jpg

High-Resolution Montage

I’ve written some python code (using PIL) to create a high-res montage (in the style of the Memory Association Machine visualization) from the stored images. The 30×30 SOM produces an image of nearly 10,000 by 7,000 pixels. Here are a crop showing the level of detail in the full-resolution version and a lower-resolution version of the whole map.

montage-composition1.jpg

montage-scale.jpg
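For reference, a minimal sketch of this kind of PIL montage; the 320×240 cell size, the grid-of-file-paths input, and leaving empty units black are assumptions, not the project’s actual code:

```python
from PIL import Image

def build_montage(grid, cell_size=(320, 240)):
    """Tile the SOM's stored images into one large montage.
    `grid` is a 2D list of image file paths, one per SOM unit (row-major);
    None marks units with no associated memory."""
    rows, cols = len(grid), len(grid[0])
    cw, ch = cell_size
    montage = Image.new('RGB', (cols * cw, rows * ch))
    for r, row in enumerate(grid):
        for c, path in enumerate(row):
            if path is None:
                continue                      # leave empty units black
            tile = Image.open(path).resize(cell_size)
            montage.paste(tile, (c * cw, r * ch))
    return montage

# build_montage(grid).save('montage.jpg', quality=90)
```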

Tracing the Camera’s Path

ptz.png

This plot illustrates the path the camera takes as it examines the visual context. The colours represent time: purple is older, yellow is newer. The zoom level is represented by the thickness of the lines; thicker is more zoomed in. The pan and tilt values are in elmo camera units, and age is specified as the iteration number at that moment in time.
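A rough matplotlib sketch of how such a plot could be produced; the viridis colormap and the line-width scaling are stand-ins for whatever the actual plotting code does:

```python
import matplotlib.pyplot as plt

def plot_camera_path(pan, tilt, zoom, out='ptz.png'):
    """Trace the PTZ path: segment colour encodes age (purple = older,
    yellow = newer) and segment width encodes zoom level."""
    n = len(pan)
    cmap = plt.get_cmap('viridis')            # runs purple -> yellow
    max_zoom = max(zoom) or 1
    for i in range(n - 1):
        plt.plot(pan[i:i + 2], tilt[i:i + 2],
                 color=cmap(i / max(n - 2, 1)),
                 linewidth=1 + 4 * zoom[i] / max_zoom)
    plt.xlabel('pan (elmo units)')
    plt.ylabel('tilt (elmo units)')
    plt.savefig(out)
```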

900 Units

Now that I have switched to the moving camera, the project seems to be steadily improving. The weather was very interesting this weekend, a mix of sun and rain, which made for some beautiful subject matter for the moving camera. I increased the size of the network to 30×30. This is now too big a SOM to display using the MAM visualization method (the image below was generated with ‘montage’ from imagemagick). The image does appear organized, but is still highly complex. I wonder how big a network would be needed for a non-folded SOM of this subject matter. I have to make a u-matrix visualization to see how folded the feature maps actually are.
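For reference, a u-matrix can be computed directly from the SOM’s weight vectors; this is a minimal numpy sketch assuming the 30×30 grid of weights is available as an array:

```python
import numpy as np

def u_matrix(weights):
    """For each SOM unit, the mean distance between its weight vector and
    those of its 4-connected neighbours. High values mark boundaries
    between clusters, i.e. folds in the feature map."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dists.append(np.linalg.norm(weights[r, c] - weights[nr, nc]))
            umat[r, c] = np.mean(dists)
    return umat
```

Rendering `u_matrix(weights)` as a greyscale image would show how folded the map is: a smooth, gradually varying image suggests an unfolded map, while sharp ridges indicate folds.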

I’m much happier with the aesthetic now. I’ve gone with a random-walk style of camera movement to give some continuity between subsequent frames. Here is the memory map from the system running over the weekend. This image results from constant neighbourhood and learning functions (1 and 10) and represents approximately 8 hours of stimulus. One memory slot was not associated with an image (marked with a red ‘X’).

sunrain-montage.jpg

The Disappointing Static Camera

I merged the fixes for the histogram method back into the temporal patches. After running the patch overnight it was clear that the static camera is probably not going to work out well. There is just not enough variation in the environment to keep things interesting. The additional problem of the extra overhead (from the high number of sensors) means that an image can only be captured every 2 s or so. An approach to revisit would be to use reference-frame subtraction to feed a presence image into the SOM. I think part of the lack of interest (and organization) in the raw pixel method is that, with a static camera, the vast majority of the pixels are the same.

I’ll move the histogram patches into trunk and start working with a moving camera. The issue with a moving camera and the dream aesthetic is the relation between one image and another. A way of approaching the smoothness of a dream could be a slow drunk-walk random movement. That way a portion of the last image would remain in the next image, giving some consistency (or at least slow change) in both the images and the histograms. I should store the sequence of the images captured so that a free association could happen both through similarity (the SOM) and through time.
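A minimal sketch of such a drunk-walk step; the step size and the pan/tilt ranges are illustrative guesses, not actual elmo values:

```python
import random

def drunk_walk(pan, tilt, step=5, pan_range=(-800, 800), tilt_range=(-400, 400)):
    """One step of a slow random walk in pan/tilt space: small random offsets
    keep most of the previous frame visible in the next one."""
    pan = min(max(pan + random.randint(-step, step), pan_range[0]), pan_range[1])
    tilt = min(max(tilt + random.randint(-step, step), tilt_range[0]), tilt_range[1])
    return pan, tilt
```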

These are the results of training overnight using the raw pixel method. There appear to be only two clusters of images; the presence of cars and such does not cluster. (White blocks are units not associated with inputs.)

static_camera_raw_pixels.jpg

Histograms

It has been some time since my last post. I was trapped by a number of technical issues that meant I was not even sure my SOM was working properly. Through the debugging process I created two variations of the current system. Both use temporal timing; that is, the SOM is not triggered by the motion analysis but runs at a fixed interval. This was done to simplify the patch as much as possible. The second variation used a histogram in place of the usual pix_dump method (feeding the raw pixel data to the SOM).

I have learned two things. The first is that the likely cause of the lack of organization in previous work was timing issues; that is the first thing I will go back and fix in the other code branches. The second is that the idea of using a fixed camera may be problematic. In my experiments I was able to make a working system using the pix_histo object and passing the histogram to the SOM, rather than raw pixels. This works surprisingly well for a dynamic and moving context (that is, it works best if the camera is moving, not static). Of course when using the histogram the system is no longer using the raw pixel data, and therefore not making a pixel-by-pixel comparison. In other words, the pixel-by-pixel method is most appropriate for the static camera, while the histogram method may be more appropriate for the moving camera. It is clear that the histogram method does not work very well with a static camera.
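To make the two input representations concrete, here is a minimal numpy sketch of a histogram feature (in the spirit of pix_histo) alongside the best-matching-unit lookup the SOM performs; the bin count and the Euclidean distance are assumptions:

```python
import numpy as np

def histogram_feature(frame, bins=64):
    """pix_histo-style input: a normalized histogram instead of raw pixels."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def best_matching_unit(weights, feature):
    """Index of the SOM unit whose weight vector is closest to the input,
    whichever representation (raw pixels or histogram) is used."""
    flat = weights.reshape(-1, weights.shape[-1])
    return int(np.argmin(np.linalg.norm(flat - feature, axis=1)))
```

With a static camera the raw pixel vectors differ only in the few regions where something moves, so most inputs map to the same few units; the histogram throws away the spatial layout entirely, which only seems to help once the camera itself is moving.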

Histogram method with static camera:

result00000.jpg

Histogram method using a moving camera:

result000001.jpg

Man Rolling Plastic

I’ve just started on the first stages of integrating some of the future work from the thesis into MAM, heading towards the “Dreaming Machine” project. At this point I’m working on using frame subtraction to record changes in a scene captured by a fixed camera. There is no SOM used at this stage.

I’ve included a number of visualization methods based on single buffering. The first is a normal long exposure where the images build up on top of one another with high transparency, showing just a subtle shift of the background. Storing all images in pix_buffers means that the visualization can be decoupled from the processing; there can be many different and simultaneous visualizations of the data now, and the idea is that the system would move between them. The following images show another visualization method where the difference between the reference and current frames is stored along with each frame. These differences are used as alpha channels so that each time-step is less transparent:

Man Rolling Plastic Wrap

Man Rolling Plastic Wrap
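A minimal PIL/numpy sketch of the difference-as-alpha idea described above; the actual patch does this with pix_buffers in PD, so this is only an approximation:

```python
import numpy as np
from PIL import Image

def composite_with_difference(canvas, reference, frame):
    """Blend the current frame onto the canvas, using its absolute difference
    from the reference frame as the alpha channel: pixels that changed show
    up strongly, while the static background stays nearly transparent."""
    ref = np.asarray(reference.convert('L'), dtype=int)
    cur = np.asarray(frame.convert('L'), dtype=int)
    alpha = Image.fromarray(np.abs(cur - ref).astype('uint8'), 'L')
    canvas.paste(frame, (0, 0), alpha)
    return canvas
```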

I’ll be showing an early version of “Dreaming Machine” at the Piksel festival in Norway this December.