Towards a self-motivated camera

Before starting to implement the idea of having the camera control its own path, I need to be able to verify what the camera is looking at. To that end I have determined a mapping between unit ID and pan/tilt location, in order to visualize the field of the camera. Following is a montage of 56×11 640×480 images arranged by their position in pan/tilt space. Future plots of camera paths will be superimposed on this field.


The images on the far left edge are blurry because the camera was not given sufficient time, between captures, to move from the far right to the far left.
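The montage above can be sketched in code. This is a minimal illustrative version, not the installation's actual patch: the `(pan_index, tilt_index)` keys and thumbnail sizes are assumptions standing in for the real unit-ID to pan/tilt mapping.

```python
import numpy as np

def montage(captures, grid_w=56, grid_h=11, thumb_w=64, thumb_h=48):
    """Arrange captures into one image by their pan/tilt grid position.

    captures: dict mapping (pan_index, tilt_index) -> thumbnail array of
    shape (thumb_h, thumb_w, 3). The key format is hypothetical; it stands
    in for the unit-ID -> pan/tilt mapping described above. Positions with
    no capture stay black.
    """
    canvas = np.zeros((grid_h * thumb_h, grid_w * thumb_w, 3), dtype=np.uint8)
    for (px, ty), thumb in captures.items():
        y, x = ty * thumb_h, px * thumb_w
        canvas[y:y + thumb_h, x:x + thumb_w] = thumb
    return canvas
```

Camera paths in pan/tilt space could later be drawn directly onto the returned canvas, since grid position maps linearly to pixel position.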

SOM Structure

Here is a U-Matrix representation of the memory map resulting from the Piksel installation:

[dm1-umatrix.png]

The red Xs are units that were never associated with images. Note that, due to a bug, the histograms were truncated to 100 elements while being stored, so this U-Matrix only shows the similarity of the first 100 elements of the R-channel histogram. A U-Matrix is normally calculated as the mean of the sum of differences between each unit and its neighbours; in this case the measure of similarity is the sum of the differences divided by the number of neighbours. The large number of dark units means that this SOM was highly folded. Compare this to the U-Matrix and memory field below:



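The U-Matrix measure described above (sum of differences to each grid neighbour, divided by the number of neighbours) can be sketched as follows. The array layout is an assumption: one weight vector per unit on a rectangular grid.

```python
import numpy as np

def u_matrix(weights):
    """Compute a U-Matrix for a 2D SOM.

    weights: array of shape (rows, cols, dim), one weight vector per unit.
    Each output cell is the sum of (absolute) differences between that
    unit's weight vector and its up-to-4 grid neighbours, divided by the
    number of neighbours actually present -- edge units have fewer.
    """
    rows, cols, _ = weights.shape
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dists.append(np.abs(weights[r, c] - weights[nr, nc]).sum())
            u[r, c] = sum(dists) / len(dists)
    return u
```

Dark regions in the rendered matrix correspond to large neighbour distances, i.e. places where the map folds sharply.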
On Machine Experience and Emotion

The following is an excerpt from a discussion with David Jhave Johnston.

david jhave johnston wrote:

one thought i had is that dreams are so personal so energetic street footage doesn’t convey the emotivity. thats a tough ledge to get around.

I responded:

How can you compare a person's lifetime of experiences with an installation's life of 10 days in a shop window? Things could/would only really get interesting with a huge network (I upgraded my machines to 4GB, and am using a different method to make the patch as scalable as possible), so we'll see how far I can push it. There is also the aspect of storing components of stimulus, rather than whole images, which could allow the construction of imaginary memories; I have no idea how that would work currently. I have spoken briefly to Gabora about the emotional aspect. I'm playing with the (still Piagetian) idea that emotions could all be reduced to low-level emotions related to biological state.

If all knowledge is based on sensor data (as in the Piagetian project) then why, Gabora asked, would a gun associate closely with a knife? They have little to do with one another as far as sense-data is concerned. My answer was that perhaps they are bound not just by their sensory impressions, but also (and perhaps more importantly) by their effect on our internal state: they associate with one another because they cause a similar emotional state.

The problem with that line of enquiry (for my PhD) is that it would require a model of those low-level emotions. Interestingly, the character of those emotions would just be another channel of sense data fed into the SOM. I have still not figured out how to deal with multiple channels of sense data. The current idea is to have a SOM for each channel, cross-linked by temporal correlation. A free association could then move through the space of similar images, cross into a free association of sound where the sound matches the image, then cross into emotion… Ideally this should be an nD SOM, with 2 dimensions for every sensor channel. I can't get my head around this part.
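One way the temporal cross-linking could work is simply to count co-activations: whenever two per-channel SOMs pick their best-matching units at the same moment, strengthen a link between that pair. This is a speculative sketch of that idea (the class and method names are mine, not part of any existing patch):

```python
from collections import defaultdict

class CrossLinks:
    """Count co-activations between units of two per-channel SOMs.

    If image-SOM unit i and sound-SOM unit j are each their map's best
    match at the same moment, their link count grows. A free association
    wandering the image SOM can later jump to the sound unit most
    strongly linked to its current position.
    """
    def __init__(self):
        self.counts = defaultdict(int)

    def observe(self, image_unit, sound_unit):
        # called once per time step with the two best-matching units
        self.counts[(image_unit, sound_unit)] += 1

    def strongest(self, image_unit):
        # sound unit most often co-activated with this image unit
        linked = {j: n for (i, j), n in self.counts.items() if i == image_unit}
        return max(linked, key=linked.get) if linked else None
```

The same counting scheme would extend to an emotion channel: a third SOM whose co-activations with the others are tallied in additional `CrossLinks` tables.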

A measure of stimulation would also be very interesting, where the cross-link happens when the stimulus is very strong in the SOM. (A strong stimulus could be as simple as a very close similarity between the activated unit and the input stimulus.)
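The parenthetical suggestion above — strength as closeness between the activated unit and the input — could look like this. The `1/(1+d)` scaling is one illustrative choice of mine, not a fixed definition:

```python
import numpy as np

def stimulus_strength(som_weights, stimulus):
    """Strength of a stimulus: inverse distance to the best-matching unit.

    som_weights: array of shape (rows, cols, dim). A near-zero distance
    to the best-matching unit (the input closely matches a stored memory)
    gives a strength near 1; a poor match approaches 0. The 1/(1+d)
    scaling is an assumption for illustration.
    """
    flat = som_weights.reshape(-1, som_weights.shape[-1])
    d = np.abs(flat - stimulus).sum(axis=1)  # sum-of-differences, as in the U-Matrix
    return 1.0 / (1.0 + d.min())
```

A cross-link between channels could then be created only when this value exceeds some threshold.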

Distribution of Memories over Time

Here is a plot of how many memories were captured at which time. The X-axis shows the dates of the installation, the Y-axis the histogram counts. The peak on the first day of the installation is due to the first 50% of memories being collected at an accelerated rate.


Interestingly, there are also peaks on the Friday and Saturday after the opening (on Thursday the 11th). It will be interesting to see whether these trends persist in a longer installation.
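The per-day counts behind such a plot can be computed directly from capture timestamps. The ISO-string storage format here is an assumption; the installation's actual logs may differ.

```python
from collections import Counter
from datetime import datetime

def memories_per_day(timestamps):
    """Histogram of capture counts per calendar day.

    timestamps: iterable of ISO date-time strings, e.g.
    "2000-01-01T10:00:00" (a hypothetical storage format).
    Returns a Counter mapping each date to its capture count.
    """
    return Counter(datetime.fromisoformat(t).date() for t in timestamps)
```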