I merged the fixes for the histogram method back into the temporal patches. After running the patch overnight it was clear that the static camera is probably not going to work out: there is just not enough variation in the environment to keep things interesting. The additional problem of the overhead from the high number of sensors means that an image can only be captured every 2 s or so. An approach to revisit would be to use reference-frame subtraction to feed a presence image into the SOM. I think part of the lack of interest (and organization) in the raw-pixel method is that, with a static camera, the vast majority of the pixels stay the same.
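The reference-frame subtraction idea could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes grayscale frames as NumPy arrays and a hypothetical fixed threshold; a real version would need to cope with lighting drift (e.g. by slowly updating the reference).

```python
import numpy as np

def presence_image(frame, reference, threshold=25):
    """Absolute difference against a fixed reference frame,
    thresholded into a binary presence mask (uint8, 0 or 255)."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

# Toy frames: a static background with a bright "object" in the new frame.
reference = np.zeros((4, 4), dtype=np.uint8)
frame = reference.copy()
frame[1:3, 1:3] = 200  # something entered the scene

mask = presence_image(frame, reference)
```

Feeding the mask (rather than raw pixels) into the SOM would mean the input is dominated by what changed, not by the unchanging background.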
I’ll move the histogram patches into trunk and start working with a moving camera. The issue with a moving camera and the dream aesthetic is the relation between one image and the next. One way to approach the smoothness of a dream would be a slow drunk-walk random movement: a portion of the last image would appear in the next, giving some consistency (or at least slow change) in both the images and the histograms. I should also store the sequence of captured images so that free association can happen both through similarity (the SOM) and through time.
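The drunk walk could look something like this. It is only a sketch under assumed parameters (a hypothetical pan angle in degrees, a small step size, and clamping to an arbitrary servo range); the point is that each step is small enough that consecutive views overlap.

```python
import random

def drunk_walk(start=0.0, step=2.0, lo=-45.0, hi=45.0, n=10, seed=1):
    """Slow bounded random walk over a pan angle (degrees).
    Small steps keep consecutive views overlapping."""
    rng = random.Random(seed)
    angle = start
    path = []
    for _ in range(n):
        angle += rng.uniform(-step, step)   # small random drift
        angle = max(lo, min(hi, angle))     # clamp to servo range
        path.append(angle)
    return path
```

A second axis (tilt) could use the same walk independently, and the step size would tune how dream-slow the drift feels.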
Here are the results of training with the raw-pixel method overnight. There appear to be only two clusters of images; the presence of cars and the like does not form clusters. (White blocks are units not associated with any inputs.)