Sequence of Colour Fields (Second Try)

Posted: April 21, 2016 at 9:38 am

I realized that part of why the SOMs in the previous sequences are inconsistent over time is that a time-seeded random number generator is used to rearrange the order of the segments (inputs) for each SOM, which adds significant random variation. I first tried using serial training in ANNetGPGPU, but found that it is significantly slower than random training (serial training time: 745.579 s; random training time: 12.1 s). I also rewrote the code so that each network actually uses the previous network as a starting point, rather than starting from the original training data for each neighbourhood size. The results, a selection of which follows, certainly have more cohesion, but using the previous network reduces some of the colour variation.
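In outline, the chaining looks something like this. This is a minimal sketch only; the Network type and its train method are stand-ins for illustration, not ANNetGPGPU’s actual API:

```cpp
#include <vector>

// Stand-in types for illustration; this is not ANNetGPGPU's API.
struct Network {
    std::vector<float> weights;
    void train(float neighbourhoodSize) {
        // one training pass over the segment inputs at this size
    }
};

// Each pass continues from the previous network's weights instead of
// restarting from the original training data, which is what gives
// the resulting frames their cohesion.
std::vector<Network> trainSequence(Network net,
                                   const std::vector<float>& sizes) {
    std::vector<Network> snapshots;
    for (float size : sizes) {
        net.train(size);          // start from the previous weights
        snapshots.push_back(net); // one snapshot per neighbourhood size
    }
    return snapshots;
}
```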

[Image sequence: SOM weights from SOMWeights-B-0000066 (top) down to SOMWeights-B-0000001 (bottom), shown at intervals of five frames.]

You can actually see how the individual areas of colour gravitate towards each other and group into larger areas as the SOM organizes areas by similarity. The larger the neighbourhood, the larger the horizon for seeing similar patterns. With small neighbourhoods, there are multiple regions of similar colour (and width / height) because the similarity ‘search’ is limited in area. For example, trace the red colours (just left and down from the tower at the centre of the top image) from top to bottom. I just realized why the sequence appears non-linear, with changes that are very slight at the top and that explode suddenly at the bottom: I’m using a divisor to shrink the neighbourhood size! The huge difference between the bottom image and the one immediately above it is because the neighbourhood size doubles between them. What I want is a linear increase in neighbourhood size, which means using subtraction rather than division.
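To see why division paces things so unevenly, here is a small comparison of the two schedules. The largest size, the frame count, and the exact divisor schedule are assumed values for illustration (inferred from the doubling described above): dividing by the frame index gives a hyperbolic curve that barely moves at high indices and doubles at the very end, while subtracting a constant changes the size by the same amount every frame.

```cpp
#include <cstdio>

int main() {
    const float largest = 64.0f; // assumed largest neighbourhood size
    const int   frames  = 13;    // assumed frame count

    for (int i = 1; i <= frames; ++i) {
        // Divisor schedule: hyperbolic, so consecutive sizes barely
        // differ at high i but double between i = 2 and i = 1.
        float divided = largest / i;
        // Subtractive schedule: same endpoints, but the size changes
        // by a constant amount from frame to frame.
        float subtracted = largest - (largest - 1.0f) * (i - 1) / (frames - 1);
        std::printf("frame %2d  divided: %7.3f  subtracted: %7.3f\n",
                    i, divided, subtracted);
    }
    return 0;
}
```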

The purpose of all this is that I’m planning to use one row from each of these networks and combine them into a single visualization that is more concrete and resembles the original pano at the bottom and increasingly turns into an abstract colour field towards the top. I’m so pleased by these colour fields that I’m even considering not including the corresponding segments in the final composition at all. As I’m generating whole networks for each neighbourhood size, I can also generate a video where each neighbourhood size is visualized in one frame. The above sequence comes from only 130 frames in total (~4 seconds), which makes for a very short video.
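A rough sketch of that composition step, assuming each network’s weights have been exported as an RGB grid (the types and the PPM output are my own scaffolding, not part of the actual pipeline): one row is taken from each network and the rows are stacked, with the largest neighbourhood at the top.

```cpp
#include <cstdio>
#include <vector>

struct RGB { unsigned char r, g, b; };
using Grid = std::vector<std::vector<RGB>>; // weights as [row][col]

// Stack row `rowIndex` from each network into one image and write it
// as a binary PPM. `nets` is assumed ordered from smallest
// neighbourhood (most concrete) to largest (most abstract), so the
// concrete end lands at the bottom of the composite.
void writeComposite(const std::vector<Grid>& nets, size_t rowIndex,
                    const char* path) {
    const size_t width = nets.front()[rowIndex].size();
    FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "P6\n%zu %zu\n255\n", width, nets.size());
    for (auto it = nets.rbegin(); it != nets.rend(); ++it) {
        for (const RGB& px : (*it)[rowIndex]) {
            std::fputc(px.r, f);
            std::fputc(px.g, f);
            std::fputc(px.b, f);
        }
    }
    std::fclose(f);
}
```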

The reason why I’ve used this inefficient and elaborate method is twofold: (1) I’m generating source and binary files for each neighbourhood size because I could not find a way of changing the neighbourhood size other than through a macro (meaning the size cannot be supplied to the program via arguments). (2) I can’t generate a single network where the neighbourhood size changes with the Y position of the cell, because ANNetGPGPU does not (yet) support such a scenario.
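Point (1) amounts to baking the size in at compile time and building one binary per value, roughly like this (NEIGHBOUR_SIZE is a placeholder name for the macro, and the build line is just an example):

```cpp
#include <cstdio>

// Placeholder macro name; each binary is compiled with a different
// value, e.g.:  g++ -DNEIGHBOUR_SIZE=32 train.cpp -o train_32
#ifndef NEIGHBOUR_SIZE
#define NEIGHBOUR_SIZE 1 // fallback if no -D flag is supplied
#endif

int main() {
    std::printf("training with neighbourhood size %d\n", NEIGHBOUR_SIZE);
    // ... load the previous network, train, save the weights ...
    return 0;
}
```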