Multiple Neighbourhood Sizes in a Single Map

Posted: April 28, 2016 at 4:13 pm

Thanks to Daniel Frenzel, ANNetGPGPU now supports setting different neighbourhood sizes in a single network. This means I will no longer have to generate a different source and data file for each neighbourhood size. Following is an image visualizing the weights (using the old rectangle renderer for performance reasons).
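Conceptually, a per-cell neighbourhood size just means each map cell falls off with its own sigma in the Gaussian update, rather than one global value. Here is a minimal NumPy sketch of that idea (all names are illustrative, not ANNetGPGPU's actual API):

```python
import numpy as np

def train_step(weights, sigmas, sample, lr=0.1):
    """One SOM update. weights: (H, W, D) codebook; sigmas: (H, W) per-cell
    neighbourhood sizes; sample: (D,) input vector."""
    h, w, d = weights.shape
    # Best-matching unit (BMU): cell whose weight vector is closest to the sample.
    dists = np.linalg.norm(weights - sample, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), (h, w))
    # Squared grid distance of every cell to the BMU.
    ys, xs = np.mgrid[0:h, 0:w]
    grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
    # Gaussian neighbourhood, but each cell decays with its *own* sigma.
    influence = np.exp(-grid_d2 / (2.0 * sigmas ** 2))
    weights += lr * influence[..., None] * (sample - weights)
    return weights

# Example: a 4x4 map where only the top row uses the maximum neighbourhood size.
rng = np.random.default_rng(0)
weights = rng.random((4, 4, 3))
sigmas = np.ones((4, 4))
sigmas[0, :] = 3.0  # top row: large neighbourhood, strong imposition of order
weights = train_step(weights, sigmas, np.array([0.5, 0.5, 0.5]))
```

With a single sigma array per map, the rest of the training loop stays unchanged; only the falloff term varies cell by cell.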


The map does not fill the frame because, as seen in the videos of the previous sequences, the seeded data (the original image) expand gradually from their initial positions. In this case, only the very top of the image uses the maximum neighbourhood size, which slows expansion significantly. The white space is due to a lack of segmented regions, caused by insufficient contrast. I'll attempt to solve this in the final image by (a) shooting on a day with both clouds and blue sky (not only overcast) and (b) bracketing the exposure to get more contrast in the sky.

While working on my Gaussian renderer I was reminded of a bug in my segmentation code: since the code was borrowed from Watching (Blade Runner), it normalizes width and height according to the maximum size of an HD video frame, which is clearly inappropriate in this high-resolution context. The result is that the width and height features are significantly smaller than the HSV features. I trained a SOM in which any cells that do not correspond to a segmented region (white space) are filled with random values. As these end up with relatively larger width and height features, their final size dominates the visualization:
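The fix amounts to normalizing the spatial features by the actual image dimensions rather than a fixed HD frame, so that they span the same [0, 1] range as the HSV features. A small sketch (function and parameter names are hypothetical):

```python
def make_features(seg_w, seg_h, hsv, img_w, img_h):
    """Build a feature vector for a segment: normalized width/height + HSV.
    Dividing by the actual image dimensions keeps the spatial features in
    [0, 1]. The buggy version divided by a fixed HD frame (1920x1080), which
    shrinks width/height relative to HSV for high-resolution images."""
    return [seg_w / img_w, seg_h / img_h] + list(hsv)

# A 4000x2500 segment in an 8000x5000 image maps to spatial features of 0.5.
feat = make_features(4000, 2500, (0.6, 0.8, 0.9), img_w=8000, img_h=5000)
```

With commensurate feature ranges, no single feature group can dominate the distance computation (or the rendered size) by scale alone.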

By setting only the cells in the top row to random values, I ended up with the following:


This has interesting conceptual and philosophical implications. The central premise of the work is that there is a continuity between sensory processes that measure the world and mental processes that project and extend structure. As the size of the neighbourhood increases, there is an increasing imposition of structure on the network. The effect of this imposition is an increasing reorganization of structure in the image according to the similarity of its components, increasingly throwing away spatial structure in favour of implicit structure.

When the top row is seeded with random values, these impact the final composition of the work, and yet are causally divorced from the structure of the image. Are the internal structures we impose on reality arbitrary (random), or are they informed by our experience? Rather than using random values, another option is to go through each column in the map and fill the top row with values that appear in that column, e.g. a copy of the closest segment, or the average of all regions in that column. In thinking about this I came up with a possible title for the work: As our gaze peers into the distance, imagination takes over reality.
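The "average of all regions in that column" option is easy to sketch: for each column, seed the top-row cell with the mean feature vector of that column's segmented cells. A minimal version, assuming a boolean mask marks which cells correspond to real segments (names are illustrative):

```python
import numpy as np

def seed_top_row_with_column_means(grid, seeded):
    """grid: (H, W, D) feature map; seeded: (H, W) bool mask of cells that
    correspond to segmented regions. Fills row 0 with per-column means of
    the seeded cells; columns with no seeded cells are left unchanged."""
    h, w, d = grid.shape
    out = grid.copy()
    for x in range(w):
        col_mask = seeded[:, x]
        if col_mask.any():
            out[0, x] = grid[col_mask, x].mean(axis=0)
    return out

# Tiny example: column 0 has two seeded cells, column 1 has none.
grid = np.zeros((3, 2, 1))
grid[1, 0] = 2.0
grid[2, 0] = 4.0
seeded = np.array([[False, False],
                   [True,  False],
                   [True,  False]])
top = seed_top_row_with_column_means(grid, seeded)
```

The "copy of the closest segment" variant would replace the mean with the nearest seeded cell below; either way the top row is informed by its column rather than by chance.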

All in all, these new results look quite nice and are much more consistent with the initial colour fields. They lack the instability and horizontality of the sequences, which is too bad because that quality reminded me of water and was consistent with the theme of the Coastal City.