More Colour Fields resulting from Different Neighbourhood Sizes

Posted: April 18, 2016 at 4:29 pm

The following images are renderings of the SOM structure, trained and visualized using the same methods as previously posted. The only difference here is that much smaller neighbourhood sizes are used (top: default/150; bottom: default/50).

[Images: still-proxy-pano-edit-genSOMVizCV renderings (5_500 segments, 10,000 SOM iterations, SOMScale 36) with sigma 150 (top) and sigma 50 (bottom).]

As each segment is extracted from the original image and used to initialize the SOM, the structure of the SOM matches the structure of the original panorama more closely as the neighbourhood decreases in size. In these images we clearly see the abstraction of the original. Note that although these images appear to be blurry versions of the original, the larger the neighbourhood, the more the segments are rearranged according to their similarity rather than their original positions.
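The role the neighbourhood size plays above can be sketched with a toy SOM trainer. This is a minimal illustration, not ANNetGPGPU's implementation; the function names, decay schedule, and parameters are my own assumptions:

```python
import numpy as np

def train_som(samples, grid_w, grid_h, iters, sigma0, lr0=0.5, seed=0):
    """Toy SOM on RGB samples. sigma0 sets the initial neighbourhood
    radius; a large sigma spreads each update widely, so cells are
    arranged by colour similarity, while a small sigma keeps updates
    local, preserving more of the initial (image-derived) layout."""
    rng = np.random.default_rng(seed)
    # one RGB weight vector per grid cell
    w = rng.random((grid_h, grid_w, 3))
    # grid coordinates, used to measure distance to the best-matching unit
    yy, xx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(iters):
        frac = t / iters
        sigma = sigma0 * np.exp(-3.0 * frac)  # shrinking neighbourhood
        lr = lr0 * np.exp(-3.0 * frac)        # decaying learning rate
        x = samples[rng.integers(len(samples))]
        # best-matching unit: cell whose weight is closest to the sample
        d = np.sum((w - x) ** 2, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood centred on the BMU
        g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
        w += lr * g[..., None] * (x - w)
    return w
```

Rendering the returned grid as colour blocks (with initial weights taken from image segments rather than random values) would approximate the renderings shown here.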

Rather than using rectangles, it would be interesting to explore drawing each block using a Gaussianoid function; this would reflect the use of the Gaussian in the underlying SOM algorithm. I next plan to train a series of networks, each with a slightly different neighbourhood size. I could then create an animation showing a smooth transition between a somewhat readable photographic image and a total abstraction.

ANNetGPGPU has been updated for the current CUDA release, and I've run a test of the SOM calculation on the GPU. Training on the CPU took 5844.76 seconds, while the same job on the GPU took only 28.1313 seconds, roughly a 200x increase in performance.
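The speedup figure follows directly from the two timings reported above:

```python
# CPU and GPU training times reported for the same SOM job, in seconds
cpu_s = 5844.76
gpu_s = 28.1313

# speedup is simply the ratio of the two wall-clock times
speedup = cpu_s / gpu_s
print(f"{speedup:.1f}x")
```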