Large Percepts Revisited and Template Matching

Posted: March 2, 2016 at 4:48 pm

In my previous post I claimed that the inclusion of large readable percepts in the output was due to a lack of filtering, but in fact I was filtering out large percepts. The large percepts actually appear because of tweaks I made to the feature vector for regions. Previously, the area of percepts was very small because it was normalized relative to the largest possible region (1920×800 pixels); since percepts that large are very unlikely, the area feature had less range than the other features. I increased the weight of the area feature tenfold, hoping it would increase the diversity of percept sizes. Instead, this extra sensitivity seems to preserve percepts composed of larger regions, increasing their visual emphasis.
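As a rough sketch of what I mean (the names and the weight value here are illustrative, not the actual implementation): normalizing each region's area against the largest possible region compresses the feature into a narrow band near zero, and the weight then stretches that band back out.

```python
# Illustrative sketch: area normalization relative to the largest possible
# region, plus a weight on the resulting feature. Names and values are
# hypothetical, chosen only to show why the unweighted feature has so
# little range.

MAX_AREA = 1920 * 800  # largest possible region, in pixels
AREA_WEIGHT = 10.0     # tenfold increase over an original weight of 1.0

def area_feature(region_area, weight=AREA_WEIGHT):
    """Normalized, weighted area feature for a region."""
    return weight * (region_area / MAX_AREA)

# Even a fairly large 300x300-pixel region yields a small normalized value,
# so without the extra weight the feature barely varies:
raw = area_feature(300 * 300, weight=1.0)  # ~0.059
weighted = area_feature(300 * 300)         # ~0.59
```

With the weight applied, differences in region size contribute roughly as much to the feature vector as the other features do, which is presumably why larger regions now survive.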

Throughout the Banff Centre residency I was trying to find a midway point between pointillism and the more readable percepts; it seems I've stumbled upon the solution. I'm still not happy with the instability of the percepts, and I'm now generating new percepts using template matching, so that regions associated with one cluster are matched (to an extent) according to their visual features. I have no idea how this will look, but it could make percepts less likely to be symmetrical, since regions are no longer centred within them.
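The core of template matching, in the sense I'm using it, is sliding a region patch over a percept and keeping the offset where the pixels agree best, rather than dropping the region in the centre. A minimal self-contained sketch (pure NumPy, with a sum-of-squared-differences score; the function name and toy data are mine, not from the actual system):

```python
import numpy as np

def best_match_offset(percept, patch):
    """Slide `patch` over `percept` and return the (row, col) offset with
    the lowest sum of squared differences, i.e. the best visual match."""
    H, W = percept.shape
    h, w = patch.shape
    best_score, best_offset = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = np.sum((percept[r:r+h, c:c+w] - patch) ** 2)
            if score < best_score:
                best_score, best_offset = score, (r, c)
    return best_offset

# Toy example: a bright 2x2 blob sits off-centre in a dark field, and the
# matcher finds it there instead of assuming the centre.
percept = np.zeros((8, 8))
percept[5:7, 1:3] = 1.0
patch = np.ones((2, 2))
offset = best_match_offset(percept, patch)  # → (5, 1)
```

Because the best offset can land anywhere in the percept, the placement depends on the visual content rather than on a fixed centring rule, which is exactly why the results should be less symmetrical.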