I thought I would try training without using the size of regions as features. The macro structure is quite nice, but it did not converge any better or faster than the runs using all the features. I do like the even distribution of the different-sized segments over the composition. I think the previous versions are likely best, but they would need to be rendered larger (not possible on my current hardware) so that the large number of percepts do not overlap so much.
I wanted to do a test with a large number of segments spread evenly over the set of all segments to represent the palette of the whole film; the following image and details show the result. Now that I’m using such a large number of percepts, I’m noticing a dark outline around most percepts. This seems to be an anti-aliasing effect, and I’m testing a version of the collage code that disables it. Due to the large number of segments, I used a relatively small number of training iterations (approximately 5 million), so the organization is not very good. Still, the results are interesting and quite painterly. In my next test I’ll go in the other direction and use a small number of percepts (100,000) evenly distributed over the set of all segments.
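For illustration, a minimal sketch of how a fixed number of percepts could be spread evenly over the ordered set of all segments; this assumes segments are held in an indexable sequence (e.g. in temporal order through the film), and the function name is my own, not from the actual collage code:

```python
def sample_evenly(segments, n):
    """Pick n items spread evenly over the ordered list of segments,
    always including the first and last."""
    if n <= 1 or len(segments) <= n:
        return list(segments[:n])
    # Evenly spaced fractional positions, rounded to the nearest index.
    step = (len(segments) - 1) / (n - 1)
    return [segments[round(i * step)] for i in range(n)]

# e.g. 5 segments drawn evenly from a set of 20
print(sample_evenly(list(range(20)), 5))
```

The same helper works whether the target is 100,000 percepts or many more; only `n` changes.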