I’ve been tinkering with making collages from raw segments, rather than percepts. These have not been clustered or averaged; they are simply cut out of Blade Runner frames without further processing. Thanks to the ANNetGPGPU changes I’ve been able to generate some quite large-scale collages. The one below (and its detail underneath) is generated from 1 million image segments (the 1 million largest of the 30 million extracted). These take a lot of training (10 million iterations here) and still seem somewhat disorganized. I think there is potential here, but given the number of (large) percepts, my max GPU texture size (16384 × 16384) is a little small. This leads to a lot of overlap between segments, which does look quite interesting up close (see detail) but is perhaps a little too dense. It’s possible that at 48″ square (as intended) that rich texture could make the overall composition successful.
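Picking the 1 million largest of 30 million segments doesn’t need a full sort. A minimal sketch of that selection step (the segment records and paths here are hypothetical, not from my actual pipeline):

```python
import heapq

# Hypothetical segment records: (area_in_pixels, path). In practice the
# 30 million segments would be streamed from disk rather than held in a list.
segments = [(120, "seg_a.png"), (4800, "seg_b.png"),
            (950, "seg_c.png"), (4800, "seg_d.png"), (35, "seg_e.png")]

# Keep only the N largest by area. heapq.nlargest keeps a heap of size N,
# so it scales to tens of millions of segments without sorting them all.
N = 3
largest = heapq.nlargest(N, segments, key=lambda s: s[0])
```

`heapq.nlargest` returns the results in descending order of area, which is also handy if the collage should place the biggest segments first.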
I am not very happy with the lack of diversity of colours; this is because a few similar regions, segmented from consecutive frames, are over-represented. I’m currently training a 6 million segment version using a stride (keeping every 5th segment) that will hopefully result in an image more representative of the whole time-line. In the long term, the best approach may be to stride on frame numbers, but that information is not preserved in the current implementation.
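The two stride variants can be sketched as follows. This is a minimal illustration, assuming segments arrive in extraction order (which roughly follows the film’s time-line); the `frame` field in the second function is hypothetical, since frame numbers aren’t preserved in my current implementation:

```python
def stride_sample(segments, stride=5):
    """Keep every `stride`-th segment to spread the sample across the time-line."""
    return segments[::stride]

def frame_stride_sample(segments, stride=5):
    """Keep only segments from every `stride`-th frame, so bursts of
    near-identical segments cut from adjacent frames can't dominate.
    Assumes each segment carries a (hypothetical) `frame` number."""
    return [s for s in segments if s["frame"] % stride == 0]
```

The difference matters because segment-count stride still favours frames that produced many segments, whereas a frame-number stride samples the time-line evenly regardless of how many segments each frame yielded.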