Reconstructions using clusters learned from Blade Runner in its entirety

Posted: October 21, 2015 at 4:34 pm


After the promising results from the previous post, where I generated clusters / percepts from a single scene, I decided to be more ambitious and generate clusters / percepts from the entire film. The images in this post are selected frames from one scene, perceptually reconstructed using clusters generated from all 30 million regions segmented from Blade Runner.

I generated 600,000 clusters over 10 iterations; the k-means clustering process took 19 days. Compared with the single scene documented in the previous post, the clustering resulted in a compactness of 3.65421e+07 and a mean correlation of 0.999966. Despite these promising numbers, the actual aggregations of regions into clusters are significantly noisier, encompassing highly diverse visual material. Again, these images are ‘perceptual’ reconstructions: each segment in the original frame is replaced with the closest cluster. Original frames are also included for reference:
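The nearest-cluster replacement step can be sketched roughly as follows. This is a minimal NumPy illustration, not the actual pipeline: the real process clusters 30 million region descriptors into 600,000 clusters, and the two-dimensional feature vectors and function names here are stand-ins for whatever region representation was actually used.

```python
import numpy as np

def nearest_cluster(region_features, centroids):
    """Index of the closest centroid (Euclidean) for each region."""
    # Pairwise distances, shape (n_regions, n_clusters), via broadcasting.
    dists = np.linalg.norm(
        region_features[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def reconstruct_frame(region_features, centroids):
    """Replace each segmented region's features with its nearest
    cluster centroid -- the 'perceptual' version of the frame."""
    return centroids[nearest_cluster(region_features, centroids)]

# Toy example: two well-separated centroids, two regions.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
regions = np.array([[1.0, 1.0], [9.0, 9.0]])
print(reconstruct_frame(regions, centroids))
```

With 600,000 centroids the brute-force distance computation above would be impractical per frame; an approximate nearest-neighbour index would be the usual workaround, though the post does not say how lookup was done here.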

store-0056773-orig store-0056773

store-0058368-orig store-0058368

store-0058530-orig store-0058530

store-0058619-orig store-0058619

store-0058706-orig store-0058706

store-0058935-orig store-0058935

store-0059073-orig store-0059073

store-0059783-orig store-0059783

store-0059940-orig store-0059940

store-0061085-orig store-0061085

Video Excerpt: