Initial Results: Blade Runner

Posted: July 24, 2015 at 11:33 am

[Image: store-0061583]

After working for a few months on developing infrastructure, I finally have some early results from the work in progress on Watching and Dreaming (Blade Runner). This is a ground-up reimplementation of Watching and Dreaming (2001: A Space Odyssey): the same algorithms are used, but the project is broken into multiple components and emphasizes offline processing. As a result, each pass of processing saves uncompressed data to disk, which uses a huge amount of disk space.

The files for Blade Runner alone currently use about 1TB of my 2TB disk. To give a sense of the scale of the project, breaking every frame of the entire film into individual segments led to about 30,000,000 segments (700GB worth). Originally I used 768-element colour histograms for (k-means) clustering, but for performance and disk-space reasons I changed to a 5-element feature vector containing area, aspect ratio, and mean colour (in HSV). I should be able to fit all 30,000,000 vectors in my 32GB of RAM for clustering.
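
To make the feature concrete, here is a minimal sketch of how such a 5-element vector could be computed with OpenCV. The function name, the use of a per-segment binary mask, and the lack of any normalization are my assumptions, not necessarily how the project does it.

```python
# Hypothetical sketch only: one way to build the 5-element feature vector
# (area, aspect ratio, mean HSV colour) for a single segment, given the BGR
# frame and a binary mask marking that segment's pixels.
import cv2
import numpy as np

def segment_features(frame_bgr, mask):
    """Return [area, aspect_ratio, mean_h, mean_s, mean_v] as float32."""
    area = float(cv2.countNonZero(mask))               # segment size in pixels
    x, y, w, h = cv2.boundingRect(mask)                # tight box around the segment
    aspect = w / float(h) if h else 0.0                # width / height of that box
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mean_h, mean_s, mean_v, _ = cv2.mean(hsv, mask=mask)  # mean colour over the mask
    return np.array([area, aspect, mean_h, mean_s, mean_v], dtype=np.float32)
```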

Thus far I have only worked on a single scene (571 frames), but the OpenCV k-means implementation works quite well with these features: using 2000 clusters and 10 iterations, k-means reaches a compactness of 3.03076e+06 and a mean within-cluster correlation of 0.99873. At this point these images are ‘perceptual’ (meaning they are reconstructions of the original frames, not new images generated from a predictive model): each segment in the original frame is replaced with its closest cluster. Original frames are included for reference, and a minimal sketch of the clustering step follows the images:

[Image pairs, original frame followed by its reconstruction:]
store-0061692-orig / store-0061692
store-0061782-orig / store-0061782
store-0061834-orig / store-0061834
store-0062051-orig / store-0062051
store-0062131-orig / store-0062131
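
For reference, this is roughly what the clustering call looks like in OpenCV's Python bindings. The 2000 clusters and 10 iterations match the numbers quoted above; the file name, random centre initialization, and running a single attempt are assumptions on my part.

```python
# Hypothetical sketch: cluster the per-segment feature vectors with cv2.kmeans.
import cv2
import numpy as np

features = np.load("scene_features.npy").astype(np.float32)  # (N, 5) feature vectors

# Stop after 10 iterations, or earlier if the centres move less than 1.0.
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 10, 1.0)
compactness, labels, centres = cv2.kmeans(
    features, 2000, None, criteria, 1, cv2.KMEANS_RANDOM_CENTERS)

print(compactness)  # sum of squared distances from each vector to its centre
# labels[i] gives the cluster of segment i; reconstruction then swaps each
# segment in a frame for a representative member of its cluster.
```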