Posted: January 5, 2017 at 5:15 pm
I’ve been doing a lot of reading and working through tutorials to get a sense of what I need to do for the “Dreaming” side of this project. I initially planned to use TensorFlow directly, but found it too low-level and could not find enough examples, so I ended up using Keras. Performance should be very close, since I’m using TensorFlow as the back-end. I created the following simple toy sequence to play with:
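The original toy code isn’t included here, but the general shape of such an experiment — windowing a toy signal into next-step training pairs for a Keras-style predictor — can be sketched in NumPy (all names and values here are hypothetical, not the actual experiment):

```python
import numpy as np

# Hypothetical toy signal: a noiseless sine wave, windowed into
# (input, next-step target) pairs for a next-step sequence predictor.
def make_windows(seq, window=10):
    X = np.array([seq[i:i + window] for i in range(len(seq) - window)])
    y = np.array([seq[i + window] for i in range(len(seq) - window)])
    return X, y

seq = np.sin(np.linspace(0, 8 * np.pi, 200))
X, y = make_windows(seq, window=10)
print(X.shape, y.shape)  # (190, 10) (190,)
```

A Keras model (e.g. a small recurrent network) would then be fit on `X` and `y` to predict each next value from the preceding window.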
More frames from entire film (Repost)
Posted: November 27, 2016 at 10:57 am
I wanted to repost some of the frames from this post, now that I know how to remove the alpha channels so that they appear as they should (and do in the video versions):
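For reference, flattening an RGBA frame onto a solid background is a simple composite; a NumPy sketch (the function and array names are hypothetical, not the code used for these frames):

```python
import numpy as np

def remove_alpha(rgba, background=0.0):
    """Composite an RGBA image (floats in 0..1) over a solid background."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    return rgb * alpha + background * (1.0 - alpha)

frame = np.zeros((4, 4, 4))
frame[..., 0] = 1.0   # pure red...
frame[..., 3] = 0.5   # ...at 50% opacity
flat = remove_alpha(frame, background=0.0)
print(flat[0, 0])  # [0.5 0.  0. ]
```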
First “Imagery” frames!
Posted: November 19, 2016 at 11:52 am
I’ve written some code that will be the interface between the data produced during “perception”, the future ML components, and the final rendering. One problem is that in perception, the clusters are rendered in the positions of the original segmented regions. This is not possible in dreaming, as the original samples are not accessible. My first approach is to calculate probabilities for the position of clusters for each frame in the ‘perceptual’ data, and then generate random positions using those probabilities to reconstruct the original image.
The results are surprisingly successful, and clearly more ephemeral and interpretive than the perceptually generated images. One issue is that many clusters appear in only a single frame, which means there is very little variation in their probability distributions. The image above is such a frame, where the bins of the probability distribution (19×8) are visible in the image. I can break up this grid by generating more samples than there were in the original data; in the images in this post the number of samples is doubled, which softens the grid a little (see below). Of course, for clusters that appear only once in perception, doubling the sample count means they appear twice in the output, which can be seen in the images below. The following images show the original frame, the ‘perceptual’ reconstruction, and the new imagery reconstruction from probabilities. The top set of images has very little repetition of clusters, and the bottom set has a lot.
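The position-probability idea can be sketched in NumPy using the 19×8 bin grid mentioned above (the observed positions here are made up): build a 2D histogram of where a cluster appeared, normalize it into a distribution, then draw twice as many samples, jittering within each bin to break up the grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed cluster positions, normalized to [0, 1).
positions = rng.random((50, 2))

# Build a 19x8 probability distribution over positions...
hist, xedges, yedges = np.histogram2d(
    positions[:, 0], positions[:, 1], bins=(19, 8), range=[[0, 1], [0, 1]])
probs = (hist / hist.sum()).ravel()

# ...then draw twice as many samples as the original data,
# jittering uniformly within each chosen bin.
n = 2 * len(positions)
bins = rng.choice(len(probs), size=n, p=probs)
bx, by = np.unravel_index(bins, hist.shape)
x = xedges[bx] + rng.random(n) * (xedges[1] - xedges[0])
y = yedges[by] + rng.random(n) * (yedges[1] - yedges[0])
print(x.shape, y.shape)  # (100,) (100,)
```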
Watching (Blade Runner) Complete!
Posted: October 18, 2016 at 5:48 pm
After spending some time tweaking the audio, I’ve finally processed the entire film both visually and aurally. At this point the first half of the project, “Watching and Dreaming (Blade Runner)”, is complete. The next step is the “Dreaming” part, which involves using the same visual and audio vocabulary, but generating a sequence through feedback within a predictive model trained on the original sequence. Following is an image and an excerpt of the final sequence.
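The feedback idea — seed the predictor with a state, then feed each prediction back in as the next input — can be sketched like this, with a dummy linear model standing in for the trained predictor (everything here is hypothetical):

```python
import numpy as np

def predict(state, W):
    # Stand-in for a trained predictive model: next state from current state.
    return np.tanh(W @ state)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(16, 16))  # "trained" weights (random here)

# Seed with one state, then feed each prediction back as the next input.
state = rng.normal(size=16)
sequence = []
for _ in range(100):
    state = predict(state, W)
    sequence.append(state)
sequence = np.array(sequence)
print(sequence.shape)  # (100, 16)
```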
More frames from entire film
Posted: August 23, 2016 at 10:27 am
Following is a selection of frames from the second third of the film. I’ve also finished rendering the final third, but I have not taken a close look at the results yet. Once I revisit the sound, I will have completed the “Watching” part of the project.
First look at results of processing all frames from Blade Runner
Posted: August 16, 2016 at 8:26 am
I have been slowly making progress on the project, but had my attention split due to the public art commission for the City of Vancouver. I managed to run k-means on the 30 million segments of the entire film and generated 200,000 percepts (which took 10 days due to the slow USB 2.0 external HDD I’m working from). The following images show a selection of frames from the first third of the film; I’m currently working on the second third. They show quite a nice balance between stability and noise, thanks to my fixes for bugs in the code written at the Banff Centre (which contained inappropriate scaling factors for the size features).
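At a vastly smaller scale than 30 million segments and 200,000 clusters, the clustering step amounts to something like this minimal k-means sketch in NumPy (the feature values and dimensions are made up):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each segment's feature vector to its nearest center...
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ...then move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

X = np.random.default_rng(1).random((500, 6))  # 500 segments, 6 features
centers, labels = kmeans(X, k=20)
print(centers.shape, labels.shape)  # (20, 6) (500,)
```

At real scale one would use a mini-batch variant rather than holding the full distance matrix in memory.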
Template Matching Percepts
Posted: March 18, 2016 at 5:40 pm
After tweaking some more code, the template-matched percepts look even more blocky than the previous centred ones. Because of this, I’m doing one more test using template matching, and if that does not provide better results, I’ll abandon template matching for percept generation. I’m also considering changing the area and aspect-ratio features to the width and height of segments. Currently the aspect ratio is over-weighted because it’s not normalized; normalization could be based on the widest and tallest possible segments, 1920 and 800 pixels respectively, but that difference would overly weight the wide percepts.
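The normalization under consideration is just a per-column division by the frame dimensions; a sketch with hypothetical segment bounding boxes:

```python
import numpy as np

FRAME_W, FRAME_H = 1920, 800

# Hypothetical segment bounding boxes: (width, height) in pixels.
segments = np.array([[320, 200], [1920, 40], [60, 800]], dtype=float)

# Normalize by the frame dimensions so both features fall in (0, 1].
normalized = segments / np.array([FRAME_W, FRAME_H])
print(normalized)
```

Both features land in the same range, but because the frame is much wider than it is tall, a given pixel width moves the width feature far less than the same pixel height moves the height feature.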
Large Percepts Revisited and Template Matching
Posted: March 2, 2016 at 4:48 pm
In my previous post I claimed that the inclusion of large readable percepts in the output was due to the lack of filtering, but in fact I was filtering out large percepts. It seems the appearance of apparently large percepts is due to tweaks I made to the feature vector for regions. Previously, the area of percepts was very small because it was normalized relative to the largest possible region (1920×800 pixels); as percepts this large are very unlikely, the area feature had far less range than the other features. I increased the weight of the area feature tenfold, hoping it would increase the diversity of percept size. Instead, it seems this extra sensitivity preserves percepts composed of larger regions, increasing their visual emphasis.
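The tenfold weighting amounts to scaling one column of the feature matrix before clustering; a sketch (the column layout and values are hypothetical):

```python
import numpy as np

MAX_AREA = 1920 * 800  # largest possible region, used for normalization

# Hypothetical feature rows: [norm_area, hue, saturation, aspect]
features = np.array([
    [0.002, 0.10, 0.5, 0.9],
    [0.030, 0.55, 0.3, 0.4],
])

AREA_WEIGHT = 10.0
weighted = features.copy()
weighted[:, 0] *= AREA_WEIGHT  # boost the otherwise tiny area feature
print(weighted[:, 0])  # [0.02 0.3 ]
```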
Through the whole Banff Centre residency I was trying to find a midway point between pointillism and the more readable percepts; it seems I stumbled upon the solution. I’m still not happy with their instability, so I’m now generating new percepts using template matching, so that regions associated with one cluster are matched according to their visual features (to an extent). I have no idea how this will look, but it could make percepts less likely to be symmetrical, since regions are no longer centred in percepts.
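Template matching can be sketched as a sum-of-squared-differences search in pure NumPy (rather than, say, OpenCV’s `matchTemplate`; the image data here is synthetic):

```python
import numpy as np

def best_match(image, template):
    """Return the (row, col) where the template best matches (minimum SSD)."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[5:9, 7:12].copy()  # plant the template at (5, 7)
print(best_match(image, template))  # (5, 7)
```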
Posted: February 23, 2016 at 11:06 am
Since randomizing the order of samples in the clustering process worked so well, I went back to not filtering out large regions before clustering. The results are more interesting as stills, but they are too literal and unstable in video, so I’ve abandoned this line of exploration. The tweaking of features for clustering has certainly helped emphasize the aspect ratio, and I’ve increased the weight of the area feature hoping it will increase the diversity in the size of the percepts.
“Watching (Blade Runner) (Work in Progress)” 2016
Posted: February 19, 2016 at 8:41 am
Video Sequence A (250MB) | Video Sequence B (180MB)
Watching (Blade Runner) (Work in Progress) is one channel of what is envisioned as a two-channel generative video installation that was the focus of my tenure as a Banff Artist in Residence. Two seven-minute sequences were exhibited as part of the Open Studios in the Project Space at the Walter Phillips Gallery at the Banff Centre in February 2016. These two sequences use different clips from Ridley Scott’s Blade Runner and show the development of the work through the residency, as documented on the production blog.
Documentation of Open Studios at the Banff Centre
Posted: February 17, 2016 at 11:54 am
Following is photo documentation of the work-in-progress shown at the Open Studios on February 10th.
Animation of Collage
Posted: February 17, 2016 at 11:44 am
In the most recent collages I was interested in the range of aesthetic results from different relative scales of the constituent parts. To explore this, I rendered one frame for each scale setting, resulting in a smooth transition between two extremes of percept scale.
Collages from new clip
Posted: February 11, 2016 at 9:31 pm
Due to the final push to get the video of the second clip ready for the Open Studios event, I did not have a chance to create any collages from those percepts. The following are a few explorations, some of which involve sorting the percepts according to their features, such as the area of the region, or its hue or saturation.
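Sorting percepts by a feature is just a keyed sort; a sketch with hypothetical percept records:

```python
# Hypothetical percepts with a few of the features mentioned above.
percepts = [
    {"id": 0, "area": 1200, "hue": 0.61, "saturation": 0.20},
    {"id": 1, "area": 90,   "hue": 0.33, "saturation": 0.80},
    {"id": 2, "area": 400,  "hue": 0.05, "saturation": 0.55},
]

by_area = sorted(percepts, key=lambda p: p["area"])
by_hue = sorted(percepts, key=lambda p: p["hue"])
print([p["id"] for p in by_area])  # [1, 2, 0]
print([p["id"] for p in by_hue])   # [2, 1, 0]
```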
Revised video from open studios
Posted: February 11, 2016 at 10:18 am
Following is a tweak of the video I showed at the Open Studios yesterday; I’ve increased the size of the percepts so they blend together more.
Tweaks to rendering previous sequence
Posted: February 11, 2016 at 10:14 am
I wanted to increase the size of rendered percepts so there was less empty space between clusters, and to move away a little from the pointillist aesthetic; I did not have time to try this before the Open Studio. Following are the same images as in the previous post.
Lots of colour variation between scenes in new clip!
Posted: February 10, 2016 at 8:46 am
Following is a set of stills from the latest rendering. I’m just adding sound before installing the work for Open Studios this afternoon!
Blue Clusters Revisited
Posted: February 9, 2016 at 9:29 pm
After I changed the clustering code so that samples were randomly shuffled before training, I was excited to see the following results. Unfortunately, the whole clip shows the same colour variation due to a bug in my program: regions were being represented using dissimilar clusters. This is fixed now, but it will be a challenge to get the new clip ready for the open studio tomorrow.
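The shuffle itself is a one-liner but matters: without it the clustering sees samples in temporal (frame) order, where neighbouring rows are highly correlated. A sketch (array names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical samples in temporal (frame) order: nearby rows come from
# the same scene and are highly correlated, which can bias clustering.
samples = np.arange(10 * 3, dtype=float).reshape(10, 3)

# Shuffle whole rows (not values within rows) before training.
order = rng.permutation(len(samples))
shuffled = samples[order]
print(shuffled.shape)  # (10, 3)
```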
Descriptive Text for Open Studio tomorrow
Posted: February 9, 2016 at 7:45 pm
b. 1978. Lives in Vancouver, BC
Watching (Blade Runner)
(Work in Progress), 2016
Single Channel High Definition Video
An image is a reference to some aspect of the world which contains within its own structure and in terms of its own structure a reference to the act of cognition which generated it. It must say, not that the world is like this, but that it was recognized to have been like this by the image-maker, who leaves behind this record: not of the world, but of the act.
(Harold Cohen, What is an image? 1979)
I think of subjectivity and reality as mutually constructive. Cognitive mechanisms impose structure on reality in the form of imaginary categories that inform predictions and inferences. At the same time, the complexity of reality constantly challenges understanding. Cognition determines the most essential and salient properties of reality, but those properties are context dependent. Is the quintessential an objectively observable property of reality or a projection of imagination?
This series of works shows the result of statistically oriented machine learning and computer vision algorithms attempting to understand popular cinematic depictions of Artificial Intelligence. The machine’s understanding is manifest in its ability to recognize, and eventually predict, the structure of the films it watches. The images produced are the result of both the system’s projection of imaginary structure, and the structure of the films themselves.
Video of previous clip
Posted: February 7, 2016 at 3:50 pm
If the new clip does not work out, I’ll be showing the existing one for the open studios on Wednesday:
New clip and randomizing order of samples!
Posted: February 7, 2016 at 1:02 pm
As the clip I first chose ended up (a) looking really monochromatic and (b) not having content that refers very strongly to the central essence of Blade Runner, I decided to start working on a different clip. This new clip introduces the main plot-line and the Replicants, so the content is stronger as a proxy for the whole film, and also contains a lot of colour variation. Unfortunately, this clip ended up (after a couple of days of processing) just as monochromatic as the first one. The following images show selected original frames on the top and their corresponding reconstructions below.