Tweaks to rendering of the previous sequence

Posted: February 11, 2016 at 10:14 am

I wanted to increase the size of rendered percepts so there would be less empty space between clusters, and to move away a little from the pointillist aesthetic; I did not have time to try this before the Open Studio. Following are the same images as in the previous post.

store-0000451


Lots of colour variation between scenes in new clip!

Posted: February 10, 2016 at 8:46 am

Following is a set of stills from the latest rendering. I’m just adding sound before installing the work for Open Studios this afternoon!

store-0000451


Blue Clusters Revisited

Posted: February 9, 2016 at 9:29 pm

After I changed the clustering code so that samples were randomly shuffled before training, I was excited to see the following results. Unfortunately, the whole clip shows the same colour variation due to a bug in my program: regions were being represented using dissimilar clusters. This is now fixed, but it will be a challenge to get the new clip ready for the open studio tomorrow.
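
For reference, the shuffle itself is trivial; what matters is that mini-batch style clustering is sensitive to sample order, so consecutive (and highly similar) frames should not arrive in one block. A minimal sketch of the idea, assuming scikit-learn’s MiniBatchKMeans rather than my actual clustering code:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def train_clusters(samples, n_clusters=1000, seed=1):
    """Shuffle sample rows before fitting so mini-batches are not
    dominated by runs of consecutive, near-identical frames."""
    rng = np.random.default_rng(seed)
    shuffled = samples[rng.permutation(len(samples))]
    return MiniBatchKMeans(n_clusters=n_clusters, random_state=seed).fit(shuffled)
```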

store-0000061-orig store-0000061


Descriptive Text for Open Studio tomorrow

Posted: February 9, 2016 at 7:45 pm

Ben Bogart

b. 1978. Lives in Vancouver, BC

Watching (Blade Runner)
(Work in Progress), 2016

Single Channel High Definition Video

An image is a reference to some aspect of the world which contains within its own structure and in terms of its own structure a reference to the act of cognition which generated it. It must say, not that the world is like this, but that it was recognized to have been like this by the image-maker, who leaves behind this record: not of the world, but of the act.

(Harold Cohen, What is an image? 1979)

I think of subjectivity and reality as mutually constructive. Cognitive mechanisms impose structure on reality in the form of imaginary categories that inform predictions and inferences. At the same time, the complexity of reality constantly challenges understanding. Cognition determines the most essential and salient properties of reality, but those properties are context dependent. Is the quintessential an objectively observable property of reality or a projection of imagination?

This series of works shows the results of statistically oriented machine learning and computer vision algorithms attempting to understand popular cinematic depictions of Artificial Intelligence. The machine’s understanding is manifest in its ability to recognize, and eventually predict, the structure of the films it watches. The images produced are the result of both the system’s projection of imaginary structure and the structure of the films themselves.


Video of previous clip

Posted: February 7, 2016 at 3:50 pm

If the new clip does not work out, I’ll be showing the existing one for the open studios on Wednesday:


New clip and randomizing order of samples!

Posted: February 7, 2016 at 1:02 pm

As the clip I first chose ended up (a) looking really monochromatic and (b) not having content that refers very strongly to the central essence of Blade Runner, I decided to start working on a different clip. This new clip introduces the main plot-line and the Replicants, so the content is a stronger proxy for the whole film, and it also contains a lot of colour variation. Unfortunately, this clip ended up (after a couple of days of processing) just as monochromatic as the first one. The following images show selected original frames on the top and their corresponding reconstructions below.

store-0000061-orig store-0000061



Sound with 5000 clusters

Posted: February 4, 2016 at 10:58 am

Following is the result of using 5000 clusters rather than 1000. I find it too readable now, so I’m running k-means again with 3000 clusters in the hope of striking a balance. Despite the readability of the reconstructed sound, the resulting clusters only had a mean correlation of ~0.8 (where 1 would be perfect).
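
For reference, a sketch of one way to arrive at such a number: the mean Pearson correlation between each spectral sample and the centroid of its assigned cluster (numpy arrays assumed; not necessarily the exact metric used in my code):

```python
import numpy as np

def mean_cluster_correlation(samples, centroids, labels):
    """Mean Pearson correlation between each sample and its assigned centroid."""
    corrs = [np.corrcoef(x, centroids[k])[0, 1] for x, k in zip(samples, labels)]
    return float(np.mean(corrs))

# ~0.8 means the centroids are close, but not perfect, stand-ins for
# the spectra they represent; 1.0 would mean identical spectral shape.
```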

Audio reconstruction using 5000 clusters and 2000 bins.


The First Sounds

Posted: February 3, 2016 at 1:14 pm

While waiting for my 75,000 percept rendering to compute, I’ve returned to audio. I ended up saving the real and imaginary spectra separately, and thus not dealing with any signal transformation. This is the first time I’ve heard the results, and it’s quite striking how well it reproduces the quality of sounds (voice) without any of the specificity (words). In this case I used 1000 clusters, so 1000 sounds to represent 10,000 seconds of audio. I’m now running k-means again with 5000 clusters so that the sound could be more readable linguistically.
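
Roughly, the pipeline is: slice the audio into windows, take the FFT of each window, keep the real and imaginary parts side by side as the feature vector (so no phase estimation is needed later), cluster those vectors with k-means, and resynthesize by swapping each window’s spectrum for its centroid and inverse-transforming. A minimal sketch under those assumptions; the window size and cluster count here are placeholders, not my actual parameters:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_and_reconstruct(audio, win=2048, n_clusters=1000, seed=1):
    """Cluster windowed spectra (real + imaginary parts) and rebuild the
    signal from cluster centroids via the inverse FFT."""
    n = len(audio) // win
    frames = audio[:n * win].reshape(n, win)

    spectra = np.fft.rfft(frames, axis=1)
    feats = np.hstack([spectra.real, spectra.imag])   # keeps phase implicitly

    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed).fit(feats)
    cent = km.cluster_centers_[km.labels_]            # one centroid per window
    half = spectra.shape[1]
    recon_spec = cent[:, :half] + 1j * cent[:, half:]

    return np.fft.irfft(recon_spec, n=win, axis=1).reshape(-1)
```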

Audio reconstruction using 1000 clusters and 2000 bins.


Collages from Accumulations

Posted: February 2, 2016 at 2:24 pm

In response to the dominance of large black regions in the accumulations, I did some experiments filtering percepts according to their area, as shown in the image below.

BAiRMontageTest-sum-1-1



Means vs Accumulations

Posted: January 30, 2016 at 3:30 pm

I realized why my previous percept representation lacked so much perceptual diversity: only the top-most region was visible in percepts rendered with a high weight value. I changed the code so that all constituent regions in each percept are stacked on top of one another, largest to smallest. This led to some quite interesting forms in themselves, which Peta thought I should investigate as sculptural / dimensional forms. The following images show new accumulations (pictured on top) and the corresponding mean representations of those same clusters (pictured below):
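
Concretely, the change amounts to compositing every region of a percept, sorted by area so that small regions sit on top of large ones, instead of drawing only the top-most region. A sketch of the idea (not the project code), assuming each region carries a full-frame boolean mask, an area, and a mean colour:

```python
import numpy as np

def render_accumulation(regions, height, width):
    """Stack all constituent regions of a percept, largest area first,
    so smaller regions remain visible on top of larger ones."""
    canvas = np.zeros((height, width, 3), dtype=np.float32)
    for region in sorted(regions, key=lambda r: r["area"], reverse=True):
        canvas[region["mask"]] = region["colour"]   # boolean mask, mean RGB
    return canvas
```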

clusters

clusters-mean


Collage Explorations

Posted: January 26, 2016 at 5:10 pm

I messed up the threshold when I started the percept generation process last night, resulting in images with very hard edges and no softness at all. So while I wait for my percepts to regenerate, I’m exploring some more options for print collages. Following are a few explorations:

BAiRMontageTest-HardPercepts-Overlap-0.05


Percepts still lacking diversity, and YouTube copyright strike…

Posted: January 25, 2016 at 5:26 pm

I thought the new percept-generation code would balance softness against structure / edges, but it still looks about the same as before, only with a little more texture:

store-0000351
store-0000351


Video – Reconstructions from Soft Percepts

Posted: January 23, 2016 at 11:37 am

Following is a video of the reconstructions using the soft percepts, with audio from the original soundtrack to aid in the interpretation of the images.


Compositions of Percepts

Posted: January 23, 2016 at 10:11 am

Following are a few explorations of rendering many percepts all on the same plane with normalized scale. If I have time later in this residency, I’ll look at determining their composition using a SOM.
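
As a note on the SOM idea: something like MiniSom could map a simple feature for each percept (its mean colour, say) onto a 2-D grid, and the winning cells would then give the collage layout, placing similar percepts near one another. A rough sketch under those assumptions, not a worked-out composition method:

```python
import numpy as np
from minisom import MiniSom

def som_layout(features, grid=(20, 20), iterations=5000, seed=1):
    """Assign each percept's feature vector to a (row, col) cell on a SOM grid."""
    som = MiniSom(grid[0], grid[1], features.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(features, iterations)
    return [som.winner(f) for f in features]   # grid coordinates per percept
```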

BAiRMontageLarge-AllPercepts-Overlap-0.1


Reconstructions from Soft Percepts

Posted: January 23, 2016 at 9:38 am

In an effort to decrease processing time, I greatly simplified the code that generates percepts and the lookup table used to render reconstructions of the original frames. This has led to significant differences in the aesthetic. The images below show the same frames as in the last post of reconstructions and how they now appear.
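
For reference, the lookup table is conceptually just a mapping from each segmented region to its nearest cluster; the renderer then substitutes the cluster’s representation for the region when rebuilding a frame. A minimal sketch of that idea, assuming per-region feature vectors, fitted centroids, and per-cluster mean colours (the actual table and renderer are more involved):

```python
import numpy as np

def build_lookup(region_features, centroids):
    """lookup[i] = index of the nearest cluster centroid for region i."""
    d = ((region_features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def reconstruct_frame(frame_regions, lookup, cluster_colours, height, width):
    """Paint each region of a frame with the mean colour of its cluster."""
    canvas = np.zeros((height, width, 3), dtype=np.float32)
    for region_id, mask in frame_regions:            # (id, boolean mask) pairs
        canvas[mask] = cluster_colours[lookup[region_id]]
    return canvas
```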

store-0001043 store-0001043


Soft Percepts

Posted: January 22, 2016 at 9:50 pm

When I started on this project with the averaging of colour segments in mind, I thought of Jason Salavon’s averaging work, which I was reminded of when I saw this image from an academic paper by Antonio Torralba & Aude Oliva (2002):

averagesPersons

The percepts are finally starting to show a similar aesthetic. Following are a few images showing multiple percepts in isolation (arranged in no particular order):


The subset of the film leads to results very similar to previous work.

Posted: January 20, 2016 at 11:17 am

I thought using a smaller chunk of the film would lead to less abstract results, but that turns out not to be the case. I wanted to try the same method with fewer frames to see what happens without changing anything else: there is no percept filtering, and percepts are still scaled according to the difference in area between regions and their clusters. See the results below:
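
To make that scaling concrete, one plausible reading (the exact weighting in my code is not spelled out in this post, so treat the formula below as illustrative only) is to scale the rendered percept so that its area roughly matches the area of the region it stands in for:

```python
import numpy as np

def percept_scale(region_area, cluster_mean_area):
    """Illustrative scale factor: sqrt of the area ratio, so the rendered
    percept's area roughly matches the region's. The real weighting may differ."""
    return float(np.sqrt(region_area / cluster_mean_area))
```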

store-0001043 store-0001043


Initial Installation Setup

Posted: January 15, 2016 at 10:24 pm

20160115_003

Original frame on the left, reconstruction on the right. Tomorrow I’ll start processing the subset of the film I have chosen in which Rachael accepts the fact that she is a replicant and confronts her mortality.


Going from scenes to the whole film in the future.

Posted: January 14, 2016 at 3:35 pm

I was thinking about having to use a small portion of the film during the residency, and about the feasibility of (ever) using the whole film. I thought of treating each scene as a separate unit and generating percepts and predictors for each separately.


Visualization of colour diversity in 10,000-frame chunks.

Posted: January 14, 2016 at 9:45 am

In addition to the relevance of content, I would like to maximize the diversity of colour in the subset of the film that I select to work with for the residency. To that end, I generated the following Jason Salavon-like image of the mean colour of each frame in 16 consecutive 10,000-frame chunks:

meanColoursChunks
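
The procedure behind the image is simple: compute the mean colour of every frame, then wrap the resulting one-pixel-per-frame strip into rows of 10,000 frames. A sketch along those lines, assuming OpenCV for frame access (not the exact script I used):

```python
import cv2
import numpy as np

def mean_colour_chunks(video_path, chunk=10000):
    """One mean-colour pixel per frame, wrapped into rows of `chunk` frames."""
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        means.append(frame.reshape(-1, 3).mean(axis=0))   # mean BGR per frame
    cap.release()

    means = np.asarray(means)
    rows = int(np.ceil(len(means) / chunk))
    img = np.zeros((rows, chunk, 3), dtype=np.uint8)
    idx = np.arange(len(means))
    img[idx // chunk, idx % chunk] = means.astype(np.uint8)
    return img   # stretch each row vertically for display
```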


Last attempt using previous clusters.

Posted: January 13, 2016 at 7:49 pm

store-0001022 store-0001022

This is my last attempt using the clusters that I took 30 days to train on the whole film before the residency. I wanted to make sure I had done everything I could to make them less abstract before retraining on a subset of the film.


Studio Setup at the Banff Centre

Posted: January 13, 2016 at 7:38 pm

IMG_2636

I got my workstation set up in the Digital Media Studio at the Banff Centre!


Preparation for Banff

Posted: January 6, 2016 at 7:54 pm

store-0012675

I’ve been doing a little more work to get things ready for my residency at the Banff Centre, which starts next week. In an attempt to reduce the amount of noise (in the form of large percepts popping in and out from frame to frame), I changed the code that generates percepts to ignore regions whose area is over a particular threshold. The result is exactly the opposite of what I expected: all the visible percepts are quite large (see below).
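
For reference, the change is just an area test in the percept-generation loop, something like the sketch below (names and threshold are hypothetical). With a filter this simple, getting the comparison direction or the units of the threshold (pixels vs. fraction of the frame) wrong inverts the behaviour, which is my first suspect for the unexpected result:

```python
def filter_regions(regions, max_area=0.25, frame_area=1920 * 1080):
    """Keep only regions whose area is at or below the threshold
    (expressed here as a fraction of the frame). Flipping the comparison
    would instead keep only the large regions."""
    return [r for r in regions if r["area"] / frame_area <= max_area]
```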


Reconstructions using clusters learned from Blade Runner in its entirety

Posted: October 21, 2015 at 4:34 pm

store-0056700

After the promising results from the previous post, where I generated clusters / percepts from one scene, I decided to be ambitious and generate clusters / percepts from the entire film. The images in this post are selected from an entire scene being (perceptually) reconstructed using clusters generated from all 30 million regions segmented from Blade Runner.
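
At that scale the clustering has to be incremental; chunked fitting along the lines of scikit-learn’s MiniBatchKMeans.partial_fit is the kind of approach I mean (a sketch only; my actual pipeline stores and loads its intermediate data differently):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_in_chunks(feature_files, n_clusters=1000, seed=1):
    """Incrementally fit k-means over region features too large to hold in
    memory at once; `feature_files` are .npy paths of (n, d) arrays."""
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed)
    for path in feature_files:
        km.partial_fit(np.load(path))   # update centroids with this chunk only
    return km
```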


Initial Results: Blade Runner

Posted: July 24, 2015 at 11:33 am

store-0061583

After working for a few months on developing infrastructure, I finally have some early results for the work in progress on Watching and Dreaming (Blade Runner). This is a ground-up reimplementation of Watching and Dreaming (2001: A Space Odyssey): the same algorithms are used, except that this project is broken into multiple components and emphasizes offline processing. Because of this, each pass of processing involves saving uncompressed data to disk, which uses a huge amount of disk space.
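
To give a sense of the scale involved, a back-of-the-envelope estimate for raw frames alone, assuming roughly HD RGB frames at 24 fps over a roughly two-hour film (assumed figures, not measurements from my pipeline):

```python
# Rough estimate of uncompressed storage for one pass over the film's frames.
width, height, channels = 1920, 1080, 3        # assumed HD RGB
bytes_per_frame = width * height * channels    # ~6.2 MB per uncompressed frame
frames = 24 * 60 * 117                         # ~117 min at 24 fps = 168,480 frames
print(bytes_per_frame * frames / 1e12)         # ~1.05 TB before any derived data
```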