Toy Dreams

Posted: January 5, 2017 at 5:15 pm

I’ve been doing a lot of reading and working through tutorials to get a sense of what I need to do for the “Dreaming” side of this project. I initially planned to use TensorFlow, but found it too low-level and could not find enough examples, so I ended up using Keras. Performance should be very close, since I’m using TensorFlow as the Keras back-end. I created the following simple toy sequence to play with:

toysequenceorig
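
To get a feel for Keras, I sketched out roughly the kind of model I’m starting from. This is a minimal, hypothetical example — the layer sizes, window length, and sine-wave stand-in data are placeholders, not the actual toy sequence above:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Stand-in toy data: a 1D signal, predicted one step ahead
# from a sliding window. Window length is a placeholder.
window, dims = 10, 1
seq = np.sin(np.linspace(0, 20, 500)).reshape(-1, 1)

X = np.array([seq[i:i + window] for i in range(len(seq) - window)])
y = seq[window:]

model = Sequential()
model.add(LSTM(32, input_shape=(window, dims)))  # small recurrent layer
model.add(Dense(dims))                           # predict the next value
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=10, batch_size=32)
```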



More frames from entire film (Repost)

Posted: November 27, 2016 at 10:57 am

I wanted to repost some of the frames from this post, now that I know how to remove the alpha channels so that they appear as they should (and do in the video versions):

store-0051118


First “Imagery” frames!

Posted: November 19, 2016 at 11:52 am

imagery-double-selective-0018879

I’ve written some code that will be the interface between the data produced during “perception”, the future ML components, and the final rendering. One problem is that in perception, the clusters are rendered in the positions of the original segmented regions. This is not possible in dreaming, as the original samples are not accessible. My first approach is to calculate probabilities for the position of clusters for each frame in the ‘perceptual’ data, and then generate random positions using those probabilities to reconstruct the original image.
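
As a sketch of this approach (the variable names are mine, and the 19×8 grid matches the bins visible in the image below), the distribution and sampling could look something like this:

```python
import numpy as np

BINS_X, BINS_Y = 19, 8  # bins of the probability distribution

def position_distribution(xs, ys, width, height):
    """2D histogram of observed cluster positions, normalized to probabilities."""
    hist, xedges, yedges = np.histogram2d(
        xs, ys, bins=[BINS_X, BINS_Y], range=[[0, width], [0, height]])
    return hist / hist.sum(), xedges, yedges

def sample_positions(probs, xedges, yedges, n):
    """Draw n positions: pick bins by probability, then jitter within each bin."""
    flat = probs.ravel()
    idx = np.random.choice(flat.size, size=n, p=flat)
    ix, iy = np.unravel_index(idx, probs.shape)
    x = np.random.uniform(xedges[ix], xedges[ix + 1])
    y = np.random.uniform(yedges[iy], yedges[iy + 1])
    return x, y
```

Setting n higher than the number of observed samples (here, double) is what breaks up the grid, as described below.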

The results are surprisingly successful, and clearly more ephemeral and interpretive than the perceptually generated images. One issue is that many clusters appear in only a single frame, which means there is very little variation in their probability distributions. The image above is such a frame, where the bins of the probability distribution (19×8) are visible. I can break this grid up by generating more samples than there were in the original data; in the images in this post the number of samples is doubled (see below). Of course, for clusters that appear only once in perception, this means they are doubled in the output, which can be seen in the images below. The following images show the original frame, the ‘perceptual’ reconstruction and the new imagery reconstruction from probabilities. The top set of images has very little repetition of clusters, and the bottom set has a lot.


Watching (Blade Runner) Complete!

Posted: October 18, 2016 at 5:48 pm

After spending some time tweaking the audio, I’ve finally processed the entire film, both visually and aurally. At this point the first half of the project “Watching and Dreaming (Blade Runner)” is complete. The next step is the “Dreaming” part, which uses the same visual and audio vocabulary but generates a sequence through feedback within a predictive model trained on the original sequence. Following is an image and an excerpt of the final sequence.

store-0061585
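
The feedback generation described above can be sketched abstractly. This is a hypothetical outline, assuming a model that predicts the next frame’s features from a window of previous ones — not the actual implementation:

```python
import numpy as np

def dream(model, seed_window, n_frames):
    """Seed with real frames, then feed each prediction back in as input."""
    window = list(seed_window)  # e.g. feature vectors from 'perception'
    frames = []
    for _ in range(n_frames):
        nxt = model.predict(np.array([window]))[0]
        frames.append(nxt)
        window = window[1:] + [nxt]  # slide the window onto the prediction
    return frames
```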


On-site installation!

Posted: October 13, 2016 at 12:06 pm

installation-a installation-b


Video Version

Posted: September 1, 2016 at 5:22 pm

As previously discussed, I found there was too much variation in the process that is not manifest in the final still image. I’ve thus made a video loop that shows the gradual deconstruction of the photographic image.


More frames from entire film

Posted: August 23, 2016 at 10:27 am

Following is a selection of frames from the second third of the film. I’ve also finished rendering the final third, but I have not taken a close look at the results yet. Once I revisit the sound, I will have completed the “Watching” part of the project.

store-0051118


First look at results of processing all frames from Blade Runner

Posted: August 16, 2016 at 8:26 am

I have been slowly making progress on the project, but my attention has been split by the public art commission for the City of Vancouver. I managed to run k-means on the 30 million segments of the entire film and generated 200,000 percepts (which took 10 days due to the slow USB2 external HDD I’m working from). The following images show a selection of frames from the first third of the film; I’m currently working on the second third. They show quite a nice balance between stability and noise with my fixes for bugs in the code written at the Banff Centre (which contained inappropriate scaling factors for the size features).

store-0046860
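
For scale, clustering that many segments roughly fits the mini-batch k-means pattern; the sketch below is illustrative only (the file name is hypothetical, and I’m not claiming this is the exact pipeline used):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Feature vectors for ~30 million segments, memory-mapped from disk
# so the whole set never has to fit in RAM.
features = np.load('segment_features.npy', mmap_mode='r')

km = MiniBatchKMeans(n_clusters=200000)
for chunk in np.array_split(np.arange(len(features)), 100):
    km.partial_fit(features[chunk])  # stream ~300k segments at a time

percepts = km.cluster_centers_  # one centroid per percept
```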


Title and Project Description

Posted: August 15, 2016 at 9:17 am

As our gaze peers off into the distance, imagination takes over reality

The relation between the world as we see it and the world as we understand it to be is all wrapped up in abstraction, the act of emphasizing some details in order to dispense with others. My artistic interest in abstraction and computation is an interest in the world as it is versus the world as we read into and conceptualize it. As our gaze peers off into the distance, imagination takes over reality… is a site-specific public artwork that uses a photographic image of the surrounding site as raw material. The bottom of the image shows this imperfect photographic illusion, a proxy of objective reality, that becomes increasingly abstracted towards the top. The abstraction is a proxy for the constructive powers of imagination and is the result of a machine learning process that rearranges colour values according to their similarity. The imaginary structure presented in this image is both an emergent composition and an exposition of the implicit structure of reality. The image is a representation of place (as a physical location at the site of installation) and non-place (as an ambiguous and emerging structure under constant renewal). The viewer’s experience of the tension between ambiguity and reality mirrors the mind’s constant attempt to bridge experience and truth.


Final Design

Posted: July 22, 2016 at 3:26 pm

good_result_24mm-final-small

Above is the final design for the QE Plaza. I’ve removed most of the white padding so that the aspect ratio of the image matches that required by the installation location. I’ve included details of the full-resolution version below. I’m quite happy with the artifacts due to the sub-sampling used to progressively smooth the transition between the full-resolution pano and the 20px-per-cell SOM. This leads to quite a nice tilt-shift effect that contributes to the tension between reality and imagination.

good_result_24mm-final-detail-1
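
The transition idea can be sketched as follows. This is a hypothetical rendition (the band count and the PIL-based resampling are my assumptions): blend from full resolution at the bottom to 20px cells at the top by sub-sampling each horizontal band progressively more coarsely:

```python
from PIL import Image

def progressive_subsample(pano, max_cell=20, bands=10):
    """Coarsen each horizontal band: full resolution at the bottom,
    max_cell-sized blocks at the top."""
    w, h = pano.size
    out = pano.copy()
    band_h = h // bands
    for i in range(bands):  # i = 0 is the top band (coarsest)
        cell = max(1, round(max_cell * (bands - i) / bands))
        box = (0, i * band_h, w, (i + 1) * band_h)
        band = pano.crop(box)
        small = band.resize((max(1, w // cell), max(1, band_h // cell)))
        out.paste(small.resize(band.size, Image.NEAREST), box[:2])
    return out
```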


Going with the 20px cell size

Posted: July 21, 2016 at 11:02 am

The 10px cell test with a small number of iterations (1,000,000) is shown below. While the spires are reduced, they are still present, especially at full resolution. In the scaled image below, the most noticeable one is to the left of the leftmost set of red benches.

good_result_24mm-SOMScale10_h3775_n30e_i1000000

I also tried the 7px cell size (below) with very few iterations (top: 500,000; bottom: 250,000) and a larger max neighbourhood (30²) to reduce spires; unfortunately, with so little training the results lack the organization of those documented in this post. So I’m going to stick with the previous 20px, 5,000,000-iteration version and spend some additional time working on the transition.

good_result_24mm-SOMScale7_h3775_n30e_i500000 good_result_24mm-SOMScale7_h3775_n30e_i250000


Spires have returned…

Posted: July 20, 2016 at 2:34 pm

good_result_24mm-SOMScale7_h3775_n34e_i4000000

Clearly, the spires stick around despite the large neighbourhood size. I’m now trying a 10px cell size with only 1,000,000 iterations.


Focusing in…

Posted: July 20, 2016 at 9:49 am

After some time and consideration, I’ve decided that although the spires fit with the theme of imagination and emergence, they are just too visually dominant. I’ve thus been exploring the 20px cell size (which avoids spires) with a lower horizon:

good_result_24mm-SOMScale20_h3775_n20e_i5000000



Exponential Increase of Neighbourhood Size

Posted: July 16, 2016 at 9:55 am

I’m still tinkering with the code. The central issue is that I’d like better coverage of the area near the horizon, which seems to require more than 5,000,000 iterations of training, and thus an increase in cell size. The following images show some of these explorations. I’ve also used an even more pointed Gaussian to soften the boundaries.

good_result_24mm-SOMScale10_h4615_n15e_i5000000 good_result_24mm-SOMScale20_h4615_n15e_i5000000 good_result_24mm-SOMScale10_h4615_n150_i5000000
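
In outline, the two pieces being tuned look something like this — a sketch with illustrative parameters, not the actual code:

```python
import numpy as np

def neighbourhood_radius(row, horizon, max_radius):
    """Exponential growth: ~1 cell at the horizon row, max_radius at the top."""
    t = max(0.0, (horizon - row) / float(horizon))  # 0 at horizon, 1 at row 0
    return max_radius ** t

def gaussian_weight(dist, radius, sharpness=3.0):
    """'Pointed' Gaussian: larger sharpness narrows the peak,
    softening the boundaries between neighbouring cells' influence."""
    return np.exp(-(sharpness * dist / radius) ** 2)
```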



5 Million Iterations

Posted: July 13, 2016 at 9:03 am

In order to investigate those spires I tried running 5 million iterations of training with a smaller network (10px per cell rather than the 7px used previously). The result is quite interesting, but the neighbourhood size is too large, causing the sky to dominate significantly. There is still a tendency for clusters to grow towards the upper left (causing spires) that I cannot explain. The light-coloured spires in the previous post originate at very bright pixels surrounded by darkness, but in the image below we can see they are also present in lower-contrast areas (e.g. the sky).

good_result_24mm-SOMScale10_h4615_n100_i5000000


Spires

Posted: July 9, 2016 at 10:25 am

The difference between the two following images is a small change in the neighbourhood function (larger on top). I can’t explain the emergence of these beige spires; they emanate from very small areas of the same colour in the original image, but I’m not sure what drives their expansion into the sky. They seem to expand further as the number of training iterations increases. I’m doing another run with a larger neighbourhood to see what happens.

good_result_24mm-SOMScale7_h4615_n140_i2000000

good_result_24mm-SOMScale7_h4615_n120_i2000000


Home Stretch!

Posted: July 8, 2016 at 6:50 pm

good_result_24mm-SOMScale7_h4615_n140_i200000

I found a bug in ANNetGPGPU that resulted in the neighbourhood function having a hard fall-off rather than a gradual Gaussian decay. This really affected the final aesthetic, even after millions of iterations. I also changed from the HSV to the RGB colour model, as HSV was causing some artifacts. The image above shows a short training session to make sure things are going in the right direction. Extra thanks to Daniel Frenzel for the very fast turnaround fixing the bug!
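
For illustration, the difference between the buggy and fixed behaviour is roughly the following (my own sketch, not ANNetGPGPU’s actual code):

```python
import numpy as np

def hard_falloff(dist, radius):
    """The bug: cells inside the radius update fully, everything else not at all."""
    return np.where(dist <= radius, 1.0, 0.0)

def gaussian_decay(dist, radius):
    """The fix: influence decays smoothly with distance from the winning cell."""
    return np.exp(-dist**2 / (2.0 * radius**2))
```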


Almost there…

Posted: June 28, 2016 at 5:25 pm

The following images show the latest iteration of the work. In this version the SOM is blended with the original pano, and I’ve moved the ‘horizon’ higher up. The transition between the pano and the lower-resolution SOM (7px per cell) is quite nice and leads to a bit of a tilt-shift effect at the ‘horizon’; see the detail image below. I’m not entirely keen on the degree to which the blue sky overpowers the buildings, so I’m now rendering a version with a smaller max neighbourhood; we’ll see what that looks like tomorrow.

Bogart-Platforms-Coastal-City-In-Progress Bogart-Platforms-Coastal-City-In-Progress-Detail


First results using new panorama!

Posted: June 17, 2016 at 3:20 pm

The following images show initial results using the new panorama. The SOM scales are 10 and 20 (meaning 10 and 20px per SOM cell). These differing scales mean the maximum neighbourhood size (the degree of abstraction at the top) is double in the top image.

good_result_24mm_SOMScale20_h4263 good_result_24mm_SOMScale10_h4263


“Final” Panorama!

Posted: June 16, 2016 at 9:07 am

I’ve been very busy with other projects, including a big exhibition of Watching and Dreaming (2001: A Space Odyssey) at the Digital Art Biennial in Montreal this month. I had already shot two attempts at a “final” panorama, but have had significant issues stitching the images together. I finally have a suitable result, but it uses the wider images, which limits the resolution to ~60,000 pixels, short of the 80,000px target:

good_result_24mm
