I realized why my previous percept representation was lacking so much perceptual diversity: only the top-most region was visible in percepts rendered with a high weight value. I changed the code so that all constituent regions in each percept are stacked on top of one another, largest to smallest. This led to some quite interesting forms in themselves, which Peta thought I should investigate as sculptural / dimensional forms. The following images show new accumulations (pictured on top) and the corresponding mean representations of those same clusters (pictured below):
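For reference, here is a minimal sketch of the new stacking order. The region structure (a mask, colour, and area per region) is my own assumption for illustration, not the project's actual data model:

```python
import numpy as np

def render_percept(regions, height, width):
    """Composite all constituent regions of a percept, largest to smallest,
    so smaller regions sit on top and every region stays visible."""
    canvas = np.zeros((height, width, 3), dtype=np.float32)
    # Draw the largest regions first; each smaller region overwrites those below it.
    for mask, colour, area in sorted(regions, key=lambda r: r[2], reverse=True):
        canvas[mask] = colour  # mask: boolean (height, width); colour: RGB triple
    return canvas
```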
I messed up the threshold when I started the percept generation process last night, resulting in images with very hard edges and no softness at all. So while I wait for my percepts to regenerate, I’m exploring some more options for print collages. Following are a few explorations:
I thought the new percept generation code would balance the softness against the structure / edges, but it still looks about the same as before, only with a little more texture:
Following is a video of the reconstructions using the soft percepts, with audio from the original soundtrack to aid in the interpretation of the images.
Following are a few explorations of rendering many percepts all on the same plane with normalized scale. If I have time later in this residency, I’ll look at determining their composition using a SOM (self-organizing map).
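As a rough sketch of what that composition step could look like: a SOM can place each percept on a 2D grid according to some feature of it. Using the mean colour as the feature, and the minisom library, are my assumptions here:

```python
import numpy as np
from minisom import MiniSom  # third-party SOM library

# Hypothetical features: one mean-colour vector (R, G, B) per percept.
features = np.random.rand(200, 3)  # stand-in for the real percept features

som = MiniSom(16, 16, 3, sigma=2.0, learning_rate=0.5)
som.train_random(features, 5000)

# Each percept lands at the grid cell of its best-matching unit.
positions = [som.winner(f) for f in features]
```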
In an effort to decrease processing time, I greatly simplified the code that generates percepts and the lookup table used to render reconstructions of the original frames. This has led to significant differences in the aesthetic. The images below show the same frames as the last post of reconstructions and how they now appear.
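For clarity, a minimal sketch of the lookup-table idea as I understand it; the data structures and names are my own, not the project's actual code:

```python
import numpy as np

def reconstruct(frame_regions, lookup, shape):
    """Rebuild a frame by replacing each of its regions with the rendered
    percept stored for that region in the lookup table."""
    canvas = np.zeros(shape, dtype=np.float32)
    for region_id, mask in frame_regions:  # mask: boolean (height, width)
        percept = lookup[region_id]        # pre-rendered percept image
        canvas[mask] = percept[mask]       # copy only the region's pixels
    return canvas
```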
When I first started on this project and had in mind the averaging of colour segments, I thought of Jason Salavon’s averaging work, which I was reminded of when I saw this image from an academic paper by Antonio Torralba & Aude Oliva (2002):
The percepts are finally starting to show a similar aesthetic. Following are a few images showing multiple percepts in isolation (arranged in no particular order):
I wanted to try the same method with fewer frames to see what happens without changing anything else; I thought using a smaller chunk of the film would lead to less abstract results, but that turns out not to be the case. There is no percept filtering, and percepts are still scaled according to the difference in area between regions and their clusters. See the results below:
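One plausible reading of that scaling, sketched minimally (the ratio-based formula is my assumption, not necessarily what the actual code does):

```python
import math

def percept_scale(region_area, cluster_mean_area):
    """Linear scale factor that resizes a percept (rendered at its cluster's
    mean area) toward the area of the region it stands in for; the square
    root converts an area ratio into a ratio of linear dimensions."""
    return math.sqrt(region_area / cluster_mean_area)
```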
Original frame on the left, reconstruction on the right. Tomorrow I’ll start processing the subset of the film I have chosen in which Rachael accepts the fact that she is a replicant and confronts her mortality.
I was thinking about having to use only a small portion of the film during the residency, and about the feasibility of (ever) using the whole film. I thought of treating each scene as a separate unit and generating percepts and predictors for each separately.
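Splitting the film into scenes could be done with a simple shot-boundary heuristic; the following sketch uses a histogram distance between consecutive frames, which is purely my assumption about how the segmentation might work:

```python
import cv2

def scene_boundaries(path, threshold=0.5):
    """Return the frame indices where scenes (shots) appear to begin, by
    thresholding the Bhattacharyya distance between consecutive frames'
    colour histograms. Metric and threshold are illustrative choices."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, index = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None and cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            boundaries.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries
```

Percepts and predictors could then be trained on each boundary-to-boundary span independently.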
In addition to the relevance of the content, I would like to maximize the diversity of colour in the subset of the film that I select to work with for the residency. Based on this, I generated the following Jason Salavon-like image of the mean colour of each frame in 16 consecutive 10,000-frame chunks:
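Such an image is cheap to compute; here is a minimal sketch, assuming OpenCV for decoding and a film of at least 160,000 frames (the file name is hypothetical):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("film.mov")  # hypothetical path
means = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    means.append(frame.mean(axis=(0, 1)))  # mean BGR colour of this frame
cap.release()

# 16 rows, one per 10,000-frame chunk; each frame becomes a one-pixel column.
chunk = 10000
rows = [np.array(means[i * chunk:(i + 1) * chunk], dtype=np.uint8)[np.newaxis]
        for i in range(16)]
cv2.imwrite("frame_means.png", np.vstack(rows))  # final shape: (16, 10000, 3)
```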
This is my last attempt using the clusters that took 30 days to train on the whole film before the residency. I wanted to make sure I had done everything I could to make them less abstract before retraining on a subset of the film.
I’ve been doing a little more work getting things ready for my residency at the Banff Centre starting next week. In an attempt to reduce the amount of noise (in the form of large percepts popping in and out from frame to frame), I changed the code that generates percepts to ignore regions whose area is over a particular threshold. The result is exactly the opposite of what I expected: all the visible percepts are quite large (see below).
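The filter itself amounts to a one-line test; a minimal sketch, with an illustrative threshold rather than the value actually used:

```python
MAX_FRACTION = 0.25  # illustrative: maximum region area as a fraction of the frame

def keep_region(region_area, frame_area, max_fraction=MAX_FRACTION):
    """Drop regions larger than the threshold when generating percepts."""
    return region_area / frame_area <= max_fraction
```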