Barnett Newman

Posted: August 2, 2018 at 6:47 pm

Covenant, Barnett Newman, 1949

Barnett Newman was one of the main references when I first started thinking through the “zip” (vertical stripe) aesthetic for this project. While Newman fits in with Rothko in terms of a borderline fetishization of the primal, I see quite a few overlapping concerns. Like me, Newman sees art as rigorous and removed from chance or the arbitrary. Newman also sees art as a mission to contest styles and conventions, which is interesting in the context of my thinking about art as a process of articulating what art is.

Newman was also highly preoccupied with the concept of origination, the origin of the universe in particular. In his context this takes on a spiritual / religious connotation, whereas I am interested in scientific bases of origination. It was actually this idea of origination that motivated my grad studies and interest in dreams and creativity. This probing of origination, or the foundations of structure, connects with his interest in starting from a tabula rasa and questioning the foundations of geometry. This is a challenge to algorithmic work, which is limited to a particular set of possible geometries computed by the machine; in my case, perhaps, the geometry of sine waves. There is still a question of composition, though: sine waves are more an underlying vocabulary and do not define the whole of the compositional system.

Highly relevant to the ZF is Newman’s interest in the Sublime that transcends “categories of value”, which relates to the critique of art as commodity implicit in the ZF. Again, we have an interesting relation between the artist’s work and the viewer. As described by Claudine Humblet in The New American Abstraction (1950–1970), Newman’s works’ “…sole aim is to convey the space against which Man measures himself.” This is very interesting considering my previous reflections on the work (the AI-enabled work in particular) as a mirror of our understanding of ourselves. In the context of this project, this space could be the machine (as an external model of cognition), or even social media itself. Newman wanted the viewer to gain an awareness of themselves through the work; not just a surface awareness of self, but self as a ‘totality’ both connected to and separated from others, mirroring my interest in subjectivity as the projection of imaginary boundaries.


Mark Rothko

Posted: July 26, 2018 at 6:26 pm

Red Orange Orange on Red, Mark Rothko, 1962

When I did my first sketches of colour fields I had Rothko in mind when I used a blur shader to soften some of the hard edges. Now I’m thinking about using sine waves with various contrasts to provide both stripes and softer gradients. This way I can represent highly dense images without increasing the number of parameters. For example, this could allow things like one-pixel alternating stripes in complementary colours, inspired by the stippling patterns in some of Robert Irwin’s work.

There is certainly some romanticism of ‘primitive’ artworks in Rothko’s early surrealist works, which is strange to read from a contemporary post-colonial perspective. The background of the work certainly reads as Modernist, and there is an influence of Jungian archetypes, dreams, the unconscious and states of consciousness. During this period there is an interesting attempt to fuse a social awareness with formalism. Rothko has an interesting perspective on the evolution of style, as quoted in Claudine Humblet’s The New American Abstraction (1950–1970): “The progression of a painter’s work, as it travels in time from point to point, will be toward clarity: toward the elimination of all obstacles between the painter and the idea, and between the idea and the observer.” This seems quite aligned with some of my thinking during my Masters, where the act of art-making allows what is technically possible to change what the concept of a work is. There is also that sense of the work blending with the experience of the viewer to create a unity of experience. One small formalist point of note is that Rothko intentionally avoided exploiting colour theory, which is an interesting precedent for the ZF, whose use of colour will be highly unconstrained.


Robert Irwin

Posted: July 14, 2018 at 5:16 pm

Untitled, Robert Irwin, 1962–63

I had not heard of Robert Irwin’s work before and I’m extremely amazed by our overlapping artistic interests. I’ve struggled with considering myself an image-maker because I’m not very interested in images; I’m interested in the relation between images, thought and reality. This also seems to be a strong preoccupation of Irwin, who is also inspired by Merleau-Ponty. I’m particularly amazed by the relevance of his work to over 10 years of my work involving live cameras, site-specificity and questions of objects and boundaries (Resurfacing, Memory Association Machine, Dreaming Machine, Watching).

In the context of the Zombie Formalist, he is interesting in that his work seems to cover a territory of exploration that has been described as evolutionary. His interest in purity as a method of removing the arbitrary is also something we both share. I may go so far as to say we are both embarking on art as philosophical inquiry. There is also an articulation of the gap between perception and cognition / recognition. Most interesting is the idea of the painting being a result of the viewer’s perceptual participation. This fits very well with the ZF, which does very little without the attention of the viewer. There is also this bleeding of the work into its environment, which is interesting in the context of a potential ZF that uses the colour palette of its context in the construction of its images.

Some of the most striking overlaps are made explicit in Claudine Humblet’s articulations from The New American Abstraction (1950–1970): “The ultimate goal of art is to renew vision and invite the viewer to recapture the meaning of the real” (p.1657). Also the idea that Irwin’s art “[q]uestion[s] the very source of perception” (p.1659).


Walter Darby Bannard

Posted: June 29, 2018 at 3:33 pm

Alexander #2, Walter Darby Bannard, 1959

Again, as with Olitski, this is not quite what I have in mind. Bannard’s use of colour is very interesting, in particular the low colour contrast between minimal components. He has also used a grid as an organizing structure in his works. This use of muted tones contributes to what has been called the “unreality” of his forms, which is interesting in the context of painting as an exploration of a space of possibilities. This also connects with an interpretation of his work as improvised but constrained by conceptualism. In the case of the Zombie Formalist, the system improvises as a random generator, while the conceptual(?) constraints actually come from the system’s attempt to model the preferences of the audience.


Jules Olitski

Posted: June 29, 2018 at 3:14 pm

Pink-Grey II, Jules Olitski, 1970

I’m partway through The New American Abstraction (1950–1970), getting started on my post-painterly abstraction research for the Zombie Formalist, but realized I’ve been looking at the third volume of a three-part set! So I need to go back to the other volumes, which mention artists I’ve been considering as inspirations, such as Mark Rothko, Ellsworth Kelly and Gene Davis.

While Olitski’s aesthetic is not quite what I am after, I found a few ideas quite interesting: the emphasis on colour as structure, and the tension between homogeneous and heterogeneous composition at varying scales. Paintings made with the airbrush (like the one above) remind me of some of the structures that emerge from my segment collages as part of Watching and Dreaming.


Early Research for Painting Appropriation

Posted: June 13, 2018 at 5:45 pm

I’ve been thinking through the gargantuan task of deciding which four images from the whole history of painting I should appropriate in this project. The general focus of my practice as an image-maker is on the relation between images and reality or interiority; this project should reflect that. I have in mind a trajectory of abstraction starting with the mathematical effort of objective representation in the Renaissance and ending with abstraction and problematizations of realism in painting. Since the Zombie Formalism project will focus in particular on post-painterly abstraction and colour-field painting, I was thinking of the four-painting sample as including two works from the Renaissance (perhaps early and late), one surrealist work, and one cubist work. I’ve been thinking in a very top-down way: considering first the movement, then the painter, and eventually narrowing things down to the painting. (more…)


Early sketches appropriating (consuming) paintings from the western canon.

Posted: May 18, 2018 at 12:43 pm

In addition to the “Zombie Formalist”, this new body of work includes a series of works that appropriate the painting canon. Images are deconstructed into individual pixels that are recomposed using machine learning methods such as the Self-Organizing Map (SOM), which arranges pixels according to colour similarity. The following images show the original painting alongside the SOM reconstruction.
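As a rough sketch of the idea (the actual implementation uses other tooling; the grid size, learning rate and decay schedule below are illustrative assumptions, not the project's settings), a small SOM over pixel colours might look like:

```python
import numpy as np

def train_colour_som(pixels, grid_w=8, grid_h=8, iters=2000, seed=0):
    """Train a small 2-D SOM so that nearby cells hold similar colours.

    pixels: (N, 3) array of RGB values in [0, 1]. All hyperparameters
    here are illustrative guesses."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, 3))
    # Grid coordinates, used to compute neighbourhood distances.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(iters):
        p = pixels[rng.integers(len(pixels))]
        # Best-matching unit: the cell whose weight is closest to the pixel.
        d = np.sum((weights - p) ** 2, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Linearly decaying learning rate and neighbourhood radius.
        frac = 1.0 - t / iters
        lr = 0.5 * frac
        sigma = max(grid_w, grid_h) / 2.0 * frac + 1e-9
        dist2 = (ys - by) ** 2 + (xs - bx) ** 2
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (p - weights)
    return weights
```

After training, each pixel can be placed at the grid location of its best-matching unit to produce the reorganized image.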

“Lucca Madonna”, Jan van Eyck (1436)

“Palace of Windowed Rocks”, Yves Tanguy (1942)

 


Early “Zombie Formalist” Sketches

Posted: May 18, 2018 at 12:31 pm

This is my first post documenting my new body of work, “Machines of the Present Consume the Imagination of the Past”, funded by the Canada Council for the Arts. The first couple of months will be focused on early research, but I wanted to post the current sketches developed for the grant application.

A little background: the “Zombie Formalist” is a component of this body of work where a diptych of light-boxes will generate banal and satirical formalist images inspired by 1960s post-painterly abstractionists such as Gene Davis, Barnett Newman and Guido Molinari. Following is a selection of aesthetically successful images using an early sketch of a painting generator. All these images are composed of vertical stripes with random colour, position, width and blurring on a coloured ground.
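A minimal sketch of such a generator (the parameter ranges and output structure are my guesses for illustration, not the actual sketch code):

```python
import random

def random_stripe_painting(width=1920, height=1080, max_stripes=12, seed=None):
    """Generate a stripe-composition spec: a coloured ground plus vertical
    stripes with random colour, x position, width and blur radius.
    All ranges are illustrative assumptions."""
    rng = random.Random(seed)
    rand_colour = lambda: tuple(rng.randint(0, 255) for _ in range(3))
    painting = {"ground": rand_colour(), "stripes": []}
    for _ in range(rng.randint(1, max_stripes)):
        painting["stripes"].append({
            "colour": rand_colour(),
            "x": rng.randint(0, width - 1),       # left edge of the stripe
            "width": rng.randint(1, width // 4),  # up to a quarter of the canvas
            "blur": rng.uniform(0.0, 20.0),       # edge softness in pixels
        })
    return painting
```

A renderer would then draw the ground, then each stripe in order, applying a blur of the given radius to its edges.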


Video: Dreams from feedback in predictive model

Posted: October 11, 2017 at 9:04 am

Video of a dream (imaginary sequence generated from feedback in predictive model):


Dreams from feedback in predictive model

Posted: October 3, 2017 at 10:25 am

After some experimentation with LSTM topologies, I ended up with an 8-layer network with 32 LSTM units per hidden layer. These networks take a lot longer to train, and the MSE for my 10,000-iteration test was 0.0201 (worse than other topologies). The amazing part is that using the feedback mechanism to reconstruct the sequence, scene transitions are preserved! In my previous single- and 4-layer LSTM tests, the scene changes were not reconstructed using feedback in the model. The image below shows the results.
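For reference, a stacked topology like this can be sketched in Keras as follows (sequence length and feature count are placeholders, since the actual input encoding isn't given here):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_stacked_lstm(n_layers=8, units=32, seq_len=16, n_features=27):
    """Sketch of a deep stacked-LSTM sequence predictor.
    seq_len and n_features are placeholder assumptions."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(seq_len, n_features)))
    for i in range(n_layers):
        # All but the last LSTM layer return full sequences so they can
        # feed the next LSTM layer in the stack.
        model.add(layers.LSTM(units, return_sequences=(i < n_layers - 1)))
    model.add(layers.Dense(n_features))  # predict the next time-step's features
    model.compile(optimizer="adam", loss="mse")
    return model
```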

predictions_batch1_i10000_feedback_8Layer (more…)


Training without width and height features.

Posted: September 19, 2017 at 3:11 pm

I thought I would try training without using the size of regions as features. The macro structure is quite nice, but it did not converge any better or faster than those using all the features. I do like the even distribution of the different-sized segments over the composition. I think the previous versions are likely best, but they would need to be rendered larger (not possible on my current hardware) so the large number of percepts do not overlap so much.

SOMResults_Segments_60000000_0-29309289_100stride_1_1_BGR (more…)


5861858 Segment Collage

Posted: September 6, 2017 at 4:34 pm

I wanted to do a test with a large number of segments spread evenly over the set of all segments to represent the palette of the whole film; the following image and details show the result. Now that I’m using such a large number of percepts, I’m noticing there is a dark outline around most percepts. This seems to be an anti-aliasing effect, and I’m testing a version of the collage code that disables it. Due to the large number of segments, I used a relatively small number of training iterations (approximately 5 million), and thus the organization is not very good. Still, the results are interesting and quite painterly. In my next test I’ll go in the other direction and use a small number (100,000) of percepts evenly distributed over the set of all segments.

SOMResults_Segments_5861858_0-29309289_1_1 (more…)


Collages from Segments (rather than Percepts)

Posted: August 23, 2017 at 2:22 pm

I’ve been tinkering with making collages from raw segments, rather than percepts. These have not been clustered or averaged; they are simply cut out of Blade Runner frames without further processing. Thanks to ANNetGPGPU changes I’ve been able to generate some quite large-scale collages. The one below (and its detail underneath) is generated from 1 million image segments (the 1 million largest out of the 30 million extracted). They take a lot of training (10 million iterations here), and still seem somewhat disorganized. I think there is potential here, but because of the number of (large) percepts, I think my max GPU texture size (16384 × 16384) is a little small. This leads to a lot of overlap between segments, which does look quite interesting up close (see detail) but is perhaps a little too dense. It’s possible that at 48″ square (as intended) that rich texture could make the overall composition successful.

I am not very happy with the lack of diversity of colours; this is because there is an over-representation of a few similar regions segmented from subsequent frames. I’m currently training a 6 million segment version using a stride (keeping every 5th segment) that will hopefully result in an image more representative of the whole time-line. In the long term, the best approach may be to use a stride based on frame numbers, but this information is not preserved in the current implementation.

SOMResults_Segments_10000000_0-999999_0.5_1

SOMResults_Segments_10000000_0-999999_0.5_1-Detail


First Dream sequence generated by predictive model!

Posted: July 26, 2017 at 6:21 pm

The following images show a comparison of three modes of visual mentation all using the restricted set of 1000 percepts. The top image is the “Watching” mode where percepts are located in the same location as they are sensed. The middle image is something like “Imagery” where the position of percepts is random but constrained by the distribution of percept positions in Watching, and therefore still tied to sensation. The bottom image is a first attempt at dreaming decoupled from sensory information. Percepts are positioned randomly, but constrained by the distribution of percepts as learned by an LSTM network. The position in time and space of each percept is wholly determined by the LSTM predictive model.

What keeps this from being ‘real’ dreaming (according to the Integrative Theory) is that the sequence of distributions generated by the predictive model is seeded by every time-step in Watching (keeping it from diverging significantly from Watching). In real dreaming, one single time-step will seed a feedback loop in the predictive model to generate a sequence that is expected to diverge significantly from Watching. I think these are working quite well; the generation of positions from distributions certainly softens a lot of structure in Watching, but holds onto some resemblance. There is some literature on the possibility of mental imagery and dreaming being hazier and less distinct than external perception. I’ve also included a video at the bottom that shows the whole reconstructed sequence from the LSTM model.
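The feedback mechanism itself can be sketched abstractly: a single seed step drives a loop in which each prediction becomes the next input. Here `predict` is a stand-in for the trained LSTM (any step-to-step function), so this is a sketch of the mechanism rather than the project's code:

```python
import numpy as np

def dream(predict, seed_step, n_steps=100):
    """Generate a 'dream' sequence by feedback: the model's prediction for
    each time-step becomes its input for the next. Only the first step
    comes from Watching, so the sequence is free to diverge from it."""
    sequence = [np.asarray(seed_step, dtype=float)]
    for _ in range(n_steps - 1):
        sequence.append(predict(sequence[-1]))
    return np.stack(sequence)
```

Seeding from every time-step instead (the current behaviour described above) would amount to calling `predict` once per Watching step rather than looping on the model's own output.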

Watching-1000c-0018879Imagery-1000c-000001 Dreaming-1000c-000001 (more…)


Frame Reconstructions from 1000 Clusters

Posted: July 18, 2017 at 11:00 am

Following from my previous post I’ve been investigating reducing the number of clusters in order to scale the predictor problem (for Dreaming) down to something feasible. The two pairs of images below show the original reconstructions with 200,000 clusters and the corresponding reconstructions with 1000 clusters. For more context, see this post. I’ll try generating a short sequence and see how they look in context.

watching-0018879Dreaming-0018879 (more…)


Collages from limited number of clusters.

Posted: July 17, 2017 at 5:13 pm

In working on Dreaming, I recalculated the K-Means segment clusters (percepts) with only 1000 means (there were 200,000 previously). The images below show the results. It seems that when it comes to collages, the most interesting segments are the outliers (and, I expect, probably the raw segments). The fact that so many segments get averaged in these clusters means they end up being very small, and 1000 is just not enough to capture the width and height features (hence the two very wide and very tall percepts). Clearly the colour palette is still preserved, but that is pretty much it. The areas of colour below are so small that these images end up being only 1024px wide or smaller. These SOMs are trained over only 10,000 iterations to get a sense of what all the percepts look like together.
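A toy version of the percept clustering (plain Lloyd's k-means; the real pipeline's feature layout and implementation may differ) shows where the averaging comes from:

```python
import numpy as np

def kmeans_percepts(features, k, iters=20, seed=0):
    """Cluster segment feature vectors (e.g. colour + width + height) into
    k 'percepts'. Averaging many segments into each centre is what shrinks
    the extremes, as noted above. A toy stand-in, not the project's code."""
    rng = np.random.default_rng(seed)
    # Initialize centres from randomly chosen segments.
    centres = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest centre.
        d = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Recompute each centre as the mean of its assigned segments.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres, labels
```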

SOMResults_10000_0-999-Collage-1-0.0625 (more…)


Collage Triptych

Posted: May 19, 2017 at 4:41 pm

As filtering by area led to such interesting results, I went ahead and split up the percepts into three groups according to percept area. The triptych below shows all 200,000 percepts, but separated into three separately trained and differently sized SOMs. I’ve also included details of the latter two SOMs. I thought this approach would lead to more cohesion within each map, but the redundancy between the second and third images leads me to believe that 200,000 is too many clusters. Since I need to reduce the number of clusters for the Dreaming part of Watching and Dreaming, I’ll put the collage project aside until I’ve determined a reasonable maximum number of clusters for LSTM prediction, and then come back to it.

SOMResults_20000000_0-200000-Montage (more…)


Filtering collage components by area (large)

Posted: May 10, 2017 at 10:51 am

After looking at the previous results, I think the issue is that there is simply too much diversity in all 200,000 components to make an image with any degree of consistency. I’ve managed to implement code to filter image components based on pixel area. The following images and details are composed of the 5,000 and 10,000 largest components. Due to the large size of these components, these are full size (no scaling) and suitable for large-scale printing. I think the first image, with 5,000 components, is the most compelling. I will now look at making collages from the remaining smaller components, or a subset thereof.
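The area filter itself is simple; a sketch, assuming each component records its pixel dimensions (the real data structure may differ):

```python
def largest_components(components, n=5000):
    """Keep the n largest image components by pixel area.
    Assumes each component is a dict with 'width' and 'height' in pixels;
    this layout is an assumption for illustration."""
    return sorted(components,
                  key=lambda c: c["width"] * c["height"],
                  reverse=True)[:n]
```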

SOMResults_20000000_0-5000-Collage-1-16384 (more…)


50,000,000 Iterations

Posted: May 8, 2017 at 4:44 pm

The following shows the results of training over the weekend. It seems that with this many inputs (200,000) and the requirement for over-fitting (the number of neurons ~= the number of inputs), we need a lot of iterations. I think this is the most interesting so far, but I also had the idea to break the percepts into sets and make a different SOM for each set. This would make each one more unified (in terms of scale) and give them very different character.

SOMResults_50000000-Collage-1-4096 (more…)


5,000,000 Iteration Collage

Posted: May 5, 2017 at 5:25 pm

The following image is the result of a 5,000,000 iteration training run. Note the comparative lack of holes where no percepts are present. The more I look at these images, the more I think they would need to be shown not as a print but as a light-box. I wonder what the maximum contrast of a light-box would be… On the plus side, the collages seem to work best at a lower resolution (4096px square below) due to the small size of the percepts (extracted from a 1080p HD source); this would mean much smaller (27″ @ 150ppi, 14″ @ 300ppi) and more affordable light-boxes. I wonder how the collages using the 30,000,000 segments will compare, since they will not have soft edges and will have higher brightness and saturation. It will be a while before I get to those, since the code I’m using is quite slow to return segment positions (17 hours for 200,000 percepts) and is not currently scalable to the 30,000,000 segments.

SOMResults_5000000-Collage-1-4096 (more…)


Collages with pinned percepts.

Posted: May 5, 2017 at 12:14 pm

I have been working on getting large percepts to stick in the middle so they don’t push the outer edges too much. I attempted this by explicitly setting particular neurons in the middle of the SOM with features corresponding to the largest percepts. While this worked for a smaller number of training iterations (1000) it did not seem to make any difference over a large number of training iterations. The following images show the results where large percepts are scaled down to reduce the size variance. The lack of training leads to quite a few dead spots where no percepts are located. While quite dark, the black background works better for this content. I’ve included a visualization of the raw weights and a few details.
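The pinning idea can be sketched as follows (a guess at the mechanism: overwriting the weights of the most central neurons with the features of the largest percepts; as noted above, enough training iterations will eventually wash these values out again):

```python
import numpy as np

def pin_centre_neurons(weights, pinned_features):
    """Overwrite the neurons nearest the centre of a SOM weight grid with
    given feature vectors (e.g. the largest percepts), biasing training to
    place them centrally. A sketch of the idea only.

    weights: (H, W, F) grid; pinned_features: (n, F) with n <= H * W."""
    h, w, _ = weights.shape
    cy, cx = h // 2, w // 2
    # Order all grid cells by squared distance from the centre cell.
    ys, xs = np.mgrid[0:h, 0:w]
    order = np.argsort((ys - cy) ** 2 + (xs - cx) ** 2, axis=None)
    # Assign each pinned feature to the next-closest cell to the centre.
    for feat, idx in zip(pinned_features, order):
        weights[np.unravel_index(idx, (h, w))] = feat
    return weights
```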

SOMResults_1000-Collage-8-16384
(more…)


Early Collage Results!

Posted: April 29, 2017 at 3:58 pm

SOMResults_500000-Collage-8192

The image above shows some early results of organizing 200,000 percepts (the same vocabulary used in “Watching (Blade Runner)“) in a collage according to their similarity (according to width, height, and colour). I’ve included a few details below showing the fine structure of the composition. The image directly below shows a visualization of the SOM that determines the composition of the work. (more…)


Blade Runner Collages

Posted: April 25, 2017 at 3:50 pm

I had not started work on making large collages from Blade Runner clusters and segments since the residency. I ended up writing some code for my public art commission (“As our gaze peers off into the distance, imagination takes over reality…“, 2016) that arranged segments using a SOM. I did not end up using that approach in the final work, so I’m now adapting it to make collages from Blade Runner clusters and then segments.

The following image shows the colour values of each of the 200,000 clusters, in no particular order:

sampleVectorsColoursBGR

(more…)


Stall on ‘Dreaming’

Posted: April 25, 2017 at 3:05 pm

I’ve stalled on the ‘Dreaming’ side of the project for now, realizing that changes I made for ‘Watching’ significantly impact dreaming. With 200,000 percepts, each able to appear in multiple locations in every frame, the LSTM (prediction network) would have an input vector of 5.7 million elements (including a 19+8 position histogram for each frame). Too big for me to even build a model (at least on my hardware). I took the opportunity to rethink what I should do and came to the conclusion that I’ll need to recompute segments to downscale the LSTM input vector to something feasible. This will be about a month of computation time, so I’ve spent some time working on other projects, such as “Through the haze of a machine’s mind we may glimpse our collective imaginations (Blade Runner)”.


Toy Dreams

Posted: January 5, 2017 at 5:15 pm

I’ve been doing a lot of reading and tutorials to get a sense of what I need to do for the “Dreaming” side of this project. I initially planned to use TensorFlow directly, but found it too low-level and could not find enough examples, so I ended up using Keras. Performance should be very close, since I’m using TensorFlow as the back-end. I created the following simple toy sequence to play with:

toysequenceorig

(more…)


More frames from entire film (Repost)

Posted: November 27, 2016 at 10:57 am

I wanted to repost some of the frames from this post, now that I know how to remove the alpha channels so that they appear as they should (and do in the video versions):

store-0051118 (more…)


First “Imagery” frames!

Posted: November 19, 2016 at 11:52 am

imagery-double-selective-0018879

I’ve written some code that will be the interface between the data produced during “perception”, the future ML components, and the final rendering. One problem is that in perception, the clusters are rendered in the positions of the original segmented regions. This is not possible in dreaming, as the original samples are not accessible. My first approach is to calculate probabilities for the position of clusters for each frame in the ‘perceptual’ data, and then generate random positions using those probabilities to reconstruct the original image.

The results are surprisingly successful, and clearly more ephemeral and interpretive than perceptually generated images. One issue is that there are many clusters that appear in frames only once; this means that there is very little variation in their probability distributions. The image above is such a frame, where the bins of the probability distribution (19×8) are visible in the image. I can break this grid by generating more samples than there were in the original data; in the images in this post the number of samples is doubled. Due to this increase of samples, they break up the grid a little bit (see below). Of course, for clusters that appear only once in perception, this means they are doubled in the output, which can be seen in the images below. The following images show the original frame, the ‘perceptual’ reconstruction and the new imagery reconstruction from probabilities. The top set of images has very little repetition of clusters, and the bottom set has a lot. (more…)
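A sketch of this probability-based placement (the 19×8 bin counts match the post; the frame extent and the uniform jitter within bins are my assumptions):

```python
import numpy as np

def sample_positions(positions, n_samples, bins=(19, 8),
                     extent=(1920, 1080), seed=0):
    """Build a 2-D position histogram for one cluster and draw new
    positions from it. A sketch, not the project's implementation."""
    rng = np.random.default_rng(seed)
    xs, ys = np.asarray(positions, dtype=float).T
    hist, xedges, yedges = np.histogram2d(
        xs, ys, bins=bins, range=[[0, extent[0]], [0, extent[1]]])
    p = hist.ravel() / hist.sum()
    # Draw bins with probability proportional to observed counts...
    choices = rng.choice(len(p), size=n_samples, p=p)
    bx, by = np.unravel_index(choices, hist.shape)
    # ...then jitter uniformly within each chosen bin, which is what
    # softens the visible grid of the raw distribution.
    x = xedges[bx] + rng.random(n_samples) * (xedges[1] - xedges[0])
    y = yedges[by] + rng.random(n_samples) * (yedges[1] - yedges[0])
    return np.stack([x, y], axis=1)
```

Drawing more samples than the cluster originally had (as described above) is just a matter of passing a larger `n_samples`.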


Watching (Blade Runner) Complete!

Posted: October 18, 2016 at 5:48 pm

After spending some time tweaking the audio, I’ve finally processed the entire film, both visually and aurally. At this point the first half of the project “Watching and Dreaming (Blade Runner)” is complete, and the next step is the “Dreaming” part, which involves using the same visual and audio vocabulary where a sequence is generated through feedback within a predictive model trained on the original sequence. Following is an image and an excerpt of the final sequence.

store-0061585 (more…)


On site installation!

Posted: October 13, 2016 at 12:06 pm

installation-a installation-b


Video Version

Posted: September 1, 2016 at 5:22 pm

As previously discussed, I found there was too much variation in the process that is not manifest in the final still image. I’ve thus made a video loop that shows the gradual deconstruction of the photographic image.