Painting #1 Appropriation Video Work in Progress

Posted: March 25, 2020 at 11:45 am

Now that the final selection of paintings has been made, I’ve been able to start working on the video works. These are videos that show the deconstruction (abstraction) of paintings by the machine learning algorithm: pixels are increasingly reorganized according to their similarity over time. The top gallery shows my finalized print (left) along with a few explorations at HD resolution that approximate it. These are “sketches” of the final frame of the video.
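The post doesn’t name the algorithm, but the vocabulary used below (epochs, neighbourhood sizes, saved network weights) suggests a self-organizing-map-style process. Purely as a hypothetical sketch of that kind of pixel reorganization, assuming numpy and an RGB image held as a float array, one epoch could look roughly like this (function and parameter names are mine, not from the post):

```python
import numpy as np

def som_epoch(grid, samples, sigma, lr=0.1, rng=None):
    """One epoch of SOM-style reorganization: for each sample colour, find the
    most similar pixel and pull its neighbourhood (radius ~ sigma) toward it."""
    rng = rng or np.random.default_rng()
    h, w, _ = grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for colour in rng.permutation(samples):
        # best-matching unit: the pixel closest in colour to the sample
        d = np.sum((grid - colour) ** 2, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood around it; larger sigma = more reorganization
        g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        grid += lr * g[..., None] * (colour - grid)
    return grid
```

With the grid initialized to the painting itself and the samples drawn from its own pixels, repeated epochs gradually cluster similar colours together, which is the kind of abstraction described above.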

The image below shows the actual final frame of the video. Each frame is the result of an epoch with a different neighbourhood size (which determines the degree of abstraction / reorganization), running from smallest (least abstract) to largest (most abstract). Because there is no early disruption from large initial neighbourhood sizes, the final structure stays more spatially similar to the original painting than the sketches above.
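As an illustration only, a per-frame schedule like that could be as simple as a linear ramp; the post only says the neighbourhood grows from smallest to largest across frames, and the maximum here is just the 168 value mentioned below:

```python
def neighbourhood_for_frame(frame, n_frames, sigma_min=1.0, sigma_max=168.0):
    """Neighbourhood size for a given frame, ramping linearly from
    sigma_min (least abstract) to sigma_max (most abstract)."""
    t = frame / max(n_frames - 1, 1)
    return sigma_min + t * (sigma_max - sigma_min)
```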

I think I can get around this by training for more iterations, as the larger neighbourhoods will have a greater effect with more iterations. The question is whether I should continue with the same neighbourhood size (168) used to generate the sketches above, or continue the rate of increase from the first set of frames (2168 in 2675 steps). The latter seems the most consistent with the rest of the training process, so I should go with that. I just need to change the code to allow “resuming” a sequence by starting from a frame partway through. Luckily, I saved the weights of the network for each frame, so that is possible without losing precision.
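A minimal sketch of that resume step, reusing the hypothetical som_epoch above and assuming the per-frame weights were saved as numpy arrays (the file naming and rate parameter are mine):

```python
import numpy as np

def resume(frames_done, extra_frames, rate_per_frame, sigma_last,
           samples, weights_path="frame_weights_{:04d}.npy"):
    """Continue a frame sequence from the last saved weights, keeping the
    neighbourhood growing at the same per-frame rate as before."""
    grid = np.load(weights_path.format(frames_done - 1))  # last saved weights
    sigma = sigma_last
    for i in range(extra_frames):
        sigma += rate_per_frame                 # continue the same rate of increase
        grid = som_epoch(grid, samples, sigma)  # one epoch = one new frame
        np.save(weights_path.format(frames_done + i), grid)
    return grid
```

Because the saved weights are the exact state of the network at each frame, restarting from them continues the sequence without any loss of precision.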

A plus of this video approach is that the images are far smoother than they are as stills, which makes me wonder whether paintings I ruled out would actually make strong videos.