Revisiting Painting #22 with Epoch Training

Posted: June 29, 2020 at 4:15 pm


Modifying Features for Extreme Offsets.

Posted: June 29, 2020 at 4:12 pm

As each composition uses 5 layers, I wanted to create the illusion of less density without changing the number of parameters. To do this, I allow for the possibility of offsets where each layer slides completely out of view, making it invisible. This allows for compositions of only the background colour, as well as simplified compositions where only a few layers are visible.

The problem with this, from an ML perspective, is that the parameters of invisible layers are still present in the training data; the training data represents the instructions for making the image, not the image itself, so it holds the features of a layer even when that layer has no visual effect. I thought I would run another hyperparameter search where I zero out all the parameters for layers that are not visible. I reran an older experiment to test against, and the results are promising.
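As a rough illustration, here is a minimal sketch of that zeroing step, assuming each training sample is a flat vector of per-layer parameters and that a layer is invisible when its offset magnitude exceeds some threshold. The layout constants and names are hypothetical, not the actual feature format.

```python
import numpy as np

# Hypothetical layout: 5 layers, each contributing PARAMS_PER_LAYER
# consecutive values to the sample vector; one of them (OFFSET_IDX)
# is the layer's offset.
N_LAYERS = 5
PARAMS_PER_LAYER = 8      # assumption: the actual parameter count differs
OFFSET_IDX = 0            # assumption: offset is the first parameter
OFFSET_VISIBLE_MAX = 1.0  # assumption: beyond this, the layer is off-screen

def zero_invisible_layers(X):
    """Zero out all parameters of layers that have slid out of view."""
    X = X.copy()
    for i in range(N_LAYERS):
        start = i * PARAMS_PER_LAYER
        invisible = np.abs(X[:, start + OFFSET_IDX]) > OFFSET_VISIBLE_MAX
        X[invisible, start:start + PARAMS_PER_LAYER] = 0.0
    return X
```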


Revisiting Painting #19 with Epoch Training

Posted: June 24, 2020 at 6:48 pm


Revisiting Painting #9 with Epoch Training

Posted: June 22, 2020 at 9:25 pm


Classifying Using Only Twitter Data as Label with Incidental Filtering and Class Balancing.

Posted: June 19, 2020 at 11:50 am

For this experiment I used the Twitter data (likes and retweets) alone to generate labels, where ‘good’ compositions have at least 1 like or retweet. Relatively few compositions receive any likes or retweets (presumably due to upload timing and the Twitter algorithm), so I randomly sample the ‘bad’ compositions to balance the classes, leading to 197 ‘good’ and 197 ‘bad’ samples. The best model achieves an accuracy of 76.5% on the validation set and 56.6% on the test set, with f1-scores of 75% (bad) and 78% (good) for the validation set and 55% (bad) and 58% (good) for the test set. The following image shows the confusion matrix for the test set. The performance on the validation set is very good, but it does not generalize to the test set, likely because there is just too little data here to work with.
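For reference, a minimal sketch of the labeling and balancing, assuming features in an array X and per-composition like/retweet counts; the function and variable names are mine, not the actual pipeline's.

```python
import numpy as np

def twitter_labels_balanced(X, likes, retweets, seed=0):
    """Label 'good' = at least 1 like or retweet, then randomly
    undersample the 'bad' class so both classes are the same size."""
    y = ((likes + retweets) >= 1).astype(int)
    rng = np.random.default_rng(seed)
    good = np.flatnonzero(y == 1)                    # 197 samples here
    bad = rng.choice(np.flatnonzero(y == 0),
                     size=len(good), replace=False)  # 197 sampled 'bad'
    keep = np.concatenate([good, bad])
    rng.shuffle(keep)
    return X[keep], y[keep]
```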

I was just thinking about this separation of likes from attention and realized that since compositions with little attention don't get uploaded to Twitter, they certainly have no likes; if I'm using the Twitter data without attention to generate labels, I should only be comparing compositions that have actually been uploaded to Twitter. The set used in this experiment contains 320 uploaded compositions and 74 compositions that were not uploaded. I don't think it makes sense to redo the experiment with only the uploaded compositions, though, because there are just too few samples to make any progress at this time.

In this data-set 755 compositions were uploaded and 197 received likes or retweets. For the data collection in progress as of last night, 172 compositions have been uploaded and 86 have received likes or retweets. So it's going to be quite the wait until this test collects enough data to move the ML side of the project forward.


Classifying Using Only Attention as Label with Incidental Filtering

Posted: June 18, 2020 at 7:11 pm

The results from my second attempt, using the attention alone to determine the label and filtering out samples with attention < 6, are in! This unbalanced data-set has much higher validation (74.2%) and test (66.5%) accuracies. The f1 scores achieved by the best model are much better also: for the validation set, 36% (bad) and 84% (good); for the test set, 27% (bad) and 78% (good). As this data-set is quite unbalanced and the aim is to predict ‘good’ compositions, not ‘bad’ ones, I think these results are promising. I chose not to balance the classes for this one because true positives are more important than true negatives, so throwing away ‘good’ samples does not make sense.

It is unclear whether this improvement is due to fewer bad samples, or whether the samples with attention < 6 are noise without aesthetic meaning. The test confusion matrix is below, and shows how rarely predictions of ‘bad’ compositions are made, as well as a higher number of ‘bad’ compositions predicted to be ‘good’.


Classifying Using Only Attention as Label.

Posted: June 18, 2020 at 4:39 pm

Following from my previous ML post, I ran a hyperparameter-search experiment using only the attention data, ignoring the Twitter data for now. The results are surprisingly poor, with the best model achieving no better than chance accuracy and f1 scores on the test set! For the validation set, the best model achieved an accuracy of 65%. The following image shows the confusion matrix for the test set:

The f1 scores show that this model is equally poor at predicting good and bad classes: The f1 score for the validation set was 67% for bad classes and 62% for good. In the test set the f1 scores are very poor at 55% for the bad class and 45% for the good class.

As I mentioned in the previous post, I think a lot of noise is added by incidental interactions, where someone walks by without actually attending to the composition. Watching behaviour around the system, I've determined that attention values below 6 are very likely to be incidental. I'm now running a second experiment using the same setup as this one, except that these low-attention samples are removed. Of course this unbalances the data-set, in this case in favour of the ‘good’ compositions (754) compared to the ‘bad’ compositions (339). As there is so little data here, I'm not going to do more filtering of ‘good’ results to balance classes. After that I'll repeat these experiments with the Twitter data and see where this leaves things.
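A sketch of that filtering step, under the assumption that the ‘good’ label still comes from the same attention threshold used in the integrated test (> 50 frames); names are illustrative.

```python
import numpy as np

ATTENTION_INCIDENTAL = 6  # below this, attention is likely incidental
ATTENTION_GOOD = 50       # assumption: same 'good' threshold as before

def filter_and_label(X, attention):
    """Drop likely-incidental samples, then label from attention alone."""
    keep = attention >= ATTENTION_INCIDENTAL
    X, attention = X[keep], attention[keep]
    y = (attention > ATTENTION_GOOD).astype(int)  # 1 = 'good', 0 = 'bad'
    return X, y
```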


Revisiting Painting #5 with Epoch Training

Posted: June 18, 2020 at 10:47 am


Revisiting Painting #4 with Epoch Training

Posted: June 14, 2020 at 9:08 am


Returning to Machine Learning with Twitter Data

Posted: June 10, 2020 at 3:31 pm

Now that I have the system running, uploading to Twitter, and collecting a pretty good amount of data, I've done some early ML work using this new data set! I spent a week looking at framing this as a regression task (predicting scores) vs a classification task (predicting “good” or “bad” classes). The regression was not working well at all, and it was also impossible to compare its results with previous classification work, so I abandoned it. I've returned to framing this as a classification problem and have run a few parameter searches.


Pausing the Zombie Formalist: Stripes Fixed!

Posted: June 4, 2020 at 10:43 am

The Zombie Formalist is taking a break from posting compositions to Twitter to create space for, amplify, and be in solidarity with Black and Indigenous people facing death, violence, and harassment as facilitated by white colonial systems.

I took this pause in generation to tweak the code that generates stripes. Now the offsets don't cut off the stripes, because the code uses the frequency to determine appropriate places to cut (troughs). The following image shows a random selection of images using the new code. This change replaced a lot of work-around code (blurring, padding, etc.) and opened up aesthetic variation that was not previously possible.
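I won't reproduce the actual stripe code here, but the idea of cutting at troughs can be sketched like this, assuming a sinusoidal stripe pattern of a given frequency:

```python
import numpy as np

def snap_offset_to_trough(offset, frequency):
    """Snap a raw offset to the nearest trough of a sinusoidal stripe
    pattern sin(2*pi*frequency*x), so a layer edge never cuts a stripe
    mid-peak. Troughs fall at x = (0.75 + k) / frequency, integer k."""
    period = 1.0 / frequency
    first_trough = 0.75 * period
    k = np.round((offset - first_trough) / period)
    return first_trough + k * period
```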


Revisiting Painting #3 with Epoch Training

Posted: May 18, 2020 at 7:45 am

All of these results are looking consistently better so I think I’m just going to post the new progress (on top) and the previous best result (below) for comparison from now on.


Revisiting Painting #2 with Epoch Training

Posted: May 13, 2020 at 10:28 am

I’m quite happy with the results of the epoch training compared to the previous results! My favourite of the latest selection is the large image below. Under it is the previous best result on the left, with another exploration using epoch training on the right. The top image is structurally equivalent to the previous results but without the artifacts and with greater smoothness, which has been the case for all the epoch-training explorations.


#1 Final Refinements

Posted: May 11, 2020 at 8:18 am

I’m torn between these two options. While the top is less organized (and thus more resembles the original), its structure is less central, and the shift of the bright area from the chest to the top of the head is quite nice.


Revisiting #1 Appropriation Using Epoch Training.

Posted: May 1, 2020 at 10:47 am

Following from the previous post, I ran a test with a different training procedure. Previously I had been doing the canonical SOM training, where the neighbourhood starts large and shrinks monotonically over time. For the videos, I want the degree of reorganization to increase over time, so I train over a number of epochs where the starting neighbourhood size for each epoch increases over time; within each epoch, the neighbourhood still shrinks for each training sample. In this test, with results pictured in the large image below along with previous results underneath, I do multiple epochs where the maximum neighbourhood stays the same for every epoch.
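The two schedules can be sketched as follows, with sigma standing in for the neighbourhood size; the linear shapes and ranges are assumptions, not the actual code.

```python
import numpy as np

def epoch_schedule(n_epochs, steps_per_epoch, sigma_min, sigma_max,
                   increasing=True):
    """Per-step neighbourhood sizes across all epochs.

    Every epoch shrinks from its own starting size down to sigma_min.
    increasing=True: each epoch's starting size grows over epochs
    (the video procedure); increasing=False: every epoch starts from
    the same maximum (the variation tested in this post)."""
    steps = []
    for e in range(n_epochs):
        if increasing:
            frac = e / max(n_epochs - 1, 1)
            start = sigma_min + frac * (sigma_max - sigma_min)
        else:
            start = sigma_max
        steps.append(np.linspace(start, sigma_min, steps_per_epoch))
    return np.concatenate(steps)
```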


Stills vs Videos: Painting Appropriation

Posted: April 1, 2020 at 11:46 am

After doing a little work on the painting appropriation videos, I’m realizing that the very soft boundaries that I’ve been after for the stills just happen in the videos, “for free”. The gallery below shows the video approach (right) next to the finalized print version (left). Note the lack of reorganization (areas of contrasting colour) in the still versions; e.g. the green and purple in the upper right quadrant of the top left, next to the bright blob in the centre.


Painting #1 Appropriation Video

Posted: March 31, 2020 at 2:04 pm


Painting #1 Appropriation Video Work in Progress

Posted: March 25, 2020 at 11:45 am

Now that the final selection of paintings has been made I’ve been able to start working on the video works. These are videos that show the deconstruction (abstraction) of paintings by the machine learning algorithm. Pixels are increasingly reorganized according to their similarity over time. The top gallery shows my finalized print (left) along with a few explorations at HD resolution that approximate it. These are “sketches” of the final frames of the video.

The image below shows the actual final frame of the video. As each frame is the result of an epoch with a different neighbourhood size (which determines the degree of abstraction / reorganization), from smallest (least abstract) to largest (most abstract), the final structure is more spatially similar to the original because there is no initial disruption due to large initial neighbourhood sizes.

I think I can get around this by training for more iterations, as the larger neighbourhoods will have a greater effect with more iterations. The question is whether I should continue with the same neighbourhood size (168) used to generate the sketches above, or continue the rate of increase from the first set of frames (2168 in 2675 steps). The latter seems most consistent with the rest of the training process, so I should go with that. I just need to change the code to allow “resuming” a sequence by starting with a frame part way through. Luckily, I saved the weights of the network for each frame, so that is possible without losing precision.
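Resuming could look something like the sketch below, assuming the per-frame weights were saved as .npy files and reading “2168 in 2675 steps” as a per-step rate of increase; train_epoch and render_frame are hypothetical stand-ins for the actual SOM code.

```python
import numpy as np

GROWTH_PER_STEP = 2168 / 2675  # rate of increase from the first frames

def resume_sequence(last_frame, last_sigma, n_more_frames):
    """Resume a video sequence from the weights saved for a frame
    part way through, continuing the same neighbourhood growth rate."""
    weights = np.load(f"weights_frame_{last_frame:04d}.npy")
    sigma = last_sigma
    for frame in range(last_frame + 1, last_frame + 1 + n_more_frames):
        sigma += GROWTH_PER_STEP
        weights = train_epoch(weights, sigma)  # hypothetical SOM trainer
        np.save(f"weights_frame_{frame:04d}.npy", weights)
        render_frame(weights, frame)           # hypothetical renderer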

A plus of this video approach is that the images are far smoother than they are as stills, which makes me wonder whether ruled-out paintings would actually make strong videos.


Society6 Shop Opens!

Posted: March 25, 2020 at 8:11 am

13 Zombie Formalist compositions are now available on many products at society6!


Short-List Selection of Appropriated Paintings

Posted: March 17, 2020 at 11:05 am

The following gallery shows the short-list of paintings that have been selected for printing. #19 is an edge case and may not be printed depending on the printing costs. The idea is to inkjet print on stretched canvas where the canvas heights match the height of the Zombie Formalist.

I’ve also included a couple of mock-ups, with ZF-matched frames and without. I’m not sure about the depth of the frames / stretchers for these; should they be on 3 in stretchers so that they stick out from the wall about the same distance as the Zombie Formalist (which will have about a 4 in depth, though that could be squeezed closer to 3 in)? I think the lack of contrast with the black frames is not great, so a little white matte seems to be the strongest choice while emphasizing consistency. A little white matte could also be easy to add to the ZF (bottom mock-up).

The next stage is to get the source paintings down to 4K or HD resolution and work towards the videos of the learning process. This will be interesting because there is enough emergence in these systems that even changing the source resolution can change the results significantly. Thus the videos will not match the prints. Also, spending 10 hours per frame is impractical. Instead, I’ll be using the previous frame (with a smaller max neighbourhood size) as the source for the current frame. I’ll be training and retraining (with a much smaller number of iterations per frame) where the final frame will be the aggregation of many training epochs. This causes even more unpredictability and emergence in the structure of the final frames.
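That frame-to-frame loop could be sketched as below; load_image, train_som, and save_image are hypothetical stand-ins for the actual code, and the iteration count is a placeholder.

```python
# Each frame is trained for only a few iterations on the *previous*
# frame's output rather than the source painting, so the final frame
# aggregates many short training epochs.
frame = load_image("painting_hd.png")            # hypothetical loader
for i, sigma in enumerate(neighbourhood_sizes):  # growing per-frame size
    frame = train_som(frame, sigma, iterations=500)  # placeholder count
    save_image(frame, f"frame_{i:04d}.png")      # hypothetical saver
```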


#22 Finalization

Posted: March 12, 2020 at 11:46 am

After quite a few explorations of #22, I was not able to make anything more interesting than the previous runs. I’ve included the best previous result on the left below and the new best result on the right; these look very much equivalent to me, and I don’t see any improvement in smoothness (despite the 4x increase in training iterations).

The following images are the remainder of the explorations. This ends my tweaking of these images; I’ll now rank the strongest results for final printing. One difficulty is that the number of prints is unclear because the budget is a little up in the air since I don’t know how much my Zombie Formalist fabrication costs will be (as my quoted fabricator pulled out and I’ve yet to find a replacement).


#4 Finalization

Posted: March 3, 2020 at 1:10 pm

I meant to post these earlier, but it turns out I forgot to! I ended up with one strong result (at the top) and a few explorations (below). The top image still has a bit of a “black hole” in the centre, but I’ve included it in the long-list for final selection.


#5 Finalization Revisit

Posted: February 29, 2020 at 9:34 pm

While working on finalizing #7, I realized that the code was not using the Gaussian neighbourhood function I had previously been using, so I redid #5. The best result (top left) is quite similar to the previous result (top right), but a little less smooth. I think the top left is a strong result. I’ve also included the other explorations using the appropriate Gaussian function in the gallery below. The training process with the Gaussian function is slower (since the fall-off of the learning effect is quite steep).
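For reference, this is just the standard SOM Gaussian neighbourhood, which explains the slower training: the learning effect falls off steeply with distance from the best-matching unit.

```python
import numpy as np

def gaussian_neighbourhood(dist, sigma):
    """Standard SOM Gaussian neighbourhood: learning effect decays
    steeply with distance from the best-matching unit, unlike a flat
    (bubble) neighbourhood of the same size."""
    return np.exp(-dist**2 / (2.0 * sigma**2))
```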


Zombie Formalist Images on Meural!

Posted: February 25, 2020 at 1:02 pm

Early in 2019 I was invited by the Lumen Prize to submit some new work to be available on the Meural digital art frame. I submitted some early sketches for the Zombie Formalist but did not hear back. In Googling the Zombie Formalist (as one does) I found that my submission was accepted and is available here for Meural!


Twitter Response to “Bad” Compositions.

Posted: February 25, 2020 at 12:48 pm

I uploaded a random sampling of 108 “bad” compositions to Twitter, following the “good” compositions from this post and using the same A-HOG data set. The “bad” set has a marginally lower mean number of likes (0.52), but more than double the mean retweets (0.44). The total number of likes for the “bad” set was 56 (compared to 68 for the “good” set); the total number of retweets for the “bad” set was 48 (compared to only 19 for the “good” set). Of course, an uncontrolled variable is the size of the growing Twitter audience for the Zombie Formalist. Following is a plot analogous to this post. I’ve also included the compositions from this set with the most likes and retweets (corresponding to the 5 peaks below).


Ruling out #7

Posted: February 23, 2020 at 12:43 pm

After working through a few variations, see below, I was unable to get #7 to look smooth; the ‘camo’ aesthetic persists, even with much smaller learning rates and more iterations. I’ve decided to remove this from the running for the final selections.

I’ve also included an interesting error here for future reference, shown below. This occurred when I used a learning rate of 2 (where the max should be 1), which caused the neighbourhood function to wrap around in the middle of each neighbourhood. This causes an interesting aesthetic that reminds me of photography through water droplets on glass where spots of focus (lack of re-organization) punctuate areas of order (re-organization) due to lensing effects.


Initial Twitter Response to “Good” Compositions.

Posted: February 11, 2020 at 4:06 pm

I uploaded 110 “good” compositions to Twitter; “Good” was defined by thresholding (> 50) the attention (number of frames where faces are detected) for each composition generated in the last (A-HOG) integrated test. The max number of likes was 6 and the max retweets 2. The mean likes was 0.62 and the mean retweets was 0.17. The following plot shows the likes (red), retweets (green) and their sum (blue) on the y axis for each composition (x axis). The peaks in the sum indicate one very successful composition (6 likes + 2 retweets) and 5 quite successful compositions. These compositions are included in the gallery below.
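The plot itself is straightforward to reproduce; a matplotlib sketch with placeholder arrays (the real data are the per-composition counts pulled from Twitter):

```python
import numpy as np
import matplotlib.pyplot as plt

likes = np.zeros(110)     # placeholder: per-composition like counts
retweets = np.zeros(110)  # placeholder: per-composition retweet counts
x = np.arange(len(likes))

plt.plot(x, likes, "r", label="likes")
plt.plot(x, retweets, "g", label="retweets")
plt.plot(x, likes + retweets, "b", label="sum")
plt.xlabel("composition")
plt.ylabel("count")
plt.legend()
plt.show()
```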


#5 Finalization

Posted: February 11, 2020 at 8:18 am

The image on the top is the final selection and the best of these final explorations; the gallery below shows the rest. I went through a few more iterations than I was expecting to; I’ve learned a lot through the process of going through the painting long-list, so spending more time on these early entries makes sense.


Twitter API Progress

Posted: January 31, 2020 at 2:53 pm

After creating my @AutoArtMachine Twitter account, I’ve been (manually) constructing a brand identity and profile, as well as following and collecting followers. At the same time, I’ve been looking into how to do the Twitter automation for the Zombie Formalist.

My first attempt to apply for a Twitter developer account failed (as I was considering automating following, liking, etc.); this is not encouraged by Twitter, so I’ve shifted my intention such that the Zombie Formalist will only post compositions and retrieve the number of “likes” for those tweets. Based on this revised use case, my developer account was accepted.

This morning I used my new Twitter developer account to generate access keys for the API and successfully ran some sample twitcurl code in C++! I only got as far as logging into Twitter and getting followers’ IDs, but it is working. One concern is that twitcurl does not appear to be maintained, and I was not sure the API was even going to work; so far it does. One issue is that this version of the library does not support uploading media, but I found this fork that does and will try getting it to work. There is very little out there on interfacing Twitter and C++. If I get stuck, I’ll need to switch to Python and figure out how to run Python code inside C++.


Results of Integrated Test Using New Face Detector

Posted: January 24, 2020 at 5:53 pm

So I ran a test over a few days with the new HOG face detector (from dlib) to see how it worked in terms of visual attention, and whether attention is a valid proxy for aesthetic value. The results seem quite good, both in terms of the response of the system to attention and in terms of attention as a proxy for value (albeit in the contrived context of my own home as the test site). The following images show the “good” (top) compositions, with over 50 frames of attention, and the “bad” (bottom) compositions, with under 50 frames of attention.
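The attention measure can be sketched with dlib’s Python bindings (the production code presumably differs, but the HOG-based detector is the same one):

```python
import dlib

detector = dlib.get_frontal_face_detector()  # dlib's HOG-based detector

def attention_count(frames):
    """Count frames (8-bit grayscale or RGB arrays) in which at least
    one face is detected; this per-composition count is the 'attention'
    score thresholded above (> 50 frames = 'good')."""
    return sum(1 for f in frames if len(detector(f)) > 0)
```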
