Returning to Machine Learning with Twitter Data

Now that the system is running, uploading to Twitter, and has collected a good amount of data, I’ve done some early ML work using this new data set! I spent a week looking at framing this as a regression task (predicting scores) vs a classification task (predicting “good” or “bad” classes). The regression was not working well at all, and it was also impossible to compare its results with previous classification work, so I abandoned it. I’ve returned to framing this as a classification problem and run a few parameter searches.


Pausing the Zombie Formalist: Stripes Fixed!

The Zombie Formalist is taking a break from posting compositions to Twitter to create space for, amplify, and be in solidarity with Black and Indigenous people facing death, violence, and harassment as facilitated by white colonial systems.

I took this pause in generation to tweak the code that generates stripes. Now the offsets don’t cut off the stripes, because the code uses the frequency to determine appropriate places to cut (troughs). The following image shows a random selection of images using the new code. This change replaced a lot of work-around code (blurring, padding, etc.) and opened up aesthetic variation that was not previously possible.
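As a rough illustration of the idea (hypothetical names; not the actual generator code), the sketch below snaps a desired offset to the nearest trough of a sinusoidal stripe pattern, so that cuts land between stripes rather than across them:

```cpp
#include <cmath>

// Hypothetical sketch: snap a desired horizontal offset to the nearest
// trough of a sinusoidal stripe pattern so that cuts land between stripes.
// 'frequency' is in cycles per unit width; 'phase' is in radians.
double nearestTrough(double desiredOffset, double frequency, double phase) {
    const double pi = std::acos(-1.0);
    // Troughs of sin(2*pi*f*x + phase) occur where the argument is 3*pi/2 + 2*pi*k.
    double k = std::round((2.0 * pi * frequency * desiredOffset + phase - 1.5 * pi) / (2.0 * pi));
    return (1.5 * pi + 2.0 * pi * k - phase) / (2.0 * pi * frequency);
}
```

Because the cut points are derived from the frequency and phase themselves, no blurring or padding is needed to hide seams.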

Revisiting Painting #2 with Epoch Training

I’m quite happy with the results of applying epoch training to the previous results! My favourite of the latest selection is the large image below. Under it is the previous best result on the left, with another epoch-training exploration on the right. The top image is structurally equivalent to the previous results, except without the artifacts and with greater smoothness, which has been the case for all the epoch-training explorations.

Revisiting #1 Appropriation Using Epoch Training

Following from the previous post, I ran a test with a different training procedure. Previously I had been doing the canonical SOM training, where the neighbourhood starts large and shrinks monotonically over time. For the videos, I want the degree of reorganization to increase over time, so I train over a number of epochs where the starting neighbourhood size for each epoch increases over time; within each epoch, that maximum neighbourhood size still shrinks with each training sample. In this test (results pictured in the large image below, with previous results underneath), I run multiple epochs where the maximum neighbourhood size stays the same for every epoch.
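For clarity, here is a minimal sketch of the three neighbourhood schedules involved (function names and decay curves are illustrative; the actual code may differ):

```cpp
#include <cmath>

// Canonical SOM schedule: one long run in which the neighbourhood radius
// shrinks monotonically from sigma0 down to ~1 by the final step.
double canonicalSigma(int step, int totalSteps, double sigma0) {
    return sigma0 * std::exp(-(static_cast<double>(step) / totalSteps) * std::log(sigma0));
}

// Per-epoch variant used for the videos: the starting (maximum) neighbourhood
// grows with each epoch, so reorganization increases over time.
double growingEpochMax(int epoch, int numEpochs, double finalMax) {
    return finalMax * static_cast<double>(epoch + 1) / numEpochs;
}

// Per-epoch variant in this test: the starting neighbourhood is the same
// for every epoch (it still shrinks within each epoch, as above).
double constantEpochMax(double fixedMax) {
    return fixedMax;
}
```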


Stills vs Videos: Painting Appropriation

After doing a little work on the painting appropriation videos, I’m realizing that the very soft boundaries that I’ve been after for the stills just happen in the videos, “for free”. The gallery below shows the video approach (right) next to the finalized print version (left). Note the lack of reorganization (areas of contrasting colour) in the still versions; e.g. the green and purple in the upper-right quadrant of the top-left image, next to the bright blob in the centre.


Painting #1 Appropriation Video Work in Progress

Now that the final selection of paintings has been made I’ve been able to start working on the video works. These are videos that show the deconstruction (abstraction) of paintings by the machine learning algorithm. Pixels are increasingly reorganized according to their similarity over time. The top gallery shows my finalized print (left) along with a few explorations at HD resolution that approximate it. These are “sketches” of the final frames of the video.

The image below shows the actual final frame of the video. As each frame is the result of an epoch with a different neighbourhood size (which determines the degree of abstraction / reorganization), from smallest (least abstract) to largest (most abstract), the final structure is more spatially similar to the original because there is no initial disruption due to large initial neighbourhood sizes.

I think I can get around this by training for more iterations, as the larger neighbourhoods will have a greater effect with more iterations. The question is whether I should continue with the same neighbourhood size (168) used to generate the sketches above, or continue the rate of increase from the first set of frames (2168 in 2675 steps). The latter seems most consistent with the rest of the training process, so I should go with that. I just need to change the code to allow “resuming” a sequence by starting with a frame part-way through. Luckily, I saved the weights of the network for each frame, so that is possible without losing precision.
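Since the per-frame weights are already on disk, resuming should just be a matter of loading them and continuing the schedule. A minimal sketch, assuming the weights are stored as a flat binary array of floats (the actual format may differ):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Load saved SOM weights for a given frame so training can resume from it.
// Assumes the weights were written as a flat array of floats; this is an
// illustrative format, not necessarily the one the project uses.
bool loadWeights(const std::string& path, std::vector<float>& weights) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return false;
    std::streamsize bytes = in.tellg();      // file size in bytes
    in.seekg(0, std::ios::beg);
    weights.resize(static_cast<size_t>(bytes) / sizeof(float));
    return static_cast<bool>(in.read(reinterpret_cast<char*>(weights.data()), bytes));
}
```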

A plus of this video approach is that the images are far smoother than they are as stills, which makes me wonder whether ruled-out paintings would actually make strong videos.

Short-List Selection of Appropriated Paintings

The following gallery shows the short-list of paintings that have been selected for printing. #19 is an edge case and may not be printed depending on the printing costs. The idea is to inkjet print on stretched canvas where the canvas heights match the height of the Zombie Formalist.

I’ve also included a couple of mock-ups, with ZF-matched frames and without. I’m not sure about the depth of the frames / stretchers for these; should they be on 3 in stretchers so that they stick out from the wall about the same distance as the Zombie Formalist (which will have about a 4 in depth, though that could be squeezed closer to 3 in)? I think the lack of contrast with the black frames is not great, so a little white matte seems to be the strongest choice while emphasizing consistency. A little white matte would also be easy to add to the ZF (bottom mock-up).

The next stage is to get the source paintings down to 4K or HD resolution and work towards the videos of the learning process. This will be interesting because there is enough emergence in these systems that even changing the source resolution can change the results significantly; thus the videos will not match the prints. Also, spending 10 hours per frame is impractical. Instead, I’ll be using the previous frame (with a smaller max neighbourhood size) as the source for the current frame, training and retraining with a much smaller number of iterations per frame, so that the final frame is the aggregation of many training epochs. This causes even more unpredictability and emergence in the structure of the final frames.
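A minimal sketch of this chaining idea (all names are hypothetical; trainEpoch stands in for one short SOM training pass on an image):

```cpp
#include <functional>
#include <vector>

// A Frame stands in for an image here; in practice it would be pixel data.
using Frame = std::vector<float>;

// Produce a frame sequence by briefly retraining on the previous frame's
// output, with the maximum neighbourhood size growing per frame. The final
// frame is therefore the aggregation of many short training epochs.
std::vector<Frame> chainFrames(const Frame& source, int numFrames, double finalMaxSigma,
                               const std::function<Frame(const Frame&, double)>& trainEpoch) {
    std::vector<Frame> frames;
    Frame previous = source;
    for (int i = 0; i < numFrames; ++i) {
        double maxSigma = finalMaxSigma * (i + 1) / numFrames; // growing neighbourhood
        previous = trainEpoch(previous, maxSigma);             // few iterations per frame
        frames.push_back(previous);                            // becomes next frame's source
    }
    return frames;
}
```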

#22 Finalization

After quite a few explorations of #22, I was not able to make anything more interesting than the previous runs. I’ve included the best previous result on the left below and the new best result on the right; these look very much equivalent to me, and I don’t see any improvement in smoothness (despite the 4× increase in training iterations).

The following images are the remainder of the explorations. This ends my tweaking of these images; I’ll now rank the strongest results for final printing. One difficulty is that the number of prints is unclear: the budget is up in the air because I don’t know what my Zombie Formalist fabrication costs will be (my quoted fabricator pulled out and I’ve yet to find a replacement).

#5 Finalization Revisit

While working on finalizing #7, I realized that the code was not using the Gaussian neighbourhood function I had previously been using, so I redid #5. The best result (top left) is quite similar to the previous result (top right), but a little less smooth. I think the top left is a strong result. I’ve also included the other explorations using the appropriate Gaussian function in the gallery below. The training process with the Gaussian function is slower (since the fall-off of the learning effect is quite steep).
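For reference, the Gaussian neighbourhood function in question is the standard one used in SOM training; a sketch with illustrative parameter names:

```cpp
#include <cmath>

// Gaussian neighbourhood function commonly used in SOM training.
// 'dist' is the grid distance between a node and the best-matching unit;
// 'sigma' is the neighbourhood radius. The steep fall-off is why training
// is slower: nodes beyond roughly 2*sigma receive almost no learning effect.
double gaussianNeighbourhood(double dist, double sigma) {
    return std::exp(-(dist * dist) / (2.0 * sigma * sigma));
}
```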

Twitter Response to “Bad” Compositions

I uploaded a random sampling of 108 “bad” compositions to Twitter, following the “good” compositions from this post and using the same A-HOG data set. The “bad” set has a marginally lower mean number of likes (0.52), but more than double the mean number of retweets (0.44). The total number of likes for the “bad” set was 56 (compared to 68 for the “good” set); the total number of retweets for the “bad” set was 48 (compared to only 19 for the “good” set). Of course, an uncontrolled variable is the size of the growing Twitter audience for the Zombie Formalist. Following is a plot analogous to the one in this post. I’ve also included the compositions from this set with the most likes and retweets (corresponding to the 5 peaks below).


Ruling out #7

After working through a few variations (see below), I was unable to get #7 to look smooth; the ‘camo’ aesthetic persists, even with much smaller learning rates and more iterations. I’ve decided to remove this from the running for the final selections.

I’ve also included an interesting error here for future reference, shown below. This occurred when I used a learning rate of 2 (where the max should be 1), which caused the neighbourhood function to wrap around in the middle of each neighbourhood. The result is an interesting aesthetic that reminds me of photography through water droplets on glass, where spots of focus (lack of reorganization) punctuate areas of order (reorganization) due to lensing effects.

Initial Twitter Response to “Good” Compositions

I uploaded 110 “good” compositions to Twitter; “Good” was defined by thresholding (> 50) the attention (number of frames where faces are detected) for each composition generated in the last (A-HOG) integrated test. The max number of likes was 6 and the max retweets 2. The mean likes was 0.62 and the mean retweets was 0.17. The following plot shows the likes (red), retweets (green) and their sum (blue) on the y axis for each composition (x axis). The peaks in the sum indicate one very successful composition (6 likes + 2 retweets) and 5 quite successful compositions. These compositions are included in the gallery below.
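A minimal sketch of that labelling rule (struct and function names are mine, not from the actual codebase):

```cpp
#include <string>
#include <vector>

// A composition is "good" if it received more than 50 frames of attention,
// i.e. more than 50 captured frames in which a face was detected.
struct Composition {
    std::string id;
    int attentionFrames; // number of frames where a face was detected
};

std::vector<Composition> selectGood(const std::vector<Composition>& all) {
    std::vector<Composition> good;
    for (const auto& c : all)
        if (c.attentionFrames > 50)
            good.push_back(c);
    return good;
}
```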


#5 Finalization

The image on the top is the best of these final explorations and my final selection; the gallery below shows the rest. I went through a few more iterations than I was expecting to; I’ve learned a lot while working through the painting long-list, so spending more time on these early entries makes sense.

Twitter API Progress

After creating my @AutoArtMachine Twitter account, I’ve been (manually) constructing a brand identity and profile, as well as following and collecting followers. At the same time, I’ve been looking into how to do the Twitter automation for the Zombie Formalist.

My first attempt to apply for a Twitter developer account failed (as I was considering automating following, liking, etc.); this is not encouraged by Twitter, so I’ve shifted my intention such that the Zombie Formalist will only post compositions and retrieve the number of “likes” for those tweets. Based on this revised use case, my developer account was accepted.

This morning I used my new Twitter developer account to generate access keys for the API and successfully ran some sample twitcurl code in C++! I only got as far as logging into Twitter and getting followers’ IDs, but it is working. One problem is that twitcurl does not appear to be maintained, and I was not sure the API was even going to work; so far it does. Another issue is that this version of the library does not support uploading media, but I found this fork that does and will try getting that to work. There is very little out there on interfacing Twitter and C++; if I get stuck, I’ll need to switch to Python and figure out how to run Python code inside C++.
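For reference, a minimal sketch along the lines of twitcurl’s bundled twitterClient example: verify credentials, then post a plain-text status. Key strings are placeholders, and method signatures vary slightly between twitcurl versions:

```cpp
#include <iostream>
#include <string>
#include "twitcurl.h"

int main() {
    twitCurl twitterObj;
    std::string userName("AutoArtMachine");
    twitterObj.setTwitterUsername(userName);

    // OAuth keys generated from the Twitter developer account (placeholders).
    twitterObj.getOAuth().setConsumerKey(std::string("CONSUMER_KEY"));
    twitterObj.getOAuth().setConsumerSecret(std::string("CONSUMER_SECRET"));
    twitterObj.getOAuth().setOAuthTokenKey(std::string("ACCESS_TOKEN"));
    twitterObj.getOAuth().setOAuthTokenSecret(std::string("ACCESS_TOKEN_SECRET"));

    std::string reply;
    // Verify credentials ("log in") and echo Twitter's response.
    if (twitterObj.accountVerifyCredGet()) {
        twitterObj.getLastWebResponse(reply);
        std::cout << "verified: " << reply << std::endl;
    }

    // Post a plain-text status (this twitcurl version has no media upload).
    std::string status("Test post from the Zombie Formalist.");
    if (twitterObj.statusUpdate(status)) {
        twitterObj.getLastWebResponse(reply);
        std::cout << "posted: " << reply << std::endl;
    }
    return 0;
}
```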

Results of Integrated Test Using New Face Detector

So I ran a test over a few days with the new HOG face detector (from dlib) to see how it worked in terms of visual attention and whether attention is a valid proxy for aesthetic value. The results seem quite good, both in terms of the response of the system to attention and in terms of the attention measure itself (albeit in the contrived context of my own home as the test context). The following images show the “good” compositions (top; over 50 frames of attention) and the “bad” compositions (bottom; under 50 frames of attention).
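A minimal sketch of how such an attention count might be gathered, assuming dlib’s HOG detector with an OpenCV capture (the actual integration details are my assumptions):

```cpp
#include <iostream>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/opencv.h>
#include <opencv2/opencv.hpp>

int main() {
    // dlib's built-in HOG-based frontal face detector.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    cv::VideoCapture cam(0);
    int attentionFrames = 0;

    cv::Mat frame;
    while (cam.read(frame)) {
        // Wrap the OpenCV BGR frame for dlib without copying.
        dlib::cv_image<dlib::bgr_pixel> img(frame);
        std::vector<dlib::rectangle> faces = detector(img);
        if (!faces.empty())
            ++attentionFrames; // one more frame of attention
    }
    std::cout << "attention frames: " << attentionFrames << std::endl;
    return 0;
}
```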


Social Media and Brand Identity

Based on my previous post about social media, I’ve settled on Twitter and am working on a few brand-identity materials for this satirical company that makes the “AutoArtMachine” (aka the Zombie Formalist).

I’ve also made a few mock-ups of the Zombie Formalist in public domain living-room images, and written a promo text:

Do you tire of the art hanging on your walls? When was the last time you even looked at the art you display in your home? Is it only there to spark some conversation when guests come over? Imagine if the artwork in your home changed when you got bored of it. An infinite variety of abstract artworks could be presented in your home, without the effort of selecting them! The Zombie Formalist is the first product of its kind; it has an infinite capacity to create new artistic compositions, and it uses AI to learn your individual aesthetic preferences and show you more of what you want to see. The Zombie Formalist pays attention to which artworks you pay attention to and creates new artworks in that same style. The Zombie Formalist is an artist in itself: it creates new and unique works just for you, matched to your preferences. There are no surveys or questions; it just pays attention to what you appreciate and learns what you like. Want to share the artworks made by your Zombie Formalist? The Zombie Formalist can also upload generated artworks to social media, where your friends’ and followers’ likes can influence its aesthetic decisions, ensuring the creation of works your friends will appreciate. The Zombie Formalist is like having your very own artist in your home, creating works on the fly that are always fresh and new.

The Zombie Formalist on Social Media

I’ve been reflecting on and discussing my envisioned use of social media for the Zombie Formalist, and the issue is much more complex than I had expected. The purpose of using social media is that ‘likes’ would be one way the machine could determine the ‘value’ of compositions, using that in the training process to model what is liked on social media. After some discussion with a social-media-savvy person, I came up with three possible strategies for my use of social media, in order of preference:

  1. The Zombie Formalist has its own social media profile and all generated content is uploaded.
  2. I create a satirical identity for a company that makes the “Zombie Formalist” as a tech gadget (not an artwork in itself) that has a social media presence. The profile would appear to exist only to ‘sell’ the product.
  3. I create a social media profile for myself where the Zombie Formalist output is one component of a social media presence for my practice in general.

Only #1 allows for social media to be used “in the loop” to attribute value to compositions. #2 and #3 would be more promotional mechanisms, not literally connected to the Zombie Formalist hardware as I have envisioned. #2 would require a lot of work developing marketing and branding; it’s an interesting approach, but the required investment makes it a separate project requiring much more time; I could always revisit it when the rest of the project is complete, or in a future iteration. #3 would be a very standard use of social media, and while it would provide promotional value for my practice, it does not actually have anything to do with this particular project. As social media determining value is a major part of the concept of the work, #1 is the priority.

I was initially inclined to select Instagram because of its image-centrism and how it’s used by artists for both promotion and sales. Unfortunately, it is not suitable for #1 for a few different reasons. On a technical level, Instagram does not allow posting through the public API, only through the official app and through “partners” who are presumably licensed to upload content independently of the app. On the social level, from what I understand, success on Instagram means highly curated, high-quality content with a strong emphasis on individual brand. Since the Zombie Formalist will generate a lot of mediocre images, with the social media audience defining value, Instagram fits best with option #3, or perhaps #2, but is ruled out for #1.

#2 could work well on Facebook also, but Facebook seems to have a unified API with Instagram and no longer allows creating posts algorithmically (except, presumably, for those who pay for licenses). When it comes to #1, it seems the only technically and socially viable option is Twitter. The wildness of Twitter and the permissive API seem to be much better fits for #1. No wonder there are so many bots on there! The text orientation and the way images are treated on Twitter are not ideal; I find the seemingly arbitrary wide-screen cropping of thumbnails and the compositional emphasis on metadata (hashtags, etc.) particularly unpleasant to deal with… I wonder if there are ways to render Twitter differently to be more… well… Instagram-looking. At least this reflection gives me a direction to work within, and I can work on some code and perhaps experiment with uploading (a subset of?) my labelled data-set to see how that works.

Enclosure Fabrication

I have finally gotten a rough sketch of the design for the Zombie Formalist; see the images below for details. The idea is that a minimal structure would be waterjet-cut and bent from a single sheet of metal to hold the screen and parts; a wood frame would slide over that to occlude the technology and make the whole thing look like a normal contemporary art frame. I’ve approached a few fabricators and will post as that aspect of the project moves along.

DNN Face Detection Confidence — Part 4

I ran the OpenCV DNN-based face detector while I was working, and the results are much better than I previously saw with the jetson-inference example. I presume the difference in performance is due to the use of a different model. The following plot shows my face run (red) on top of the noFace run from the previous post (blue). The mean face confidence was 0.935 (compared to the mean noFace confidence of 0.11), and there is a clear gap between confidence where a face is present and where no face is present, as shown in the plot. It seems this is the method I should use; I’ll try integrating it into my existing code and see how problematic the ability to recognize face profiles is.
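The detector follows the OpenCV DNN face-detection sample (a ResNet-10 SSD Caffe model); a minimal sketch, assuming the sample’s model files are on disk:

```cpp
#include <iostream>
#include <opencv2/dnn.hpp>
#include <opencv2/opencv.hpp>

int main() {
    // Model files from the OpenCV face detector sample.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe(
        "deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel");
    cv::VideoCapture cam(0);
    cv::Mat frame;
    while (cam.read(frame)) {
        // 300x300 input with the sample's mean-subtraction values.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(300, 300),
                                              cv::Scalar(104.0, 177.0, 123.0));
        net.setInput(blob);
        cv::Mat out = net.forward(); // 1x1xNx7: [., ., confidence, x1, y1, x2, y2]
        cv::Mat det(out.size[2], out.size[3], CV_32F, out.ptr<float>());
        for (int i = 0; i < det.rows; ++i) {
            float confidence = det.at<float>(i, 2);
            if (confidence > 0.1f) // threshold used in the tests above
                std::cout << "face candidate, confidence " << confidence << std::endl;
        }
    }
    return 0;
}
```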

DNN Face Detection Confidence — Part 3

Following from the previous post, I tried to load the Caffe weights used in this example in the Jetson-optimized inference example; the model could not be loaded, so I guess the architecture / format was not compatible (they are both Caffe models for object detection). On the plus side, I managed to compile and run the DNN face detection code from the OpenCV examples! The problem was the arguments not being passed properly. (Amazing how many code examples I’m finding that don’t actually work without modification.)

The good news is that the model and OpenCV code work very well, actually very, very well. In my two-hour test with no faces and a confidence threshold set to 0.1, the max confidence for non-faces was only 0.19! Compare this to the model / jetson-inference code, where the same conditions led to non-faces being recognized with confidence as high as 0.96! The following plot shows the results of the test:

I had to clip the first 1000 or so data samples because my partially visible face was present, which caused spikes in confidence as high as 0.83! The implication is that this detector is much more sensitive to partial / profile faces, which may mean that viewers would have to really look away from the Zombie Formalist for it to generate new images. Technically, I don’t want it to detect profiles as faces. The next stage is to do a test with a face present and determine the range of confidence and how much of a problem face-profile detection causes…