Initial Twitter Response to “Good” Compositions

Posted: February 11, 2020 at 4:06 pm

I uploaded 110 “good” compositions to Twitter; “good” was defined by thresholding attention (the number of frames in which a face was detected) at more than 50 for each composition generated in the last (A-HOG) integrated test. The maximum number of likes was 6 and the maximum number of retweets was 2; the mean was 0.62 likes and 0.17 retweets. The following plot shows likes (red), retweets (green), and their sum (blue) on the y axis for each composition (x axis). The peaks in the sum indicate one very successful composition (6 likes + 2 retweets) and five quite successful compositions. These compositions are included in the gallery below.
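To be concrete about how I'm reading these engagement numbers, here is a minimal Python sketch of the thresholding and plotting described above; the log file name and column layout are hypothetical stand-ins for my actual test output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical log: one row per composition, columns [attention, likes, retweets].
data = np.loadtxt("twitter_test.csv", delimiter=",")
attention, likes, retweets = data[:, 0], data[:, 1], data[:, 2]

# "Good" compositions are those with more than 50 frames of attention.
good = attention > 50
print(f"{good.sum()} good compositions; mean likes {likes[good].mean():.2f}, "
      f"mean retweets {retweets[good].mean():.2f}")

# Plot likes, retweets, and their sum per uploaded composition.
x = np.arange(good.sum())
plt.plot(x, likes[good], "r", label="likes")
plt.plot(x, retweets[good], "g", label="retweets")
plt.plot(x, likes[good] + retweets[good], "b", label="sum")
plt.xlabel("composition")
plt.legend()
plt.show()
```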


#5 Finalization

Posted: February 11, 2020 at 8:18 am

The image on top is the best of these final explorations. I went through a few more iterations than I was expecting to; I learned a lot over the course of working through the painting long-list, so spending more time on the early entries makes sense. The image on top is the final selection, and the gallery below shows the explorations.


Twitter API Progress

Posted: January 31, 2020 at 2:53 pm

After creating my @AutoArtMachine Twitter account, I’ve been (manually) constructing a brand identity and profile, as well as following accounts and collecting followers. At the same time, I’ve been looking into how to do the Twitter automation for the Zombie Formalist.

My first attempt to apply for a Twitter developer account failed (as I was considering automating following, liking, etc.); this is not encouraged by Twitter, so I’ve shifted my intention such that the Zombie Formalist will only post compositions and retrieve the number of “likes” for those tweets. Based on this revised use case, my developer account was accepted.

This morning I used my new Twitter developer account to generate access keys for the API and successfully ran some sample twitcurl code in C++! I only got as far as logging into Twitter and getting followers’ IDs, but it is working. One concern is that twitcurl does not appear to be maintained, and I was not sure the API was even going to work; so far it does. One issue is that this version of the library does not support uploading media, but I found this fork that does and will try getting that to work. There is very little out there on interfacing Twitter and C++. If I get stuck, I’ll need to switch to Python and figure out how to run Python code inside C++.
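In case twitcurl falls through, here is a rough sketch of the post-and-poll flow using the tweepy Python library (the fallback route mentioned above). This is not the code running on the Zombie Formalist; the keys, file name, and status text are placeholders.

```python
import tweepy

# Placeholder credentials from the Twitter developer dashboard.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Upload a rendered composition and post it as a tweet.
media = api.media_upload("composition.png")  # hypothetical file name
status = api.update_status(status="New composition.", media_ids=[media.media_id])

# Later: retrieve the engagement for that tweet.
fetched = api.get_status(status.id)
print(fetched.favorite_count, fetched.retweet_count)
```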


Results of Integrated Test Using New Face Detector

Posted: January 24, 2020 at 5:53 pm

So I ran a test over a few days with the new HOG face detector (from dlib) to see how it worked in terms of visual attention and whether attention is a valid proxy for aesthetic value. The results seem quite good, both in terms of the system’s response to attention and in terms of attention serving as that proxy (albeit in the contrived context of my own home). The following images show the “good” compositions (top), with over 50 frames of attention, and the “bad” compositions (bottom), with under 50 frames of attention.
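For reference, a minimal sketch of counting attention with dlib’s HOG detector; the capture loop, frame count, and threshold framing here are simplified stand-ins for my actual integrated test.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG-based frontal face detector
cap = cv2.VideoCapture(0)

attention = 0
for _ in range(300):  # hypothetical display duration, in frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(detector(gray)) > 0:  # count frames in which a face is detected
        attention += 1

label = "good" if attention > 50 else "bad"
print(attention, label)
```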


zombieformalist.com

Posted: January 23, 2020 at 10:24 am

I made a quick website using the domain I had already registered for the Zombie Formalist. The text is written as a marketing tease that frames the ZF as a commercial product, merging strategies #1 and #2 discussed here.

www.zombieformalist.com


Social Media and Brand Identity

Posted: January 14, 2020 at 4:29 pm

Based on my previous post about social media, I’ve settled on Twitter and am working on brand-identity materials for this satirical company that makes the “AutoArtMachine” (aka the Zombie Formalist).

I’ve also made a few mock-ups of the Zombie Formalist in public domain living-room images, and written a promo text:

Do you tire of the art hanging on your walls? When was the last time you even looked at the art you display in your home? Does it exist only to spark some conversation when guests come over? Imagine if the artwork in your home changed when you got bored of it. An infinite variety of abstract artworks could be presented in your home, without the effort of selecting them! The Zombie Formalist is the first product of its kind; it has an infinite capacity to create new artistic compositions, and it uses AI to learn your individual aesthetic preferences and show you more of what you want to see. The Zombie Formalist pays attention to what you pay attention to and creates new artworks in that same style. The Zombie Formalist is an artist in itself: it creates new and unique works just for you, matching your preferences. There are no surveys or questions; it just pays attention to what you appreciate and learns what you like. Want to share the artworks made by your Zombie Formalist? The Zombie Formalist can also upload generated artworks to social media, and your friends’ and followers’ likes can influence its aesthetic decisions, ensuring the creation of works your friends will appreciate. The Zombie Formalist is like having your very own artist in your home, creating works on the fly that are always fresh and new.


Short-List of Appropriated Paintings

Posted: December 31, 2019 at 5:33 pm

The following gallery shows the images I’ll continue to refine. I should soon schedule a meeting with my printer to determine how many I can make, hoping for 4–6.


The Zombie Formalist on Social Media

Posted: December 31, 2019 at 4:01 pm

I’ve been reflecting on and discussing my envisioned use of social media for the Zombie Formalist, and the issue is much more complex than I had expected. The purpose of using social media is that ‘likes’ would be one way the machine could determine the ‘value’ of compositions and use that in the training process to model what is liked on social media. After some discussion with a social-media-savvy person, I came up with three possible strategies for my use of social media, in order of preference:

  1. The Zombie Formalist has its own social media profile and all generated content is uploaded.
  2. I create a satirical identity for a company that makes the “Zombie Formalist” as a tech gadget (not an artwork in itself) that has a social media presence. The profile would appear to exist only to ‘sell’ the product.
  3. I create a social media profile for myself where the Zombie Formalist output is one component of a social media presence for my practice in general.

Only #1 allows social media to be used “in the loop” to attribute value to compositions. #2 and #3 would be more promotional mechanisms, not literally connected to the Zombie Formalist hardware as I have envisioned. #2 would involve a lot of work in developing marketing and branding; this is an interesting approach, but the required investment makes it a separate project that needs much more time. I could always revisit it when the rest of the project is complete, or in a future iteration. #3 would be a very standard use of social media, and while it would provide promotional value for my practice, it does not actually have anything to do with this particular project. As social media determining value is a major part of the concept of the work, #1 is the priority.

I was initially inclined to select Instagram because of its image-centrism and how it’s used by artists for both promotion and sales. Unfortunately, it is not suitable for #1 for a few different reasons. On a technical level, Instagram does not allow posting through the public API, only through the official app and through “partners” who are presumably licensed to upload content independently of the app. On the social level, from what I understand, success on Instagram means highly curated, high-quality content with a strong emphasis on individual brand. Since the Zombie Formalist will generate a lot of mediocre images, where the social media audience defines value, Instagram fits best with option #3, or perhaps #2, but is ruled out for #1.

#2 could also work well on Facebook, but Facebook seems to have a unified API with Instagram and no longer allows creating posts algorithmically (except, presumably, for those who pay for licenses). When it comes to #1, the only technically and socially viable option seems to be Twitter. The wildness of Twitter and the permissive API are much better fits for #1. No wonder there are so many bots on there! The text orientation and the way images are treated on Twitter are not ideal; I find the seemingly arbitrary wide-screen cropping of thumbnails and the compositional emphasis on metadata (hashtags, etc.) particularly unpleasant to deal with… I wonder if there are ways to render Twitter differently to be more… well… Instagram-looking. At least this reflection gives me a direction to work within, and I can work on some code and perhaps experiment with uploading (a subset?) of my labelled data-set and see how that works.


Enclosure Fabrication

Posted: December 16, 2019 at 5:01 pm

I have finally gotten a rough sketch of the design for the Zombie Formalist; see the images below for details. The idea is that a minimal structure would be waterjet cut and bent from a single sheet of metal that would hold the screen and parts; a wood frame would slide over that to occlude the technology and make the whole thing appear like a normal contemporary art frame. I’ve approached a few fabricators and will post as that aspect of the project moves along.


DNN Face Detection Confidence — Part 4

Posted: December 2, 2019 at 7:23 pm

I ran the OpenCV DNN-based face detector while I was working, and the results are much better than I previously saw with the jetson-inference example. I presume the difference in performance is due to the use of a different model. The following plot shows my face run (red) on top of the noFace run from the previous post (blue). The mean face confidence was 0.935 (compared to the mean noFace confidence of 0.11), and there is a clear gap between confidence where a face is present and where no faces are present, as shown in the plot. It seems this is the method I should use; I’ll try integrating it into my existing code and see how problematic its ability to recognize face profiles is.
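For reference, a sketch of the detection step as I understand the OpenCV sample, using the ResNet-10 SSD Caffe model that ships with OpenCV’s face-detector example; the frame source here is an assumption.

```python
import cv2

# Model files from OpenCV's samples/dnn/face_detector directory.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

frame = cv2.imread("frame.png")  # hypothetical camera frame
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # shape [1, 1, N, 7]; column 2 is confidence

# Highest face confidence in the frame; compare against a threshold
# chosen from the gap between the face and noFace runs above.
confidence = float(detections[0, 0, :, 2].max())
print(confidence)
```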


DNN Face Detection Confidence — Part 3

Posted: November 27, 2019 at 5:29 pm

Following from the previous post, I tried to load the caffe weights used in this example in the Jetson-optimized inference example; the model could not be loaded, so I guess the architecture / format was not compatible (they are both caffe models for object detection). On the plus side, I managed to compile and run the DNN face detection code from the OpenCV examples! The problem was the arguments not being passed properly. (Amazing how many code examples I’m finding that don’t actually work without modification.)

The good news is that the model and OpenCV code work very well, actually very, very well. In my two-hour test with no faces, and a confidence threshold set to 0.1, the max confidence for non-faces was only 0.19! Compare this to the model / jetson-inference code, where the same conditions led to non-faces being recognized with confidence as high as 0.96! The following plot shows the results of the test:

I had to clip the first 1000 or so data samples because my partially visible face was present, and that caused spikes in confidence as high as 0.83! The implication is that this detector is much more sensitive to partial / profile faces, which may mean that viewers would have to really look away from the Zombie Formalist for it to generate new images. Technically, I don’t want it to detect profiles as faces. The next stage is to do a test with a face present and determine what the range of confidence is and how much of a problem face-profile detection causes…


DNN Face Detection Confidence — Part 2

Posted: November 21, 2019 at 5:39 pm

I ran a whole-day (~8 hour) test when no one was home, with a low confidence threshold (0.1) for deep face detection. As I had previously seen, non-faces can be attributed very high confidence values. Before sunset (which strangely led to high confidence on noise), the confidence wavers around quite a lot, and the max confidence remains 0.96.

The following image shows the extreme wavering of confidence over time where no faces are present (blue), shown with the short face test (red). The horizontal lines show the means of the face and noFace sets. It seems that under certain (lighting) conditions, like the dip below, the DNN reports very low confidence values (0.36) that would be easily differentiated from true positive faces. Since I’m working with example code, I have not been dumping the camera frames corresponding with these values; I may need them to determine under what conditions the DNN performs well. Tomorrow I’ll run a test while I’m working (with a face present) and see if I can make sure there are no false positives and collect more samples. Over this larger data-set, I have determined that the bump of noFace samples around 0.8 confidence does not happen in appropriate (bright) lighting conditions; see the histogram below.

Without more information, it’s unclear what confidence threshold would be appropriate, or even whether the DNN face detector is indeed performing better than the haar-based detector. This reference showed a significant difference in performance between the DNN and Haar methods, so I’ll see what model they used and hope for better performance using that…
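The means and histogram above come from analysis along these lines; a minimal sketch assuming the confidences were logged to plain-text files (the file names are hypothetical).

```python
import numpy as np
import matplotlib.pyplot as plt

face = np.loadtxt("face_confidence.txt")      # hypothetical log files
noface = np.loadtxt("noface_confidence.txt")

print(f"face mean {face.mean():.2f}, noFace mean {noface.mean():.2f}, "
      f"noFace max {noface.max():.2f}")

# Overlapping histograms show whether the two distributions separate
# cleanly enough to choose a single confidence threshold.
plt.hist(noface, bins=50, alpha=0.5, label="no face")
plt.hist(face, bins=50, alpha=0.5, label="face")
plt.xlabel("confidence")
plt.legend()
plt.show()
```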


DNN Face Detection Confidence

Posted: November 18, 2019 at 6:17 pm

As I mentioned in the previous post, I was curious whether the DNN method would be any harder to “fool” than the old haar method. The bad news is that a DNN will report quite high confidence when there are no faces, even in a dark room where most of the signal is actually sensor noise. The following plot shows the confidence over time in the face (red) and no-face (blue) cases. The no-face case involved the sun setting and the room getting dark, which can be seen in the increase in variance of the confidence over time (compared to the relatively stable confidence of the face case). The confidence threshold was 0.6 for the face case and 0.1 for the no-face case.


Deep Face Detection

Posted: November 17, 2019 at 6:46 pm

Following from my realization that the haar-based classifier is extremely noisy for face detection, I decided to look into deep-network-based face detection methods. I found example code optimized for the Jetson to do inference using deep models. Some bugs in the code have made it hard to test, but I’ve fixed enough of them to start an early evaluation at least.

At first blush, the DNN method (using the facenet-120 model) is quite robust, but one of the bugs resets the USB camera’s brightness and focus, which makes evaluation difficult. It does appear that there are very, very few false positives. Unfortunately, there are also quite a lot of false negatives. A complex background seems to be a problem for the DNN face detector, as it was for the haar classifier.

I’m now dumping a bunch of confidence values in a context in which I know there is only one face being detected, to get a sense of variance… Then I’ll do a run where I know there will be no faces in the images and see what the variance of confidence is in that case. There is also some DNN-based face detection code in OpenCV that looks to be compatible, which I’m also trying to figure out.
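The confidence dump itself is along these lines; a rough sketch using the jetson-inference Python bindings as I understand them (the camera device, sample count, and API details are assumptions, and my actual test instruments the C++ example).

```python
import jetson.inference
import jetson.utils

# facenet-120 face detection model with a low threshold, so weak
# detections get logged too (threshold and device are assumptions).
net = jetson.inference.detectNet("facenet-120", threshold=0.1)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")

with open("confidence.txt", "w") as log:
    for _ in range(1000):  # hypothetical number of samples
        img, width, height = camera.CaptureRGBA()
        for detection in net.Detect(img, width, height):
            log.write(f"{detection.Confidence}\n")
```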


Face Detection Inaccuracy

Posted: November 8, 2019 at 10:09 am

After getting the new rendering code and face detection into an integrated prototype that I can test (and use to generate training data), I’m realizing the old-school haar classifier running on the GPU works very, very poorly. Running the system with suitable lighting (I stopped labelling data once the images got too dark) yielded the detection of 628 faces; of those, 325 were false positives. This is not great, and the complex background did not help; see the image below. I did not keep track of the number of frames processed (true negatives), so these numbers appear much worse than they actually are in terms of accuracy; there were likely thousands of true negatives. In a gallery context there would be much more control of the background, but I should try some example code using a trained CNN to detect faces and see how it performs.

False positive in complex background
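For comparison, here is a sketch of the equivalent haar detection step on the CPU (my prototype runs the GPU variant, so this is only illustrative; the cascade file is the stock one bundled with OpenCV).

```python
import cv2

# Stock frontal-face haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.png")  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# minNeighbors trades false positives against false negatives;
# raising it suppresses some spurious detections like the one above.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(len(faces), "faces detected")
```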

More Images of Compositions with X and Y Layer Offsets

Posted: November 3, 2019 at 11:14 am

The following image is a selection of some “good” results using the new renderer with 2D offsets.


New Compositions with X and Y Layer Offsets

Posted: October 30, 2019 at 2:08 pm

The following image shows 25 randomly generated compositions where the layers can be offset in both directions. This allows for a lot more variation, and also for circles to include radial stripes that do not terminate in the middle. I’m about to meet with my tech, Bobbi Kozinuk, to discuss my new idea for a case design and any technical implications. I’ll also create a prototype that will collect the time I look at each composition as a new data-set for training.


Long-List of Appropriated Paintings

Posted: October 30, 2019 at 11:37 am

The gallery below shows the strongest of all my explorations and refinements of the paintings. I’ll use this set to narrow down to a short-list that will be finalized and produced. I’m not yet sure about the print media or size, but I was thinking of normalizing them to ~19″ high to match the height of the Zombie Formalist. This would mean the tallest in this long-list would be ~8.5″ × 19″ (W × H) and the widest ~43″ × 19″. For media, I was thinking inkjet on canvas would emphasize painting.


AA Solution

Posted: October 25, 2019 at 4:02 pm

I ended up adding the padding only to the right edge, which cleans up the hard outer edges of circles, where they bothered me the most. I also realized that there were dark pixels around the feathered edges. This was due to a blending error: I was clearing a framebuffer to transparent black rather than to a transparent version of the background colour. There are still some jaggies, as shown in the images below, but the edges are working quite well.

I also made some quick changes after realizing that radial lines are never offset inwards or outwards from the circle; this is because offsets were only applied in 1D. I’ve added a second offset parameter for 2D offsets, and there is a lot of additional variety. I just realized this also means my previously trained model is no longer useful (due to the additional parameter), but I’ll need to train on actual attention data anyhow. I’ll post some of those new compositions soon.


AA Edges (Again)…

Posted: October 25, 2019 at 11:24 am

After more testing, I realized the padding approach previously posted has some unintended consequences. Since all edges had padding, the circles are no longer continuous: the padding introduces a seam where 0° meets 360°, as shown in the following image. I also noticed that in some cases the background colour can be totally obscured by the stripes, which makes the padding look like a thin frame in a very different colour than the rest of the composition. In the end, while these changes make the edges look less digital, they introduce more problems than they solve.


AA Edges in Zombie Formalist Renderer

Posted: October 24, 2019 at 12:14 pm

In my test data for machine learning, I was not very happy with the results because of strong jaggies, especially on outer edges where the edge of the texture cuts off the sine-wave gradient. I added some padding to each single-row layer on the left and right edges and used a 1D shader blur to soften those cut-off edges. This works quite well but, as shown below, only on the left and right edges; the top and bottom stay jaggy. (Note: due to the orientation of layers, sometimes these ‘outer’ jaggies are radial and sometimes circular.)
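Conceptually, the fix amounts to padding each single-row layer with the background colour and blurring along one axis. The following is a CPU sketch of that idea in Python (the real version is a 1D blur in a fragment shader, and the row size, pad width, sigma, and background colour here are arbitrary).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical single-row layer: 1024 RGB samples of the sine-wave
# gradient, which ends abruptly at the texture edge.
row = np.random.rand(1024, 3)
bg = np.array([0.5, 0.5, 0.5])  # background colour (assumed)

# Pad both ends with the background colour, then blur along x so the
# hard cut-off feathers into the background instead of ending abruptly.
pad = 8
padded = np.concatenate([np.tile(bg, (pad, 1)), row, np.tile(bg, (pad, 1))])
feathered = gaussian_filter1d(padded, sigma=3.0, axis=0)
```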


#9 Exploration

Posted: October 11, 2019 at 9:50 am

This is the last painting on the long-list to be explored! I’ll re-post a gallery of the final images, the set from which I’ll select the final works. The top image is my selection, and I’ve included two explorations below it.


#4 Exploration and Refinement

Posted: October 9, 2019 at 11:29 am

The composition of this painting has a large black hole in the middle. The abstraction process seems to emphasize this, and I’m not totally sure about the results. The best image (top) does seem a little too abstract, but the emphasis on that dark area is reduced. I think I’ll try something in between sigma 500 and 600 if this image makes the final cut. Explorations below.


#12 Exploration

Posted: October 7, 2019 at 11:14 am

I’ve ruled this painting out due to the lack of contrast.


#13 Exploration

Posted: October 5, 2019 at 5:16 pm

I can’t say I find anything really interesting about this one, so I’m ruling it out. Following are my explorations.


#2 Exploration and Refinement

Posted: October 4, 2019 at 1:05 pm

I’m quite happy with these results; the top image significantly diverges from the figure shape which is still dominant in the two explorations below.


#19 Exploration

Posted: October 3, 2019 at 11:08 am

I think these are a little too colourful, but the version on the left is sufficient for comparison in the set. I’m getting close to finishing the medium-resolution images, and I may have to scale down a few high-resolution images (paintings 13, 12, 4 and 9), which have been crashing the machine due to memory use.


#6 Exploration

Posted: October 1, 2019 at 4:56 pm

I’m quite fond of how this one turned out, but on closer inspection I realized the image I’m working from is a scan of a half-tone reproduction (see detail below). If this image makes the selection, I’ll have to find a photographic source. The best image is the largest above with two explorations beneath it.


#25 Exploration

Posted: October 1, 2019 at 4:44 pm


Final Experiment Using Colour Histogram Features

Posted: September 27, 2019 at 11:29 am

My talos search using a 24-bin colour histogram finished. The best model achieved accuracies of 76.6% (training), 74.6% (validation) and 74.2% (test). Compare this to accuracies of 93.3% (training), 71.2% (validation) and 72.0% (test) for the previous best model using the initial features. On the test set, this is an improvement of only ~2%. The confusion matrix is quite a lot more skewed, with 224 false positives and only 78 false negatives, compared to 191 false positives and 136 false negatives for the previous best model using the initial features. As the histogram features would need to be calculated after rendering, I think it’s best to stick with the initial features, where the output of the generator can be classified before rendering, which will be much more efficient.
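For reference, the kind of feature extraction involved; a sketch assuming the 24 bins are 8 bins per BGR channel (the exact binning in my talos run may differ).

```python
import cv2
import numpy as np

def colour_histogram_features(path, bins_per_channel=8):
    """24-D feature vector: an 8-bin histogram per BGR channel, normalized."""
    img = cv2.imread(path)
    hists = [cv2.calcHist([img], [c], None, [bins_per_channel], [0, 256]).flatten()
             for c in range(3)]
    features = np.concatenate(hists)
    return features / features.sum()

features = colour_histogram_features("composition.png")  # hypothetical file
print(features.shape)  # (24,)
```

Since these features depend on the rendered pixels, they can only be computed after rendering, which is exactly the efficiency problem noted above.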

The following images show the new 100 compositions classified by the best model using these histogram features.

“Good” Compositions
“Bad” Compositions