Stills vs Videos: Painting Appropriation

Posted: April 1, 2020 at 11:46 am

After doing a little work on the painting appropriation videos I’m realizing that the very soft boundaries I’ve been after for the stills just happen in the videos, “for free”. The gallery below shows the video approach (right) next to the finalized print version (left). Note the lack of reorganization (areas of contrasting colour) in the still versions, e.g. the green and purple in the upper-right quadrant of the top-left image next to the bright blob in the centre.


Painting #1 Appropriation Video

Posted: March 31, 2020 at 2:04 pm


Painting #1 Appropriation Video Work in Progress

Posted: March 25, 2020 at 11:45 am

Now that the final selection of paintings has been made, I’ve been able to start working on the video works. These are videos that show the deconstruction (abstraction) of paintings by the machine learning algorithm: pixels are increasingly reorganized according to their similarity over time. The top gallery shows my finalized print (left) along with a few explorations at HD resolution that approximate it. These are “sketches” of the final frames of the video.

The image below shows the actual final frame of the video. Each frame is the result of an epoch with a different neighbourhood size (which determines the degree of abstraction / reorganization), increasing from smallest (least abstract) to largest (most abstract). The final structure is therefore more spatially similar to the original, because there is no initial disruption due to large neighbourhood sizes at the start of training.

I think I can get around this by training for more iterations, as the larger neighbourhoods will have a greater effect with more iterations. The question is whether I should continue with the same neighbourhood size (168) used to generate the sketches above, or continue the rate of increase from the first set of frames (2168 in 2675 steps). The latter seems most consistent with the rest of the training process, so I should go with that. I just need to change the code to allow “resuming” a sequence by starting with a frame partway through. Luckily, I saved the weights of the network for each frame, so that is possible without losing precision.
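
Something like the following is what I have in mind for the resume logic. This is a minimal sketch, assuming a SOM-style network whose weights are stored one RGB triple per cell; the file layout, names, and training call are placeholders rather than my actual code.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Som {
    int width = 0, height = 0;
    std::vector<float> weights; // width * height * 3 (RGB)
};

// Load the weights saved for a given frame (hypothetical binary layout).
bool loadWeights(const std::string &path, Som &som) {
    FILE *f = std::fopen(path.c_str(), "rb");
    if (!f) return false;
    bool ok = std::fread(&som.width, sizeof(int), 1, f) == 1 &&
              std::fread(&som.height, sizeof(int), 1, f) == 1;
    if (ok) {
        som.weights.resize(size_t(som.width) * som.height * 3);
        ok = std::fread(som.weights.data(), sizeof(float),
                        som.weights.size(), f) == som.weights.size();
    }
    std::fclose(f);
    return ok;
}

void trainOneFrame(Som &som, float radius) { /* existing epoch goes here */ }

int main() {
    Som som;
    if (!loadWeights("weights_frame_2675.bin", som)) return 1; // hypothetical file
    // Continue the neighbourhood schedule at the same rate of increase as
    // the first set of frames (2168 over 2675 steps, per the numbers above).
    const float slope = 2168.0f / 2675.0f;
    float radius = 2168.0f; // radius reached at the resume point
    for (int frame = 0; frame < 500; ++frame) {
        trainOneFrame(som, radius);
        radius += slope;
    }
    return 0;
}
```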

A plus of this video approach is that the images are far smoother than they are as stills, which makes me wonder if ruled-out paintings would actually make strong videos.


Society6 Shop Opens!

Posted: March 25, 2020 at 8:11 am

13 Zombie Formalist compositions are now available on many products at Society6!


Short-List Selection of Appropriated Paintings

Posted: March 17, 2020 at 11:05 am

The following gallery shows the short-list of paintings that have been selected for printing. #19 is an edge case and may not be printed depending on the printing costs. The idea is to inkjet print on stretched canvas where the canvas heights match the height of the Zombie Formalist.

I’ve also included a couple of mock-ups with ZF-matched frames and without. I’m not sure about the depth of the frames / stretchers for these; should they be on 3in stretchers so that they stick out from the wall about the same distance as the Zombie Formalist (which will have about a 4in depth, though that could be squeezed closer to 3in)? I think the lack of contrast with the black frames is not great, so a little white matte seems to be the strongest choice while emphasizing consistency. A little white matte would also be easy to add to the ZF (bottom mock-up).

The next stage is to get the source paintings down to 4K or HD resolution and work towards the videos of the learning process. This will be interesting because there is enough emergence in these systems that even changing the source resolution can change the results significantly; the videos will therefore not match the prints. Also, spending 10 hours per frame is impractical. Instead, I’ll be using the previous frame (with a smaller max neighbourhood size) as the source for the current frame, training and retraining (with a much smaller number of iterations per frame) so that the final frame is the aggregation of many training epochs. This causes even more unpredictability and emergence in the structure of the final frames.
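
The frame-to-frame loop would look roughly like this. It’s a sketch under stated assumptions: trainEpoch() stands in for the actual reorganization epoch, and the file names, frame count, and neighbourhood schedule are illustrative, not the real values.

```cpp
#include <opencv2/opencv.hpp>

// Stand-in for one training epoch over the source image; the real function
// would run the reorganization for `iters` iterations with the given
// maximum neighbourhood size and return the resulting frame.
cv::Mat trainEpoch(const cv::Mat &source, int iters, float maxNeighbourhood) {
    return source.clone(); // placeholder
}

int main() {
    cv::Mat current = cv::imread("painting_source_hd.png"); // hypothetical name
    if (current.empty()) return 1;
    const int numFrames = 600;     // illustrative values
    const int itersPerFrame = 200; // far fewer than a full 10-hour run
    for (int f = 0; f < numFrames; ++f) {
        float maxHood = 1.0f + 4.0f * f; // max neighbourhood grows per frame
        // Key point: train from the *previous frame's* output, so the video
        // aggregates many short epochs instead of one long run per frame.
        current = trainEpoch(current, itersPerFrame, maxHood);
        cv::imwrite(cv::format("frame_%04d.png", f), current);
    }
    return 0;
}
```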


#22 Finalization

Posted: March 12, 2020 at 11:46 am

After quite a few explorations of #22, I was not able to make anything more interesting than the previous runs. I’ve included the best previous result on the left below and the new best result on the right; these look very much equivalent to me, and I don’t see any improvement in smoothness (despite a 4× increase in training iterations).

The following images are the remainder of the explorations. This ends my tweaking of these images; I’ll now rank the strongest results for final printing. One difficulty is that the number of prints is unclear: the budget is a little up in the air since I don’t know how much my Zombie Formalist fabrication will cost (my quoted fabricator pulled out and I’ve yet to find a replacement).


#4 Finalization

Posted: March 3, 2020 at 1:10 pm

I meant to post these earlier, but it turns out I forgot! I ended up with one strong result (at the top) and a few explorations (below). The top image still has a bit of a “black hole” in the centre, but I’ve included it in the long-list for final selection.


#5 Finalization Revisit

Posted: February 29, 2020 at 9:34 pm

While working on finalizing #7, I realized that the code was not using the Gaussian neighbourhood function I had previously been using, so I redid #5. The best result (top left) is quite similar to the previous result (top right), but a little less smooth. I think the top left is a strong result. I’ve also included the other explorations using the appropriate Gaussian function in the gallery below. The training process with the Gaussian function is slower (since the fall-off of the learning effect is quite steep).
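
For reference, the Gaussian neighbourhood in question is the textbook SOM one, where the learning effect on a cell falls off with its grid distance from the best-matching unit; a minimal sketch (my actual implementation may differ in details):

```cpp
#include <cmath>

// Gaussian neighbourhood: effect on a cell at grid distance d from the
// best-matching unit, for the current radius sigma. The steep fall-off is
// why training feels slower than with a flat neighbourhood of equal radius.
float gaussianNeighbourhood(float d, float sigma) {
    return std::exp(-(d * d) / (2.0f * sigma * sigma));
}

// A flat "bubble" neighbourhood for comparison: full effect inside the
// radius, none outside.
float bubbleNeighbourhood(float d, float sigma) {
    return d <= sigma ? 1.0f : 0.0f;
}
```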


Zombie Formalist Images on Meural!

Posted: February 25, 2020 at 1:02 pm

Early in 2019 I was invited by the Lumen Prize to submit some new work to be made available on the Meural digital art frame. I submitted some early sketches for the Zombie Formalist but did not hear back. While Googling the Zombie Formalist (as one does) I found that my submission was accepted and is now available here on Meural!


Twitter Response to “Bad” Compositions.

Posted: February 25, 2020 at 12:48 pm

I uploaded a random sampling of 108 “bad” compositions to Twitter, following the “good” compositions from this post and using the same A-HOG data set. The “bad” set has a marginally lower mean number of likes (0.52), but more than double the mean retweets (0.44). The total number of likes for the “bad” set was 56 (compared to 68 for the “good” set); the total number of retweets for the “bad” set was 48 (compared to only 19 for the “good” set). Of course, an uncontrolled variable is the size of the growing Twitter audience for the Zombie Formalist. Following is a plot analogous to this post. I’ve also included the compositions from this set with the most likes and retweets (corresponding to the 5 peaks below).


Ruling out #7

Posted: February 23, 2020 at 12:43 pm

After working through a few variations (see below), I was unable to get #7 to look smooth; the ‘camo’ aesthetic persists, even with much smaller learning rates and more iterations. I’ve decided to remove this from the running for the final selections.

I’ve also included an interesting error here for future reference, shown below. This occurred when I used a learning rate of 2 (where the max should be 1), which caused the neighbourhood function to wrap around in the middle of each neighbourhood. This produces an interesting aesthetic that reminds me of photography through water droplets on glass, where spots of focus (lack of reorganization) punctuate areas of order (reorganization) due to lensing effects.
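
To illustrate why the out-of-range learning rate “wraps”: the SOM update moves a cell toward the input by effect = learning rate × neighbourhood, and when that product exceeds 1 the cell overshoots past the input, so cells near the neighbourhood centre flip to the far side of the target while cells near the edge still behave normally. The following is a reconstruction of the error, not my actual code:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float lr = 2.0f;    // buggy; should be clamped to <= 1
    const float sigma = 4.0f; // neighbourhood radius
    float cell = 0.0f, input = 1.0f;
    for (float d = 0.0f; d <= 8.0f; d += 2.0f) {
        // Standard SOM update: move the cell toward the input by `effect`.
        float effect = lr * std::exp(-(d * d) / (2.0f * sigma * sigma));
        float updated = cell + effect * (input - cell);
        std::printf("d=%.0f effect=%.2f updated=%.2f%s\n",
                    d, effect, updated,
                    updated > input ? "  <-- overshoots past the input" : "");
    }
    return 0;
}
```

Near the centre (d = 0 to 4) the effect is greater than 1 and the updated value lands on the far side of the input; toward the edge the update behaves normally, which would account for the punctuated “droplet” structure.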


Initial Twitter Response to “Good” Compositions.

Posted: February 11, 2020 at 4:06 pm

I uploaded 110 “good” compositions to Twitter; “good” was defined by thresholding (> 50) the attention (number of frames where faces are detected) for each composition generated in the last (A-HOG) integrated test. The max number of likes was 6 and the max retweets 2. The mean number of likes was 0.62 and the mean number of retweets 0.17. The following plot shows the likes (red), retweets (green), and their sum (blue) on the y axis for each composition (x axis). The peaks in the sum indicate one very successful composition (6 likes + 2 retweets) and 5 quite successful compositions. These compositions are included in the gallery below.
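
For clarity, the “good”/“bad” split is just a threshold on the attention count; a trivial sketch with hypothetical field names:

```cpp
#include <vector>

struct Composition {
    int id = 0;
    int attentionFrames = 0; // frames in which a face was detected
};

// A composition is "good" when its attention count exceeds the threshold.
std::vector<Composition> selectGood(const std::vector<Composition> &all,
                                    int threshold = 50) {
    std::vector<Composition> good;
    for (const auto &c : all)
        if (c.attentionFrames > threshold)
            good.push_back(c);
    return good;
}
```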


#5 Finalization

Posted: February 11, 2020 at 8:18 am

The image on the top is the best of these final explorations and my final selection; the gallery below shows the explorations. I went through a few more iterations than I was expecting to; I’ve learned a lot working through the painting long-list, so spending more time on the early entries makes sense.


Twitter API Progress

Posted: January 31, 2020 at 2:53 pm

After creating my @AutoArtMachine Twitter account I’ve been (manually) constructing a brand identity and profile, as well as following and collecting followers. At the same time, I’ve been looking into how to do the Twitter automation for the Zombie Formalist.

My first attempt to apply for a Twitter developer account failed (as I was considering automating following, liking, etc.); this is not encouraged by Twitter, so I’ve shifted my intention such that the Zombie Formalist will only post compositions and retrieve the number of “likes” for those tweets. Based on this revised use case, my developer account was accepted.

This morning I used my new Twitter developer account to generate access keys for the API and successfully ran some sample twitcurl code in C++! I only got as far as logging into Twitter and getting followers’ IDs, but it is working. One problem is that twitcurl does not appear to be maintained, and I was not sure the API was even going to work; so far it does. Another issue is that this version of the library does not support uploading media, but I found this fork that does and will try getting that to work. There is very little out there on interfacing Twitter and C++. If I get stuck, I’ll need to switch to Python and figure out how to run Python code inside C++.
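
The basic flow of the twitcurl test looks roughly like this. Method names follow the twitcurl sample code as I understand it, so treat the signatures as approximate (the maintained fork may differ), and the keys are obviously placeholders:

```cpp
#include "twitcurl.h"
#include <iostream>
#include <string>

int main() {
    twitCurl twitterObj;

    // OAuth keys generated in the Twitter developer dashboard (placeholders).
    twitterObj.getOAuth().setConsumerKey(std::string("CONSUMER_KEY"));
    twitterObj.getOAuth().setConsumerSecret(std::string("CONSUMER_SECRET"));
    twitterObj.getOAuth().setOAuthTokenKey(std::string("ACCESS_TOKEN"));
    twitterObj.getOAuth().setOAuthTokenSecret(std::string("ACCESS_SECRET"));

    std::string reply;
    // Verify the credentials work (i.e. "log in").
    if (twitterObj.accountVerifyCredGet()) {
        twitterObj.getLastWebResponse(reply);
        std::cout << "login ok: " << reply << std::endl;
    }

    // Fetch follower IDs for the account; signature per the twitcurl
    // samples I'm working from, so this may need adjusting for the fork.
    std::string user("AutoArtMachine");
    if (twitterObj.followersIdsGet(user, false)) {
        twitterObj.getLastWebResponse(reply);
        std::cout << "followers: " << reply << std::endl;
    }
    return 0;
}
```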


Results of Integrated Test Using New Face Detector

Posted: January 24, 2020 at 5:53 pm

So I ran a test over a few days with the new HOG face detector (from dlib) to see how it worked in terms of visual attention and whether attention is a valid proxy for aesthetic value. The results seem quite good, both in terms of the response of the system to attention and in terms of attention as a proxy for value (albeit attention in the contrived context of my own home). The following images show the “good” (top) compositions, with over 50 frames of attention, and the “bad” (bottom) compositions, with under 50 frames of attention.
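
For reference, the dlib HOG detector is wired up more or less like this; a minimal sketch of the standard dlib usage with an OpenCV capture, not the integrated Zombie Formalist code:

```cpp
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/opencv.h>
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // dlib's built-in HOG-based frontal face detector.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    cv::VideoCapture cap(0);
    cv::Mat frame;
    long attentionFrames = 0;
    while (cap.read(frame)) {
        // Wrap the OpenCV frame for dlib without copying.
        dlib::cv_image<dlib::bgr_pixel> img(frame);
        std::vector<dlib::rectangle> faces = detector(img);
        if (!faces.empty())
            ++attentionFrames; // attention = frames in which a face appears
    }
    return 0;
}
```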


zombieformalist.com

Posted: January 23, 2020 at 10:24 am

I made a quick website using the domain I had already registered for the Zombie Formalist. The text is written as a marketing tease that frames the ZF as a commercial product, merging strategies #1 and #2 discussed here.

www.zombieformalist.com


Social Media and Brand Identity

Posted: January 14, 2020 at 4:29 pm

Based on my previous post about social media, I’ve settled on Twitter and am working on brand-identity materials for this satirical company that makes the “AutoArtMachine” (aka the Zombie Formalist).

I’ve also made a few mock-ups of the Zombie Formalist in public domain living-room images, and written a promo text:

Do you tire of the art hanging on your walls? When was the last time you even looked at the art you display in your home? Only to spark some conversation when guests come over? Imagine if the artwork in your home changed when you got bored of it. An infinite variety of abstract artworks could be presented in your home, without the effort of selecting them! The Zombie Formalist is the first product of its kind; it has an infinite capacity to create new artistic compositions, and it uses AI to learn your individual aesthetic preferences and show you more of what you want to see. The Zombie Formalist pays attention to the artworks you pay attention to and creates new artworks in that same style. The Zombie Formalist is an artist in itself: it creates new and unique works just for you, matched to your preferences. There are no surveys or questions; it just pays attention to what you appreciate and learns what you like. Want to share the artworks made by your Zombie Formalist? The Zombie Formalist can also upload generated artworks to social media, and your friends’ and followers’ likes can influence its aesthetic decisions, ensuring the creation of works your friends will appreciate. The Zombie Formalist is like having your very own artist in your home, creating new works on the fly that are always fresh and new.


Short-List of Appropriated Paintings

Posted: December 31, 2019 at 5:33 pm

The following gallery shows the images I’ll continue to refine. I should soon schedule a meeting with my printer to determine how many I can make, hoping for 4–6.


The Zombie Formalist on Social Media

Posted: December 31, 2019 at 4:01 pm

I’ve been reflecting on and discussing my envisioned use of social media for the Zombie Formalist, and the issue is much more complex than I had expected. The purpose of using social media is that ‘likes’ would be one way the machine could determine the ‘value’ of compositions, using that in the training process to model what is liked on social media. After some discussion with a social-media-savvy person, I came up with three possible strategies for my use of social media, in order of preference:

  1. The Zombie Formalist has its own social media profile and all content generated is uploaded.
  2. I create a satirical identity for a company that makes the “Zombie Formalist” as a tech gadget (not an artwork in itself) that has a social media presence. The profile would appear to exist only to ‘sell’ the product.
  3. I create a social media profile for myself where the Zombie Formalist output is one component of a social media presence for my practice in general.

Only #1 allows for social media to be used “in the loop” to attribute value to compositions. #2 and #3 would be more promotional mechanisms, but would not be literally connected to the Zombie Formalist hardware as I have envisioned. #2 would be a lot of work in developing marketing and branding; it’s an interesting approach, but the required investment makes it a separate project requiring much more time. I could always revisit this when the rest of the project is complete, or in a future iteration. #3 would be a very standard use of social media and, while it would provide promotional value for my practice, it does not actually have anything to do with this particular project. As social media determining value is a major part of the concept of the work, #1 is the priority.

I was initially inclined to select Instagram because of its image-centrism and how it’s used by artists for both promotion and sales. Unfortunately it is not suitable for #1 for a few different reasons. On a technical level, Instagram does not allow posting through the public API, only through the official app and through “partners” who are presumably licensed to upload content independently of the app. On the social level, from what I understand, success on Instagram means highly curated, high-quality content with a strong emphasis on individual brand. Since the Zombie Formalist will generate a lot of mediocre images (the social media audience defines value), Instagram fits best with option #3, or perhaps #2, but is ruled out for #1.

#2 could work well on Facebook also, but Facebook seems to have a unified API with Instagram and no longer allows creating posts algorithmically (except, presumably, for those who pay for licenses). When it comes to #1, it seems the only technically and socially viable option is Twitter. The wildness of Twitter and the permissive API seem to be much better fits for #1. No wonder there are so many bots on there! The text orientation and the way images are treated on Twitter are not ideal; I find the seemingly arbitrary wide-screen cropping of thumbnails and the compositional emphasis on metadata (hashtags, etc.) particularly unpleasant to deal with… I wonder if there are ways to render Twitter differently to be more… well… Instagram-looking. At least this reflection gives me a direction to work within, and I can work on some code and perhaps experiment with uploading (a subset of?) my labelled data-set and see how that works.


Enclosure Fabrication

Posted: December 16, 2019 at 5:01 pm

I have finally gotten a rough sketch of the design for the Zombie Formalist; see the images below for details. The idea is that a minimal structure would be waterjet cut and bent from a single sheet of metal that would hold the screen and parts; a wood frame would slide over that to occlude the technology and make the whole thing appear like a normal contemporary art frame. I’ve approached a few fabricators and will post as that aspect of the project moves along.


DNN Face Detection Confidence — Part 4

Posted: December 2, 2019 at 7:23 pm

I ran the OpenCV DNN-based face detector while I was working and the results are much better than I previously saw with the jetson-inference example. I presume the difference in performance is due to the use of a different model. The following plot shows my face run (red) on top of the noFace run from the previous post (blue). The mean face confidence was 0.935 (compared to the mean noFace confidence of 0.11) and there is a clear gap between confidence where a face is present and where no face is present, as shown in the plot. It seems this is the method I should use; I’ll try integrating it into my existing code and see how problematic the detection of face profiles is.
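
For reference, the OpenCV DNN detector follows the pattern in the OpenCV samples: a ResNet-10 SSD Caffe model run on 300×300 inputs, with per-detection confidence read from the output blob. A minimal sketch (model file names as in the samples):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

int main() {
    // ResNet-10 SSD face model from the OpenCV samples.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe(
        "deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel");
    cv::VideoCapture cap(0);
    cv::Mat frame;
    const float threshold = 0.5f; // tune from plots like the one above
    while (cap.read(frame)) {
        // 300x300 input with the model's mean subtraction.
        cv::Mat blob = cv::dnn::blobFromImage(
            frame, 1.0, cv::Size(300, 300),
            cv::Scalar(104.0, 177.0, 123.0), false, false);
        net.setInput(blob);
        cv::Mat out = net.forward();
        // Output is 1x1xNx7: [image_id, label, confidence, x1, y1, x2, y2]
        cv::Mat det(out.size[2], out.size[3], CV_32F, out.ptr<float>());
        for (int i = 0; i < det.rows; ++i) {
            float confidence = det.at<float>(i, 2);
            if (confidence > threshold) {
                // face present this frame
            }
        }
    }
    return 0;
}
```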


DNN Face Detection Confidence — Part 3

Posted: November 27, 2019 at 5:29 pm

Following from the previous post, I tried to load the Caffe weights used in this example in the jetson-optimized inference example; the model could not be loaded, so I guess the architecture / format was not compatible (even though they are both Caffe models for object detection). On the plus side, I managed to compile and run the DNN face detection code from the OpenCV examples! The problem was the arguments not being passed properly. (Amazing how many code examples I’m finding that don’t actually work without modification.)

The good news is that the model and OpenCV code work very well, actually very, very well. In my two-hour test with no faces and a confidence threshold set to 0.1, the max confidence for non-faces was only 0.19! Compare this to the model / jetson-inference code, where the same conditions led to non-faces being recognized with confidence as high as 0.96! The following plot shows the results of the test:

I had to clip the first 1000 or so data samples because my partially visible face was present, and that caused spikes in confidence as high as 0.83! The implication is that this detector is much more sensitive to partial / profile faces, which may mean that viewers would have to really look away from the Zombie Formalist for it to generate new images; technically, I don’t want it to detect profiles as faces. The next stage is to do a test with a face present and determine what the range of confidence is and how much of a problem profile detection causes…


DNN Face Detection Confidence — Part 2

Posted: November 21, 2019 at 5:39 pm

I ran a whole-day (~8 hour) test when no one was home with a low confidence threshold (0.1) for deep face detection. As I had previously seen, non-faces can be attributed very high confidence values. Before sunset (which led to strangely high confidence in noise), the confidence wavers around quite a lot and the max confidence remains 0.96.

The following image shows the extreme wavering of confidence over time where no faces are present (blue), shown with the short face test (red). The horizontal lines show the means of the face and noFace sets. It seems that under certain (lighting) conditions, like the dip below, the DNN reports very low confidence values (0.36) that would be easily differentiated from true positive faces. Since I’m working with example code, I have not been dumping the camera frames corresponding to these values; I may need them to determine under what conditions the DNN does perform well. Tomorrow I’ll run a test while I’m working (with a face present) and see if I can make sure there are no false positives and collect more samples. Over this larger data-set I have determined that the bump of noFace samples around 0.8 confidence does not happen in appropriate (bright) lighting conditions; see the histogram below.

Without more information it’s unclear what confidence threshold would be appropriate or even whether the DNN face detector is indeed performing better than the haar-based detector. This reference showed a significant difference in performance between the DNN and Haar methods, so I’ll see what model they used and hope for better performance using that…


DNN Face Detection Confidence

Posted: November 18, 2019 at 6:17 pm

As I mentioned in the previous post, I was curious whether the DNN method would be any harder to “fool” than the old haar method. The bad news is that a DNN will report quite high confidence when there are no faces, even in a dark room where most of the signal is actually sensor noise. The following plot shows the confidence over time in the face (red) and no-face (blue) cases. The no-face case involved the sun setting and the room getting dark, which can be seen in the increase in variance of the confidence over time (compared to the relatively stable confidence of the face case). The confidence threshold was 0.6 for the face case and 0.1 for the no-face case.


Deep Face Detection

Posted: November 17, 2019 at 6:46 pm

Following from my realization that the haar-based classifier is extremely noisy for face detection, I decided to look into deep-network-based face detection methods. I found example code optimized for the Jetson to do inference using deep models. Some bugs in the code have made it hard to test, but I’ve fixed enough of them to start an early evaluation at least.

At first blush, the DNN method (using the facenet-120 model) is quite robust, but one of the bugs is a reset of the USB camera’s brightness and focus, which makes evaluation difficult. It does appear that there are very, very few false positives. Unfortunately, there are quite a lot of false negatives. A complex background also appears to be a problem for the DNN face detector, as it was for the haar classifier.

I’m now dumping a bunch of confidence values in a context in which I know there is only one face being detected, to get a sense of variance… Then I’ll do a run where I know there will be no faces in the images and see what the variance of confidence is for that case. There is also some DNN-based face detection code in OpenCV that looks to be compatible, which I’m also trying to figure out.


Face Detection Inaccuracy

Posted: November 8, 2019 at 10:09 am

After getting the new rendering code and face detection into an integrated prototype that I can test (and use to generate training data), I’m realizing the old-school haar classifier running on the GPU works very, very poorly. Running the system with suitable lighting (I stopped labelling data once the images got too dark) yielded the detection of 628 faces; of those, 325 were false positives (a precision of roughly 48%). This is not great, and the complex background did not help; see the image below. I did not keep track of the number of frames processed (true negatives), so these numbers appear much worse than they actually are in terms of accuracy; there were likely thousands of true negatives. In a gallery context there would be much more control of the background, but I should try some example code using a trained CNN to detect faces and see how it performs.

False positive in complex background

More Images of Compositions with X and Y Layer Offsets

Posted: November 3, 2019 at 11:14 am

The following image is a selection of some “good” results using the new renderer with 2D offsets.


New Compositions with X and Y Layer Offsets

Posted: October 30, 2019 at 2:08 pm

The following image shows 25 randomly generated compositions where the layers can be offset in both directions. This allows for a lot more variation, and also for circles to include radial stripes that do not terminate in the middle. I’m about to meet with my tech, Bobbi Kozinuk, to talk about my new idea for a case design and any technical implications. I’ll also create a prototype that will collect the time I look at each composition as a new data-set for training.


Long-List of Appropriated Paintings

Posted: October 30, 2019 at 11:37 am

The gallery below shows the strongest of all my explorations and refinements of the painting explorations. I’ll use this set to narrow down to a short-list that will be finalized and produced. I’m not yet sure about the print media or size, but I was thinking of normalizing them to ~19″ high to match the height of the Zombie Formalist. This would mean the tallest in this long-list would be ~8.5″ × 19″ (W × H) and the widest ~43″ × 19″. For media, I was thinking inkjet on canvas would emphasize painting.


AA Solution

Posted: October 25, 2019 at 4:02 pm

I ended up adding the padding only to the right edge, which cleans up the hard outer edges of circles, where the aliasing bothered me most. I also realized that there were dark pixels around the feathered edges. This was due to a blending error where I was setting a framebuffer to transparent black rather than transparent with the background colour. There are still some jaggies, as shown in the images below, but the results are working quite well.
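
In OpenGL terms, the blending fix is just a change to the clear colour of the layer framebuffer; a fragment to illustrate (variable names are mine, not the actual code):

```cpp
// Before (bug): clearing to transparent black leaks darkness into the
// feathered edges when the layer is blended over the background.
// glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

// After (fix): still fully transparent, but carrying the background colour
// so the feathered fringe picks up the right hue.
glBindFramebuffer(GL_FRAMEBUFFER, layerFbo);
glClearColor(bg.r, bg.g, bg.b, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
```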

I also made some quick changes after realizing that radial lines were never offset inwards or outwards from the circle; this is because offsets were only applied in 1D. I’ve added a second offset parameter for 2D offsets, and there is a lot of additional variety. I just realized this also means my previously trained model is no longer useful (due to the additional parameter), but I’ll need to train on some actual attention data anyhow. I’ll post some of those new compositions soon.