Results of Integrated Test Using New Face Detector

I ran a test over a few days with the new HOG face detector (from dlib) to see how well it works as a measure of visual attention, and whether attention is a valid proxy for aesthetic value. The results seem quite good, both in terms of the system's response to attention and in terms of how well attention serves as that proxy (albeit attention in the contrived context of my own home as the test site). The following images show the “good” compositions (top), which received over 50 frames of attention, and the “bad” compositions (bottom), which received under 50 frames of attention.
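The attention measure described above can be sketched as counting the frames in which the face detector finds at least one face, then applying the 50-frame threshold. This is a minimal illustration, not the project's actual code: the function names and the injectable `detect` callable are my own (in the real system, `detect` would be dlib's `get_frontal_face_detector()`).

```python
def count_attention_frames(frames, detect):
    """Count the frames in which the detector finds at least one face.

    `detect` is any callable returning a list of face detections for a
    frame; in practice this would be dlib's HOG detector:
        import dlib
        detect = dlib.get_frontal_face_detector()
    """
    return sum(1 for frame in frames if len(detect(frame)) > 0)


def classify_composition(frames, detect, threshold=50):
    """Label a composition "good" or "bad" by the 50-frame attention threshold."""
    if count_attention_frames(frames, detect) > threshold:
        return "good"
    return "bad"
```

Passing the detector in as a callable keeps the thresholding logic testable without a camera or the dlib dependency.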

In terms of the role of Twitter in the integrated system, the idea is that in-person interaction will be privileged: only compositions deemed “good” in person will be uploaded to Twitter for social refinement. It is unclear at this time how Twitter likes will be incorporated into the machine learning approach.
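The upload gate described here can be sketched as a simple threshold check in front of whatever upload mechanism the system uses. This is a hypothetical sketch under my own naming; `upload` stands in for the real Twitter posting call, which the post does not specify.

```python
def maybe_upload(composition, attention_frames, upload, threshold=50):
    """Upload a composition only if it was judged "good" in person,
    i.e. it received more than `threshold` frames of attention.

    `upload` is a placeholder callable for the actual Twitter post;
    returns True if the composition was uploaded.
    """
    if attention_frames > threshold:
        upload(composition)
        return True
    return False
```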

Based on the pattern of interaction in this test, about 30 images would be posted to Twitter per day. Of course this depends on the in-person audience, as no images are uploaded if no one is attending to the work. In an exhibition situation more images could be uploaded, but that also depends on how much attention each composition receives; only a full test with a large public audience will allow this to be determined.

In my test, 554 face images were captured and only 10 were false positives, so the HOG detector's accuracy is very good under real, variable lighting conditions.
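For reference, those figures work out to a precision of roughly 98% (a minimal calculation; the variable names are mine):

```python
captured = 554        # face detections recorded during the test
false_positives = 10  # detections that were not actually faces

# Precision: fraction of detections that were true faces.
precision = (captured - false_positives) / captured
print(f"precision: {precision:.1%}")  # → 98.2%
```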