DNN Face Detection Confidence

Posted: November 18, 2019 at 6:17 pm

As I mentioned in the previous post, I was curious whether the DNN method would be any harder to “fool” than the old Haar method. The bad news is that the DNN will report quite high confidence even when there are no faces, and even in a dark room where most of the signal is actually sensor noise. The following plot shows confidence over time for the face (red) and no-face (blue) cases. During the no-face run, the sun set and the room grew dark, which shows up as increasing variance in the confidence over time (compared to the relatively stable confidence of the face case). The confidence threshold was 0.6 for the face case and 0.1 for the no-face case.
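For reference, the confidence values being plotted come out of the detector's output tensor. Here's a minimal sketch of how they can be pulled out, assuming an SSD-style face detector like OpenCV's res10 model, whose `forward()` call returns a `(1, 1, N, 7)` array with the score in column 2 (the tensor layout is the standard one for that model; the array below is synthetic since the real values would come from the network):

```python
import numpy as np

def face_confidences(detections, threshold=0.5):
    """Extract per-detection confidence scores from an SSD-style
    output tensor of shape (1, 1, N, 7), where column 2 holds the
    score, and keep only those above the threshold."""
    scores = detections[0, 0, :, 2]
    return scores[scores > threshold]

# Synthetic stand-in for net.forward() output; in real use this
# would come from something like
# cv2.dnn.readNetFromCaffe(prototxt, caffemodel).forward().
fake = np.zeros((1, 1, 3, 7), dtype=np.float32)
fake[0, 0, :, 2] = [0.96, 0.45, 0.62]

print(face_confidences(fake, threshold=0.6))  # → [0.96 0.62]
```

Logging `face_confidences(...)` per frame with a low threshold is enough to reproduce both the face and no-face traces in the plot above.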

I can’t explain the drop in confidence during the no-face case: no one was home at the time, so no faces were present. Some change in outdoor lighting conditions must have been the cause. What I really wanted to know was whether there was much separation between the face and no-face cases, but there is a lot of overlap: the lowest face confidence was 0.6, while the highest non-face confidence was 0.96!! The following images show the distributions of confidence in both cases; face confidences are quite evenly distributed, while high-confidence non-faces are quite rare. Still, the small hump at ~0.8 confidence is concerning. I’ll have to rerun the no-face test with more stable lighting conditions.
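Once both runs are logged, one quick way to quantify the overlap is to sweep a threshold over the recorded confidences and count misclassifications on each side. A sketch with made-up numbers standing in for the logged values (the actual distributions are in the plots above):

```python
def best_threshold(face_scores, noface_scores):
    """Sweep candidate thresholds and return the (threshold, errors)
    pair that minimizes total errors: faces falling below the
    threshold plus non-faces landing at or above it."""
    candidates = sorted(set(face_scores) | set(noface_scores))
    best = (None, float("inf"))
    for t in candidates:
        errors = (sum(s < t for s in face_scores)
                  + sum(s >= t for s in noface_scores))
        if errors < best[1]:
            best = (t, errors)
    return best

# Made-up scores mimicking the overlap described above: faces spread
# from 0.6 upward, non-faces mostly low with one high-confidence spike.
faces = [0.60, 0.72, 0.81, 0.90, 0.99]
nofaces = [0.05, 0.08, 0.10, 0.80, 0.96]

t, errs = best_threshold(faces, nofaces)
```

With the distributions overlapping like mine do, no threshold gets the error count to zero, which is exactly the separation problem described above.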

All of this is a pain, since it’s holding back my progress on generating a more realistic training set. In the worst case, either the DNN or Haar method is likely to perform quite well against white, non-complex backgrounds, which are likely in a gallery, but that would still be an irritating limitation… Additionally, integrating the DNN method into the current code is going to be a lot of work. I’ll see about running a no-face test in better lighting conditions this week.