The Return of Segmentation

Posted: July 11, 2012 at 5:20 pm

I have not posted in some time because I have been concentrating on my PhD proposal, which needs to be complete by August 13th. To generate some images and a prototype system for the proposal, I've rethought the architecture to some degree (partially discussed in the previous post; details forthcoming) and am now attempting to implement some of those ideas.

The first key is to use background subtraction to generate separate background and foreground images. The idea is to segment the background image and use the resulting masks to extract portions of the live image. Since background subtraction is a well-known method, this should be feasible and robust.

The problem seems to be that the background model produced by BackgroundSubtractorMOG2 is not exactly clean, and would segment extremely poorly:

Compare this to a simpler accumulation over the exact same frames (model = ((model/preroll)*(preroll-1)) + (inputImage/preroll), where preroll = 200):

It seems pretty clear that, although darker, the simple accumulation model will segment better than the MOG2 model. That leaves two open issues: (1) since I have not yet tried foreground extraction against this model, I don't know whether it will perform as well as the MOG2 model for that purpose; and (2) it's unclear to me why the image is so dark.
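One possible explanation for the darkness, assuming the accumulation runs in 8-bit or integer arithmetic (an assumption about the implementation, not something I've verified): every division in the update truncates toward zero, and inputImage/preroll is already zero for any pixel value below 200, so the model is biased dark or collapses entirely. The same recurrence in integer versus floating-point arithmetic shows the effect:

```python
import numpy as np

preroll = 200
frame = np.full((4, 4), 120, np.int32)  # a constant mid-gray input frame

# Integer version of the recurrence: every division truncates toward zero.
model_int = frame.copy()
for _ in range(preroll):
    model_int = (model_int // preroll) * (preroll - 1) + frame // preroll

# Float version of the identical recurrence: 120 is a fixed point.
model_float = frame.astype(np.float64)
for _ in range(preroll):
    model_float = (model_float / preroll) * (preroll - 1) + frame / preroll
```

With truncating division the model drops to zero on the very first step (120 // 200 is 0), while the float version holds steady at 120, so even partial truncation somewhere in the pipeline would pull the accumulated image darker than the true scene.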