Segmentation with OpenCV GPU

Posted: November 26, 2011 at 3:40 pm

In playing with code for deep learning (deep belief networks, DBNs, in particular) I’ve had to install CUDA to do matrix processing on the GPU, so I went ahead and recompiled OpenCV to use CUDA as well. Below is a segmentation image similar to the ones in previous posts. This one was computed in 2.7 s on the GPU, on a different machine than before. The non-GPU version took 6 s on that machine, and the old machine took 9 s. The new machine is already a few years old and has an older GeForce 8600 GTS that barely supports CUDA; a faster machine, or perhaps just a faster GPU, might bring these numbers down to something closer to real time.