Work in Progress – Perceptual Rendering

Posted: January 3, 2014 at 10:32 am

I realized that after so much work I still had not been able to see how clusters behave over time. So I finally took a day to write a first crack at a rudimentary OpenGL renderer for DM3. Up to this point I was just using OpenCV functions to dump percepts and state data to disk, and then reconstructing images from them. In the final work, I’ll need to do this rendering anyhow, and it does seem to be working (i.e. working with the threaded application!). So I ran a 16,000 frame test where a perceptual frame was rendered for each input frame. The video runs at 30fps, but frames were captured at 1fps, so objects move quite quickly.

The increasing jitter is due to the clustering process, where a finite number of clusters must reflect continuously shifting sense data, and to the temporal instability of segmented region edges. The weights for clustering are such that updated clusters are 25% new stimulus and 75% previous stimulus. Looking at this video, it seems they should be even softer; I’ll next try 15% new and 85% existing. In this test the maximum number of clusters is 2000. The perceptual rendering shows the unprocessed stimulus image in the background, with the perceptual clusters drawn on top at 75% opacity. Due to the jitter, this seems a bit too strong (too much emphasis on constructive perception) and could be reduced to 50%.
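To make the weighting concrete, here is a minimal sketch (not the actual DM3 code; `Cluster` and `updateCluster` are placeholder names) of blending new sense data into an existing cluster with a configurable weight:

```cpp
// Illustrative sketch only: blending new sense data into an existing cluster.
// "Cluster" and "updateCluster" are placeholder names, not DM3 identifiers.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Cluster {
    std::vector<float> features;  // e.g. colour/position descriptors of a percept
};

// newWeight = 0.25f gives the 25% new / 75% previous weighting used in this test;
// 0.15f would be the softer 15% / 85% setting to try next.
void updateCluster(Cluster &c, const std::vector<float> &stimulus, float newWeight)
{
    const std::size_t n = std::min(c.features.size(), stimulus.size());
    for (std::size_t i = 0; i < n; ++i)
        c.features[i] = newWeight * stimulus[i] + (1.0f - newWeight) * c.features[i];
}
```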
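The compositing itself is essentially an alpha blend. The real rendering happens in OpenGL, but a rough OpenCV equivalent (blending whole frames rather than individual cluster regions) would look something like this:

```cpp
// Illustrative compositing sketch: the cluster layer blended over the raw
// stimulus frame. opacity = 0.75 matches the current test; 0.5 would be the
// softer setting suggested above.
#include <opencv2/core.hpp>

cv::Mat compositePercepts(const cv::Mat &stimulus, const cv::Mat &clusterLayer, double opacity)
{
    cv::Mat out;
    cv::addWeighted(clusterLayer, opacity, stimulus, 1.0 - opacity, 0.0, out);
    return out;
}
```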