Noncropped Percepts

Posted: February 20, 2014 at 10:04 pm

I changed the percepUnit class so that percepts are stored at a fixed resolution (the full resolution of the input frames). This way OpenCV does not try to reallocate any memory when merging percepts, and indeed my leak is gone. That is the good news. The bad news is that because of the size of the input frames (1920×1080), and because all percepts segmented from a single frame hold their own copy of the same data, the memory usage is extreme. Previously I could probably have held 3000+ percepts in memory; now I can fit only 300. This makes sense, since there are about 100 percepts per input frame. Following are the percepts after a 25,000 frame test generating 200 percepts:
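The scale of the problem can be sketched with some back-of-envelope arithmetic. This is my own illustration, not the program's actual storage layout; I assume each percept holds a full-frame 8-bit BGR image plus a single-channel mask at the same resolution:

```python
# Rough memory estimate for fixed-resolution percepts.
# Assumption: each percept stores a full-frame BGR image (8-bit, 3 channels)
# plus a single-channel mask at the same resolution.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PERCEPT = WIDTH * HEIGHT * 3 + WIDTH * HEIGHT  # image + mask

def total_bytes(n_percepts: int) -> int:
    """Total memory when each of n percepts holds its own full-frame copy."""
    return n_percepts * BYTES_PER_PERCEPT

# At ~100 percepts per frame, even 300 percepts already costs gigabytes:
print(round(total_bytes(300) / 2**30, 2))  # ≈ 2.32 GiB
```

Under these assumptions a single percept costs about 8 MB, which is why the capacity drops by roughly the factor of 10 observed (300 vs. 3000+ percepts).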



Sparse Dreams Revisited

Posted: February 19, 2014 at 11:14 am

After the discussion in the previous post I took a look at the state data dumped by the program. While the waking and non-waking (dreaming and mind-wandering) states are clearly differentiable according to the quality of the state data (see this state plot), they are not so easy to distinguish in terms of the number of activated percepts per frame. It turns out that the distribution of the number of activated percepts per frame is very close in the waking and non-waking cases. This indicates to me that the predictor is doing a good job of learning, but that something missing is manifest in the quality of dreaming and mind-wandering, which seems to be much more stable over time than waking. Following are histograms of the number of percepts active in each frame for 84,990 and 256,574 frame tests. In these tests exogenous activation was disabled, so we can look directly at the output of the predictor.
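One way such a comparison can be made concrete is a normalized histogram overlap, where a value near 1 indicates near-identical distributions. The measure and the per-frame counts below are my own illustration, not the program's actual analysis:

```python
from collections import Counter

def histogram(counts_per_frame):
    """Map 'number of activated percepts' -> number of frames with that count."""
    return Counter(counts_per_frame)

def overlap(h1, h2):
    """Fraction of probability mass shared by two normalized histograms (0..1)."""
    n1, n2 = sum(h1.values()), sum(h2.values())
    bins = set(h1) | set(h2)
    return sum(min(h1[b] / n1, h2[b] / n2) for b in bins)

# Hypothetical per-frame activation counts for a waking and a non-waking run:
waking = histogram([12, 14, 13, 12, 15, 13, 13, 12])
dreaming = histogram([13, 12, 14, 13, 12, 14, 13, 13])
print(round(overlap(waking, dreaming), 2))  # 0.75
```

An overlap close to 1 between the real waking and non-waking histograms would be the quantitative version of the observation above: similar counts of active percepts, despite the difference in quality.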


Ephemeral Percepts and Sparse Dreams

Posted: February 18, 2014 at 10:26 am

I’m currently running a long test on the longest contiguous part of the data-set, but in previous tests (with much shorter training periods) the dreams have been quite static. Following is a sample frame from one of these runs. In this case the dream is the sum of perceptual activation and predictor feedback:
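A minimal sketch of that combination, assuming a per-percept activation vector and a clipping ceiling (both of which are my assumptions, not details stated in the post):

```python
def dream_activation(perceptual, feedback, ceiling=1.0):
    """Dream activation per percept: the sum of current perceptual activation
    and predictor feedback, clipped to a ceiling (the clipping is an assumption)."""
    return [min(a + f, ceiling) for a, f in zip(perceptual, feedback)]

print(dream_activation([0.2, 0.8, 0.0], [0.3, 0.5, 0.1]))  # [0.5, 1.0, 0.1]
```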



System Dynamics (84,990 frames)

Posted: February 12, 2014 at 6:17 pm

Now that the prediction feedback mechanism and arousal have been written, I’ve been able to do some early tests to see what the system’s behaviour is like. So far I’ve only been running short tests of one day/night cycle, so the degree of learning from the predictor is quite low.

This test is implemented as the system is expected to work, where ongoing external stimulus adds noise to the predictor feedback loop. The dynamics are quite simple for now: the three system states (dreaming, mind-wandering and waking) are all discrete and mutually exclusive. Mind-wandering and dreaming are identical, where the current state of activation is the initial input to the predictor. While the system continues to mind-wander or dream, the next state is the predictor output combined with external stimulus activation. Mind-wandering is triggered by a lack of arousal (change over time) in the external stimulus, and dreaming is triggered by the circadian clock. As hard thresholds are used to trigger mind-wandering and dreaming, there are some oscillations between states due to noisy arousal and/or brightness. Following is the system state plotted over time; the subtle dynamics are difficult to see because of the large number of frames.
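The state-selection logic described above can be sketched as a small decision rule. The threshold value and function names here are hypothetical; the post only says that hard thresholds are used:

```python
from enum import Enum, auto

class State(Enum):
    WAKING = auto()
    MIND_WANDERING = auto()
    DREAMING = auto()

# Hypothetical threshold; the actual value used by the system is not stated.
AROUSAL_THRESHOLD = 0.1

def next_state(arousal, circadian_night):
    """Discrete, mutually exclusive states chosen by hard thresholds.
    Dreaming is triggered by the circadian clock; mind-wandering by a lack
    of arousal (little change in external stimulus); waking otherwise."""
    if circadian_night:
        return State.DREAMING
    if arousal < AROUSAL_THRESHOLD:
        return State.MIND_WANDERING
    return State.WAKING
```

Because the decision depends on noisy arousal measured against a hard threshold, values hovering near the threshold flip the state back and forth, which is exactly the oscillation between states noted above.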