First Dream sequence generated by predictive model!

Posted: July 26, 2017 at 6:21 pm

The following images compare three modes of visual mentation, all using the restricted set of 1000 percepts. The top image is the “Watching” mode, where percepts are drawn at the same locations at which they are sensed. The middle image is something like “Imagery”: percept positions are random but constrained by the distribution of percept positions in Watching, and therefore still tied to sensation. The bottom image is a first attempt at dreaming decoupled from sensory information: percepts are positioned randomly, constrained only by the distribution of percept positions as learned by an LSTM network. The position in time and space of each percept is wholly determined by the LSTM predictive model.
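To make the three modes concrete, here is a minimal sketch of how percept positions might be sampled in each. The array shapes, variable names, and the per-percept Gaussian fit are my own illustrative assumptions; the post does not specify the estimator or the data layout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T time-steps, N percepts, 2-D (x, y) positions.
T, N = 100, 1000
watching = rng.random((T, N, 2))  # stand-in for sensed percept positions

def watching_mode(t):
    # "Watching": percepts appear exactly where they were sensed.
    return watching[t]

# "Imagery": positions are random but drawn from the distribution of
# positions seen during Watching (fit here as a per-percept Gaussian;
# the post does not say which estimator it actually uses).
mu, sigma = watching.mean(axis=0), watching.std(axis=0)

def imagery_mode():
    return rng.normal(mu, sigma)

def dreaming_mode(pred_mu, pred_sigma):
    # "Dreaming": the same sampling step, but the distribution parameters
    # come from a predictive model (an LSTM in the post), not sensation.
    return rng.normal(pred_mu, pred_sigma)
```

The only thing that changes across the three modes is where the distribution parameters come from: sensation itself (Watching), statistics of past sensation (Imagery), or a predictive model (Dreaming).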

What keeps this from being ‘real’ dreaming (according to the Integrative Theory) is that the sequence of distributions generated by the predictive model is seeded by every time-step in Watching, which keeps it from diverging significantly from Watching. In real dreaming, a single time-step would seed a feedback loop in the predictive model, generating a sequence that is expected to diverge significantly from Watching. I think these are working quite well; sampling positions from distributions certainly softens a lot of the structure in Watching, but holds onto some resemblance. There is some literature on the possibility of mental imagery and dreaming being hazier and less distinct than external perception. I’ve also included a video at the bottom that shows the whole reconstructed sequence from the LSTM model.
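The distinction comes down to two generation loops: teacher-forced (re-seeded from Watching at every step, as in these images) versus free-running (a single seed fed back through the model, as in ‘real’ dreaming). A sketch of the two loops, with a toy stand-in for the LSTM:

```python
import numpy as np

def generate_seeded_every_step(model, watching_seq):
    """Teacher-forced generation, as in this post: the model is re-seeded
    with the real Watching frame at every time-step, so its output
    cannot drift far from sensation."""
    return [model(frame) for frame in watching_seq]

def generate_free_running(model, seed_frame, steps):
    """Free-running generation ('real' dreaming per the Integrative
    Theory): one frame seeds a feedback loop, and each prediction becomes
    the next input, so the sequence is free to diverge from Watching."""
    out, frame = [], seed_frame
    for _ in range(steps):
        frame = model(frame)  # feed the model's own prediction back in
        out.append(frame)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for the LSTM: a leaky identity map plus noise.
    toy_model = lambda x: 0.9 * x + 0.1 * rng.standard_normal(x.shape)
    # Hypothetical (time-steps, percepts, xy) sequence.
    watching_seq = rng.standard_normal((50, 1000, 2))
    teacher_forced = generate_seeded_every_step(toy_model, watching_seq)
    dreamed = generate_free_running(toy_model, watching_seq[0], steps=50)
```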

[Images: Watching, Imagery, and Dreaming (1000 percepts), frame 1]

[Images: Watching, Imagery, and Dreaming (1000 percepts), frame 500]