First Dream sequence generated by predictive model!

Posted: July 26, 2017 at 6:21 pm

The following images show a comparison of three modes of visual mentation, all using the restricted set of 1000 percepts. The top image is the “Watching” mode, where percepts are positioned where they are sensed. The middle image is something like “Imagery”, where the positions of percepts are random but constrained by the distribution of percept positions in Watching, and therefore still tied to sensation. The bottom image is a first attempt at dreaming decoupled from sensory information: percepts are positioned randomly, but constrained by the distribution of percepts as learned by an LSTM network. The position in time and space of each percept is wholly determined by the LSTM predictive model.
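The three modes above differ only in where each percept's position comes from. As a minimal sketch (all names, shapes, and the Gaussian stand-in for the LSTM's predicted distribution are my assumptions, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1000 percepts, each with the (x, y) position
# where it was sensed in the source frames.
sensed_xy = rng.uniform(0, 1, size=(1000, 2))

# "Watching": percepts are placed exactly where they were sensed.
watching_xy = sensed_xy

# "Imagery": positions are random, but drawn from the empirical
# distribution of Watching positions (approximated here by
# resampling the observed positions with replacement).
idx = rng.integers(0, len(sensed_xy), size=len(sensed_xy))
imagery_xy = sensed_xy[idx]

# "Dreaming": positions come from a distribution predicted by a
# model rather than from sensation. A Gaussian fitted to the data
# stands in for the LSTM's predicted distribution in this sketch.
mu, sigma = sensed_xy.mean(axis=0), sensed_xy.std(axis=0)
dreaming_xy = rng.normal(mu, sigma, size=sensed_xy.shape)
```

The point of the sketch is the progression: Watching copies sensed positions, Imagery resamples them, and Dreaming replaces the empirical distribution with a learned one.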

What keeps this from being ‘real’ dreaming (according to the Integrative Theory) is that the sequence of distributions generated by the predictive model is seeded by every time-step in Watching (keeping it from diverging significantly from Watching). In real dreaming, a single time-step seeds a feedback loop in the predictive model, generating a sequence that is expected to diverge significantly from Watching. I think these are working quite well; the generation of positions from distributions certainly softens a lot of the structure in Watching, but holds onto some resemblance. There is some literature on the possibility of mental imagery and dreaming being hazier and less distinct than external perception. I’ve also included a video at the bottom that shows the whole reconstructed sequence from the LSTM model.
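The difference between the current seeding and the feedback loop of ‘real’ dreaming can be sketched as the usual contrast between per-step re-seeding and free-running generation. The `predict_next` function below is a toy stand-in for one step of the predictive model, not the actual LSTM:

```python
import numpy as np

def predict_next(state, x):
    """Toy stand-in for one step of the predictive model: returns
    (new_state, prediction). A real LSTM would output a distribution
    over percept positions rather than a scalar."""
    new_state = 0.9 * state + 0.1 * x
    return new_state, new_state

watching = np.linspace(0.0, 1.0, 20)  # toy sensory sequence

# Current setup: the model is re-seeded with every Watching time-step,
# so its output cannot drift far from sensation.
state = 0.0
seeded = []
for x in watching:
    state, y = predict_next(state, x)
    seeded.append(y)

# 'Real' dreaming per the Integrative Theory: seed once, then feed
# each prediction back in as the next input, letting the generated
# sequence diverge from Watching.
state = 0.0
state, y = predict_next(state, watching[0])  # single seed
free_running = [y]
for _ in range(len(watching) - 1):
    state, y = predict_next(state, y)
    free_running.append(y)
```

With the toy model, the seeded sequence tracks the rising Watching input while the free-running one stays near its single seed, which is exactly the divergence (or lack of it) at issue.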

[Images: Watching-1000c-0018879, Imagery-1000c-000001, Dreaming-1000c-000001]


Frame Reconstructions from 1000 Clusters

Posted: July 18, 2017 at 11:00 am

Following from my previous post, I’ve been investigating reducing the number of clusters in order to scale the predictor problem (for Dreaming) down to something feasible. The two pairs of images below show the original reconstructions with 200,000 clusters and the corresponding reconstructions with 1000 clusters. For more context, see this post. I’ll try generating a short sequence and see how they look in context.
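Re-clustering to shrink the predictor's output space can be sketched with scikit-learn's `MiniBatchKMeans` (an assumption on my part; the feature vectors and shapes below are placeholders, not the real segment features):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)

# Placeholder for the per-segment feature vectors; the real data
# are features extracted from image segments.
segments = rng.random((5000, 16))

# Far fewer means makes the predictor's output space tractable:
# 1000 cluster labels instead of 200,000.
kmeans = MiniBatchKMeans(n_clusters=1000, n_init=3, random_state=0)
labels = kmeans.fit_predict(segments)

# Each segment is then reconstructed from its cluster centroid,
# which is where the loss of detail in the reconstructions comes from.
reconstruction = kmeans.cluster_centers_[labels]
```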

[Images: watching-0018879, Dreaming-0018879]


Collages from a Limited Number of Clusters

Posted: July 17, 2017 at 5:13 pm

In working on Dreaming, I recalculated the K-Means segment clusters (percepts) with only 1000 means (there were 200,000 previously). The images below show the results. It seems that when it comes to collages, the most interesting segments are the outliers (and, I expect, probably the raw segments). Because so many segments get averaged into each of these clusters, the resulting percepts end up very small, and 1000 means is just not enough to capture the width and height features (hence the two very wide and very tall percepts). Clearly the colour palette is still preserved, but that is pretty much it. The areas of colour below are so small that these images end up being only 1024px wide or smaller. These SOMs are trained over only 10,000 iterations to get a sense of what all the percepts look like together.
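Why averaging so many segments into each cluster washes out everything but colour can be shown with a toy numpy example (the segment sizes and values here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 small, dissimilar RGB "segments" assigned to one cluster.
segments = rng.random((200, 8, 8, 3))

# The cluster centroid is the mean of its members: every pixel is
# pulled toward the global average, preserving the overall colour
# but flattening out spatial detail.
centroid = segments.mean(axis=0)

# Per-pixel variance of the centroid is far below that of any
# member, i.e. the "average segment" is much flatter than the
# real ones.
member_var = segments.var(axis=(1, 2, 3)).mean()
centroid_var = centroid.var()
```

The more (and more varied) the segments averaged into a cluster, the flatter its centroid, which matches the flat areas of colour in the collages below.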

[Image: SOMResults_10000_0-999-Collage-1-0.0625]