Synthetic Dataset – Replay

Posted: May 15, 2014 at 6:38 pm

Due to the results from the previous posts, I thought I would try another approach: train the network by feeding it not the state at a single moment in time, but a concatenation of multiple moments in time into one input vector, so that the network has some history to learn from. After implementing this I did not find any improvement over the old method (in the error plot further below, PHASE2 is the old method and PHASE3 is the new method).
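For concreteness, the windowed inputs described above could be built roughly as in the sketch below. This is a minimal NumPy illustration, not the actual PHASE3 code; the function name, window length and state dimensionality are all assumptions.

    import numpy as np

    def make_windowed_inputs(states, window):
        """Concatenate `window` consecutive state vectors into single input vectors.

        states: array of shape (T, D) -- one D-dimensional state per time step.
        Returns inputs of shape (T - window, window * D) and targets of shape
        (T - window, D), where each target is the state following its window.
        """
        T, D = states.shape
        inputs = np.stack(
            [states[t:t + window].reshape(-1) for t in range(T - window)]
        )
        targets = states[window:]
        return inputs, targets

    # Hypothetical example: 1000 time steps of a 10-dimensional state, window of 5.
    states = np.random.rand(1000, 10)
    X, y = make_windowed_inputs(states, window=5)
    print(X.shape, y.shape)  # (995, 50) (995, 10)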

[Figure: prediction error over training, PHASE2 vs. PHASE3]

If anything, it appears that the window method is worse. This could be because more inputs mean more randomly initialized weights. While testing this I regenerated the previous PHASE2 results, and noticed two very interesting cases of predictor feedback:

[Figures: training input (left) and predictor-feedback output (right) for two PHASE2 runs, each with 15 hidden units, constant learning rate and 30 epochs]

The image on the left is the training input; the images on the right are the results of predictor feedback after 30 epochs with 15 hidden units and constant learning rate. The only difference between these runs (and others like the ones shown in previous posts) is the initial weights. So the good news is that it is possible for the MLP to reproduce the input (replay the sequence) in feedback. The bad news is that this behaviour seems to be quite rare, and I can't see any way to make the network produce it regularly, even in this simple toy case. This behaviour has not been seen with the PHASE3 method, so it seems that sticking with the initial report is the best plan.
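The feedback process referred to above can be expressed roughly as the closed loop sketched below, where the predictor's output becomes its next input. Here `mlp_predict` is a hypothetical stand-in for the trained MLP's forward pass, and the seed state and step count are assumptions.

    import numpy as np

    def replay(mlp_predict, seed_state, steps):
        """Run the predictor in closed loop: its output becomes its next input.

        mlp_predict: function mapping a state vector to the predicted next state.
        seed_state:  a single state vector taken from the training sequence.
        steps:       how many feedback iterations to run.
        Returns the generated sequence, including the seed.
        """
        sequence = [np.asarray(seed_state)]
        for _ in range(steps):
            sequence.append(mlp_predict(sequence[-1]))
        return np.stack(sequence)

    # Hypothetical usage with an identity "predictor", just to show the call shape:
    generated = replay(lambda s: s, seed_state=np.zeros(10), steps=100)
    print(generated.shape)  # (101, 10)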

In parallel I have been running a test on the full dataset (using the knowledge gleaned from the synthetic dataset) with a very high arousal threshold, which causes clustering to happen quite rarely; this should increase the visual diversity of percepts, and therefore also enrich dreams and mind-wandering. A rough sketch of this arousal gate follows.
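As an illustration of the gating just described (assumed behaviour only; the threshold value, names and clustering function are all hypothetical), clustering would fire only when arousal exceeds the threshold:

    AROUSAL_THRESHOLD = 0.95  # assumed value; set very high so clustering is rare

    def maybe_cluster(arousal, percepts, cluster_fn):
        """Run clustering only when arousal exceeds the threshold;
        otherwise leave the percepts unchanged."""
        if arousal > AROUSAL_THRESHOLD:
            return cluster_fn(percepts)
        return percepts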