After trying to add a little bit of noise to the state of the inputs to the predictor, I noticed that the noise had no effect on the output. I suspected an issue with my implementation of the feedback mechanism, so I rewrote it; the behaviour did not change. This must be due to the noise tolerance of the MLP. I ran a test last night where, every 50 iterations, I inserted a dense random vector (a random selection of which clusters are present or not) that was ORed with the previous network output before feeding it back. The result is that the injection does clearly change the network's behaviour, but only for 1-2 iterations before the network stabilizes again to a static, periodic, or complex pattern.
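The injection step above can be sketched as follows. This is a minimal illustration, not the actual experiment code; the function name, the `period` and `density` parameters, and the use of NumPy boolean arrays are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_boolean_noise(prev_output, iteration, period=50, density=0.5):
    """Every `period` iterations, OR a dense random boolean vector
    (a random selection of active clusters) with the previous network
    output before it is fed back. (Hypothetical sketch of the test.)"""
    if iteration % period == 0:
        noise = rng.random(prev_output.shape) < density
        return np.logical_or(prev_output, noise)
    return prev_output
```

Because the noise is ORed in, percepts that were already active stay active; the injection can only turn percepts on, never off.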
The complex behaviour seems quite common, but the problem is that percepts present at one time step tend either to stay present (static), to turn on and off (periodic), or to exhibit some other apparently chaotic behaviour (complex), while percepts that are not present tend to stay absent. Thus a small set of percepts is activated in a complex pattern during feedback, but that does not seem to result in the activation of percepts that were not present earlier. In short, dream activation seems highly constrained to the latent perceptual activation that initiated it.
So injecting periodic Boolean noise seems to be a non-starter: to elicit even a small change in network behaviour, the inserted randomness would have to dominate the activation, and would thus contrast sharply with the network's behaviour outside that scope. There seem to be a few options. Rather than injecting noise at the Boolean level, I could add a small amount of constant floating-point noise to the values after discretization; this means adding a new method to the predictor class that adds noise to the values fed to the network. I am currently trying another idea: shifting (and wrapping) the state vector by one unit every 50 frames. Since this modifies the same vector, it could cause a more lasting change, would certainly have the same density of percepts as the feedback, and would activate percepts adjacent to those previously activated. The latter point is an issue, because there is no meaningful relationship between neighbouring percepts in the vector. Shifting the vector seems to have had no impact on the network's output; it appears to be treated as noise and ignored. It seems it is time to implement some continuous noise in the predictor class.
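For reference, the two alternatives could look roughly like this. Both functions are hypothetical sketches: the names, the `period` argument, and the Gaussian noise with a constant `scale` are assumptions on my part, not the predictor class's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift_state(state, iteration, period=50):
    """Shift (and wrap) the boolean state vector by one unit every
    `period` frames. Preserves the density of active percepts, but
    neighbouring positions in the vector carry no meaningful relation."""
    if iteration % period == 0:
        return np.roll(state, 1)
    return state

def add_continuous_noise(values, scale=0.05):
    """Hypothetical predictor-class method: add a small amount of
    constant-scale Gaussian noise to the discretized values before
    they are fed to the network."""
    return values + rng.normal(0.0, scale, size=values.shape)
```

Unlike the OR-injection, the shift conserves the number of active percepts, and the continuous noise perturbs every input slightly on every step rather than overwriting a few inputs completely at intervals.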