It seems that one of the foundational ideas of the singularity is that the machines we build will exceed our intentions and act in ways beyond our will and control. Now, I do see how machines can act in ways we do not expect, and when a machine holds power over aspects important to human life, that can lead to problems. This happens all the time due to bugs and human error, even in the most non-intelligent machines. The centre of the problem seems to be the power we hand over to automated systems.
After the previous tests I’ve gotten a better sense of the prediction problem. We realized there may not have been enough data in my previous tests for sufficient training (corresponding to a few days of the full system processing). Additionally, I found a few issues with the segmentation code that could have changed the behaviour of clusters over time. I spent the 6 days necessary to train on a new data-set. The full data-set is composed of only the daytime periods (including sunset and sunrise), and includes approximately 300,000 frames. In 6 days I processed approximately half the set, 150,000 frames. Note that this is actually significantly more data (for the same number of frames) compared to previous examples, since those included largely uninformative night frames. Following is the resulting error from the same MLP learning procedure used previously, presenting each pattern only once without repeated epoch training, and reporting error after each iteration:
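As an aside, the training regime described above (each pattern presented exactly once, no repeated epochs, error recorded at every iteration) could be sketched roughly as follows. The network sizes, learning rate, and data shapes here are placeholders, since the post doesn't specify them; the frame data is simulated with random binary cluster-state vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the real frames: each row is a binary
# cluster-state vector; the target is the next frame's state vector.
frames = rng.integers(0, 2, size=(1000, 8)).astype(float)
X, Y = frames[:-1], frames[1:]

# Small MLP: 8 inputs -> 16 tanh hidden units -> 8 sigmoid outputs.
W1 = rng.normal(0.0, 0.1, (8, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 8))
b2 = np.zeros(8)
lr = 0.05

errors = []
# Single pass: each pattern is presented exactly once (no epochs),
# and the error is recorded after every iteration.
for x, y in zip(X, Y):
    h = np.tanh(x @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    errors.append(np.mean((out - y) ** 2))
    # Plain gradient descent on the squared error.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * np.outer(h, d_out)
    b2 -= lr * d_out
    W1 -= lr * np.outer(x, d_h)
    b1 -= lr * d_h
```

The `errors` list then gives one error value per iteration, which is what the plot below reports for the real data.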
I ran a test training the MLP on data filtered such that at least one cluster must change state between subsequent frames. This time the result is a periodic dream that does not stabilize over 10,000 iterations:
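The filtering step described here amounts to keeping only the frame-to-frame training pairs in which some cluster changes state. A minimal sketch, again using hypothetical binary cluster-state vectors in place of the real data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster-state vectors, one row per frame (binary states).
states = rng.integers(0, 2, size=(100, 8))
# Force a couple of repeated frames so the filter has something to drop.
states[10] = states[9]
states[50] = states[49]

# Keep only (frame, next frame) pairs where at least one cluster
# changes state between the two frames.
pairs = [
    (states[i], states[i + 1])
    for i in range(len(states) - 1)
    if np.any(states[i] != states[i + 1])
]
```

Static transitions are discarded, so the network only ever sees patterns in which something actually happens between frames.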