Dreaming Machine #3 Test with Tower View

Posted: February 6, 2015 at 11:27 am

This test only lasted a few hours, and the weather was really terrible (constant heavy rain); following are the results.

[Image: output-0000004]


ISEA 2014 in Dubai

Posted: November 20, 2014 at 4:54 pm

Following are a few images I captured of the Dreaming Machine while uninstalling it, which thus show quite limited variation. Night was artificially triggered by putting the lens cap on the camera. It was quite difficult to find an appropriate view for the camera, and this was the best possible from Zayed University (due to constraints on capturing images of women in the U.A.E.).

[Image: 20141103_001]


Prep for ISEA 2014 in Dubai

Posted: October 29, 2014 at 10:16 am

While I had attempted to get real-time transitions into the Dreaming Machine #3 software, I hit a last-minute issue with an interaction between system updates and performance. Rendering each frame went from 0.033 to 0.2 seconds, too slow for a smooth transition between images generated in the thread. After spending a week trying to fix it, I gave up, so the version to be shown at ISEA will not have any transitions. The good news is that the shader-based renderer uses half the RAM, so I've increased the number of percepts from 4000 to 6000 (more caused problems with the MLP). I also fixed crashes related to the IP camera being unreachable, which should solve the issues that occurred during ACM Creativity and Cognition. Following is a selection of images produced by the system during testing, using a live camera feed of my living room:

[Image: dubai-prep-13]


PhD Defence – Passed!

Posted: September 10, 2014 at 12:15 pm

On September 9th, 2014 I successfully defended my PhD Dissertation, entitled A Machine That Dreams: An Artistic Enquiry Leading to Integrative Theory and Computational Artwork.

PhD Defence Slides


Watching and Dreaming (2001: A Space Odyssey) (2)

Posted: July 16, 2014 at 6:24 pm

Some more work in progress on Watching and Dreaming, this time at the correct aspect ratio:

[Image: output-0000775]


Watching and Dreaming (2001: A Space Odyssey)

Posted: July 2, 2014 at 12:57 pm

Some work-in-progress using the Dreaming Machine #3 system to generate imagery learned from Kubrick’s 2001: A Space Odyssey, tentatively titled Watching and Dreaming (2001: A Space Odyssey):

[Image: shortlist48]


Static Dreams (again)

Posted: June 18, 2014 at 4:31 pm

After trying to add a little bit of noise to the state of the inputs to the predictor, I noticed that the noise had no effect on the output. I thought there was an issue with my implementation of the feedback mechanism, so I rewrote it. The behaviour did not change, which must be due to the noise tolerance of the MLP. I ran a test last night where, every 50 iterations, I inserted a dense random vector (a random selection of which clusters are present or not) that was ORed with the previous network output before feeding back. The result is that the injected vector does clearly change the network's behaviour, but only for 1–2 iterations before the network stabilizes again into a static, periodic or complex pattern.
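A minimal sketch of that injection step, assuming the state is held as a Boolean vector; the function name and the 50% density are my assumptions, not the actual Dreaming Machine code:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Every 50 iterations, OR a dense random Boolean vector into the
// fed-back network output (hypothetical sketch).
void injectNoise(std::vector<bool> &state, std::mt19937 &rng,
                 double density = 0.5) {
    std::bernoulli_distribution coin(density);
    for (std::size_t i = 0; i < state.size(); ++i)
        state[i] = state[i] || coin(rng);  // OR the random activation in
}

// In the feedback loop (sketch):
//   if (iteration % 50 == 0) injectNoise(state, rng);
//   networkInput = state;  // feed back as the next input
```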

The complex behaviour seems quite common, but the problem is that percepts that are present at one time step tend either to stay present (static), to turn on and off (periodic), or to exhibit some other apparently chaotic behaviour (complex), while percepts that are not present tend to stay absent. Thus a small set of percepts is activated in a complex pattern during feedback, but that does not seem to result in the activation of percepts that were not present earlier. In short, dream activation seems highly constrained to the latent perceptual activation that initiated it.

So the idea of injecting periodic Boolean noise seems to be a non-starter: to elicit even a small change in network behaviour, the inserted randomness would have to dominate the activation, and would thus contrast strongly with the behaviour of the network outside that scope. There seem to be a few options. Rather than injecting noise at the Boolean level, I could add a small amount of constant floating-point noise to the values after discretization; this means adding a new method to the predictor class that adds noise to the values fed to the network. I'm currently trying another idea, where I shift (and wrap) the state vector by one unit every 50 frames. Since this modifies the same vector, it could cause more lasting change, would certainly have the same density of percepts as the feedback, and would activate percepts adjacent to those previously activated. The latter point is an issue because there is no meaningful relationship between neighbouring percepts in the vector. So far, shifting the vector seems to have had no impact on the network's output; it appears to be treated as noise and ignored. It seems it is time to try implementing some continuous noise in the predictor class.
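Minimal sketches of those two ideas, under the same caveat that identifiers and the noise magnitude are my assumptions:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// (1) Shift (and wrap) the Boolean state vector by one unit.
void shiftState(std::vector<bool> &state) {
    if (state.empty()) return;
    bool last = state.back();
    for (std::size_t i = state.size() - 1; i > 0; --i)
        state[i] = state[i - 1];
    state[0] = last;  // wrap the last unit around to the front
}

// (2) Add a small amount of continuous noise to the discretized (0/1)
// values just before they are fed to the network; this would live as a
// new method on the predictor class.
void addContinuousNoise(std::vector<float> &values, std::mt19937 &rng,
                        float amplitude = 0.05f) {
    std::uniform_real_distribution<float> jitter(-amplitude, amplitude);
    for (float &v : values)
        v += jitter(rng);
}
```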


Percept Diversity – Videos

Posted: May 27, 2014 at 2:48 pm

Following are some short videos that show some of the more interesting dreams generated by the system in the last test. They give a sense of the periodicity of some dreams and how dreams look with these very noisy percepts:

http://www.ekran.org/ben/video/supermicro_256575_highArousal_A.avi

http://www.ekran.org/ben/video/supermicro_256575_highArousal_B.avi

http://www.ekran.org/ben/video/supermicro_256575_highArousal_C.avi


Percept Diversity

Posted: May 16, 2014 at 6:20 pm

As refining the MLP beyond what it already does seems no easy task, I thought I would move to the previous problem: making sure there is enough temporal diversity in the percepts for the predictor to learn from longer temporal sequences. I'm running a proper test now, but following are a few frames selected from a botched previous test. The percepts are drawn on a black background, and there is no visual difference between perception, mind-wandering or dreaming.

[Image: shortlist19]


Synthetic Dataset – Replay

Posted: May 15, 2014 at 6:38 pm

Due to the results from the previous posts, I thought I would try another approach: train the network by feeding it not the state at one moment in time, but multiple moments of time concatenated into a single vector, so that the network has some history to learn from. In implementing this, I did not find any improvement over the old method (PHASE2 is the old method, PHASE3 is the new method):

[Image: phase3.error]
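A minimal sketch of what the new method amounts to (the layout and the name concatHistory are my assumptions): the last H states are concatenated into one input, so the MLP's input layer grows from N to H×N units while the target remains the state at the next step.

```cpp
#include <cstddef>
#include <vector>

// Concatenate the most recent H state vectors (oldest first) into a
// single input vector. Assumes history.size() >= H.
std::vector<float> concatHistory(
        const std::vector<std::vector<float>> &history, std::size_t H) {
    std::vector<float> input;
    input.reserve(H * history.back().size());
    for (std::size_t t = history.size() - H; t < history.size(); ++t)
        input.insert(input.end(), history[t].begin(), history[t].end());
    return input;  // train against the state at the next time step
}
```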


Synthetic Dataset Continued

Posted: May 5, 2014 at 10:24 pm

If the MLP is able to learn a sequence, and demonstrates that learning by producing the correct pattern for a particular input, then feedback should result in the network replaying the sequence. From the network's perspective, there is no difference in being fed the state at t+1 no matter where that pattern comes from. So why is the network apparently learning the sequence in the previous post, while feedback does not result in replaying the learned sequence?
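To make the puzzle concrete, here is a minimal sketch of the feedback regime (the Predictor callable and State layout are stand-ins of mine, not the actual code). During training the network is always handed the true state at time t; in feedback its own, possibly imperfect, output becomes the next input, so one suspect is that small per-step errors compound:

```cpp
#include <vector>

using State = std::vector<float>;

// Feedback ("dreaming") loop: the network's own output becomes its
// next input. If the learned mapping were exact this would replay the
// training sequence; any per-step error is fed back.
template <typename Predictor>
State dream(Predictor predict, State state, int steps) {
    for (int t = 0; t < steps; ++t)
        state = predict(state);  // contrast: training always uses the true state(t)
    return state;
}
```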


Synthetic Dataset

Posted: May 1, 2014 at 5:33 pm

In order to deal with the problem of static dreams, Philippe asked me to create a synthetic data set with particular temporal properties. The idea is that we can use it to get a sense of both the distribution of percepts over time and the resemblance / boundedness of dreaming and mind-wandering compared to perception. The data-set is 2000 frames long and contains three objects: a blue circle that gets bigger, a red circle that gets smaller, and a green rectangle that moves from left to right. The background toggles between white and grey every 200 frames. Additionally, there is a single all-black frame at the end of the data-set to mark epochs. Following are the first and last frames of the synthetic data set, not including the trailing black frame:
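A minimal sketch of such a generator (frame size, radii, speeds and file names are my assumptions; only the structure follows the description above):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    const int frames = 2000, w = 640, h = 480;
    for (int t = 0; t <= frames; ++t) {
        // Background toggles between white and grey every 200 frames.
        bool grey = (t / 200) % 2;
        cv::Scalar bg = grey ? cv::Scalar(128, 128, 128)
                             : cv::Scalar(255, 255, 255);
        cv::Mat frame(h, w, CV_8UC3, bg);
        float p = float(t) / frames;  // progress through the data-set, 0..1
        // Blue circle grows; red circle shrinks. (OpenCV is BGR.)
        cv::circle(frame, cv::Point(w / 4, h / 2), int(10 + 90 * p),
                   cv::Scalar(255, 0, 0), cv::FILLED);
        cv::circle(frame, cv::Point(3 * w / 4, h / 2), int(100 - 90 * p),
                   cv::Scalar(0, 0, 255), cv::FILLED);
        // Green rectangle moves from left to right.
        int x = int((w - 60) * p);
        cv::rectangle(frame, cv::Point(x, h / 4), cv::Point(x + 60, h / 4 + 40),
                      cv::Scalar(0, 255, 0), cv::FILLED);
        // Single all-black frame at the very end to mark the epoch.
        if (t == frames) frame.setTo(cv::Scalar(0, 0, 0));
        cv::imwrite(cv::format("synthetic-%06d.png", t), frame);
    }
    return 0;
}
```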


90,000 frame test on new machine

Posted: April 4, 2014 at 5:03 pm

The first test on the new machine went quite well. The system is now storing 4000 percepts and seems to be performing smoothly. Following is a dump of all percepts (stacked on top of each other) after ~90,000 frames:

[Image: percepts]


Performance of New Machine

Posted: April 1, 2014 at 5:32 pm

Thanks to my supervisor I have a new, faster shuttle to work with. The previous machine was almost 5 years old, and considering the heft of the computation involved in this project, a faster machine goes a long way. The new machine is an Intel quad-core with 32GB of RAM and a 3GB GeForce 780 graphics card (which was so big it needed a little tweaking by the folks at CNS to fit, involving removing the whole drive-bay assembly and leaving SSD as the only storage option). Here is a comparison of performance between the two systems ("micro" is the old shuttle, and "supermicro" the new one):

[Image: performance]

This plot shows the real time (in seconds) taken by the OpenCV thread to process each frame. Note the huge range in time between frames on micro, compared to the much more consistent and compact processing time on the new machine. The mean time per frame (during day frames) on micro was 0.58s, and on supermicro it is 0.21s: an almost three-fold increase in performance. Now we'll see about performance when maxing out the number of percepts (up to ~4000 from 900) and filling up those 32GB of RAM.
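For reference, a minimal sketch of how such per-frame wall-clock timing can be collected (hypothetical; processFrame stands in for the OpenCV thread's work on one frame):

```cpp
#include <chrono>

// Time one unit of work in seconds of real (wall-clock) time.
template <typename F>
double timeFrame(F processFrame) {
    auto t0 = std::chrono::steady_clock::now();
    processFrame();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```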


Some example images from the last test

Posted: March 26, 2014 at 10:11 am

Some recent images generated by the system. This is a mix of perceptual and dreaming / mind-wandering processes. The perceptual images are generally brighter and more cohesive, in that they reflect a full frame with a lot of information, while the dreaming / mind-wandering images are more fragmented. There are some exceptions: the image of the women walking by, for instance, is actually perception. Note that perception can lead to impossible images, like the partially present car and cyclist, and the man walking by who seems fused with a piece of a trunk. These are perceptual errors (visual illusions) that result from the constructive nature of perception; they are bizarre because of the very limited perceptual ability of the system.

[Image: shortlist2]



Aesthetic Variation

Posted: March 20, 2014 at 11:36 am

I started dumping the most recent time (frame number) at which each percept was clustered, to get a sense of the range of time encapsulated by the percepts. It turns out that the range is very small: in the last test the difference between the min and max times was 29 frames, which represents only about one minute of real time. So even if the predictor were making broad predictions, the percepts would not represent them! My intuition is that there are not enough percepts to represent the complexity of a real-world scene for more than a short period of time. Of course, since all new percepts are clustered, this makes sense; not doing so would mean blindness to the novel. Indeed, since percepts are weighted clusters, they hold more information than is represented by the time of the last clustering operation. Following is a composite of all the percepts after ~90,000 frames, which clearly appears fairly cohesive in time and lighting:

[Image: percepts]
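A hypothetical sketch of the dump behind that 29-frame figure (the PerceptInfo struct and field name are mine):

```cpp
#include <algorithm>
#include <vector>

struct PerceptInfo {
    long lastClusteredFrame;  // frame number of the most recent clustering
};

// Range of time encapsulated by the current percepts (assumes non-empty).
long timeRange(const std::vector<PerceptInfo> &percepts) {
    auto cmp = [](const PerceptInfo &a, const PerceptInfo &b) {
        return a.lastClusteredFrame < b.lastClusteredFrame;
    };
    auto mn = std::min_element(percepts.begin(), percepts.end(), cmp);
    auto mx = std::max_element(percepts.begin(), percepts.end(), cmp);
    return mx->lastClusteredFrame - mn->lastClusteredFrame;  // 29 in the last test
}
```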



Density of Dreams, Arousal and Training Samples

Posted: March 13, 2014 at 6:55 pm

Now that all the system components have been implemented, I’ve finally had a chance to get a proper look at the system’s behaviour. Following are three images that show the display during perception, mind-wandering and dreaming:

[Image: perception]



Memory Leak Solved (for real this time)

Posted: March 4, 2014 at 2:19 pm

While out of town I ran the new non-cropped code over the whole ~250,000-frame data-set. The results clearly show that keeping the percept images at a fixed size solves the memory-leak problem. Unfortunately, the program crashed before writing the percepts to disk, by attempting to load a non-existent frame past the end of the data-set. Thus, I have no idea what the 350 percepts generated by the system looked like by the end of processing.

[Image: noCrop_256574.debug]
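A hypothetical guard for that crash (the file naming and savePercepts are stand-ins, not the actual code): detect the end of the data-set and dump the percepts before exiting, rather than assuming the next frame exists.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

// Returns false when the requested frame does not exist, i.e. we have
// read past the end of the data-set, instead of crashing downstream.
bool loadFrame(int frameNum, cv::Mat &frame) {
    frame = cv::imread(cv::format("frame-%06d.png", frameNum));
    return !frame.empty();
}

// In the main loop (sketch):
//   if (!loadFrame(frameNum, frame)) { savePercepts(); break; }
```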



Noncropped Percepts

Posted: February 20, 2014 at 10:04 pm

I changed the percepUnit class so that percepts are stored at a fixed resolution (the full resolution of the input frames). This way OpenCV does not try to reallocate any memory when merging percepts, and indeed my leak is gone. That is the good news. The bad news is that because of the size of the input frames (1920×1080), and the fact that all percepts segmented from a single frame hold their own copy of the same data, memory usage is extreme. Before, I could probably have held 3000+ percepts in memory; now I can fit only 300. This makes sense, since there are about 100 percepts per input frame. Following are the percepts after a 25,000-frame test generating 200 percepts:

[Image: percepts]
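As a rough back-of-the-envelope on the memory claim above (assuming each percept holds one 8-bit, 3-channel copy of the full frame, which is my simplification): 1920 × 1080 × 3 bytes ≈ 6.2 MB per percept, so 300 percepts come to roughly 1.9 GB, while 3000+ percepts at full resolution would need upwards of 19 GB.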



Sparse Dreams Revisited

Posted: February 19, 2014 at 11:14 am

After the discussion in the previous post I took a look at the state data dumped by the program. While the waking and non-waking (dreaming and mind-wandering) states are clearly differentiable according to the quality of the state data (see this state plot), it seems they are not so easy to distinguish in terms of the number of activated percepts per frame. It actually turns out that the distribution of the number of activated percepts per frame is very similar in the waking and non-waking cases. This indicates to me that the predictor is doing a good job learning, but that something is missing that manifests in the quality of dreaming and mind-wandering, which seems much more stable over time than waking. Following are histograms of the number of percepts active for each frame in the 84,990- and 256,574-frame tests. In these tests the exogenous activation was disabled, so we can look directly at the output of the predictor.
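A minimal sketch of how such a histogram can be computed from the dumped Boolean state data (the state layout is my assumption):

```cpp
#include <map>
#include <numeric>
#include <vector>

// Count the active percepts in each frame's state vector, then bucket
// those counts: hist[k] = number of frames with exactly k active percepts.
std::map<int, int> activationHistogram(
        const std::vector<std::vector<bool>> &states) {
    std::map<int, int> hist;
    for (const auto &frame : states) {
        int active = std::accumulate(frame.begin(), frame.end(), 0);
        ++hist[active];
    }
    return hist;
}
```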
