Clustering and Aesthetics

Posted: March 20, 2013 at 1:38 pm

The clustering code is working pretty well for background percepts. Following is a video that shows the raw frames on the left (in 720p) and the resulting clustered output on the right (also in 720p) over ~300 consecutive frames. Note that the video is quite high resolution (2560×720), so the best playback is likely attained by downloading it (right click and “save video as”) and using a native video player. For each new frame, all regions in the previous frame are compared with the regions in the new frame and clustered: if two regions are sufficiently similar, they are merged, by averaging, into a single percept.
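For illustration, here is a minimal sketch of that merge step (not the project code; the Percept struct, the distance function and the threshold are placeholders):

```cpp
// Sketch of the frame-to-frame merge step described above (not the actual
// project code). A Percept here is reduced to a feature vector; names and
// the similarity threshold are placeholders.
#include <vector>
#include <cmath>

struct Percept {
    std::vector<float> features; // e.g. x, y, w, h, area, L, u, v
    int mergeCount = 1;
};

// Euclidean distance between two feature vectors of equal length.
static float distance(const Percept& a, const Percept& b) {
    float sum = 0.f;
    for (size_t i = 0; i < a.features.size(); ++i) {
        float d = a.features[i] - b.features[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// Merge each region of the new frame into the most similar region of the
// previous frame by averaging their features; otherwise keep it as a new percept.
void mergeFrames(std::vector<Percept>& previous,
                 const std::vector<Percept>& current,
                 float simThreshold) {
    for (const Percept& c : current) {
        Percept* best = nullptr;
        float bestDist = simThreshold;
        for (Percept& p : previous) {
            float d = distance(p, c);
            if (d < bestDist) { bestDist = d; best = &p; }
        }
        if (best) {
            for (size_t i = 0; i < best->features.size(); ++i)
                best->features[i] = (best->features[i] + c.features[i]) * 0.5f;
            best->mergeCount++;
        } else {
            previous.push_back(c); // no sufficiently similar region: a new percept
        }
    }
}
```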

(more…)


Perceptual Clustering

Posted: March 15, 2013 at 4:57 pm

I have integrated the existing segmentation code into openFrameworks and also implemented a first version of the new clustering algorithm (based on BSAS). The clustering algorithm is quite simple: I’m currently using all of the features provided by segmentation — position (x, y), size (width, height, area) and colour (mean of the CIE L, u, v channels) — with similarity measured as the Euclidean distance in this multi-dimensional feature space. I have only tested with background percepts thus far, with the following results:

Background Clustering Example (more…)
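For reference, a minimal sketch of a BSAS-style assignment loop over that feature vector, assuming Euclidean distance; the dissimilarity threshold and cluster cap are placeholders rather than the values used here:

```cpp
// Minimal BSAS-style sketch over the 8-dimensional feature vector described
// above (x, y, width, height, area, mean L, u, v). Not the project code;
// threshold and maxClusters are placeholders.
#include <vector>
#include <cmath>
#include <cstddef>

typedef std::vector<float> Feature;

struct Cluster {
    Feature mean;
    int count;
};

static float euclidean(const Feature& a, const Feature& b) {
    float sum = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// BSAS: assign each sample to the nearest existing cluster, or start a new
// cluster when the nearest one is farther than the dissimilarity threshold.
std::vector<Cluster> bsas(const std::vector<Feature>& samples,
                          float threshold, std::size_t maxClusters) {
    std::vector<Cluster> clusters;
    for (std::size_t s = 0; s < samples.size(); ++s) {
        const Feature& f = samples[s];
        int best = -1;
        float bestDist = 0.f;
        for (std::size_t c = 0; c < clusters.size(); ++c) {
            float d = euclidean(clusters[c].mean, f);
            if (best < 0 || d < bestDist) { best = (int)c; bestDist = d; }
        }
        if (best < 0 || (bestDist > threshold && clusters.size() < maxClusters)) {
            Cluster nc; nc.mean = f; nc.count = 1;
            clusters.push_back(nc);            // start a new cluster
        } else {
            Cluster& c = clusters[best];       // update the running mean
            for (std::size_t i = 0; i < f.size(); ++i)
                c.mean[i] = (c.mean[i] * c.count + f[i]) / (c.count + 1);
            c.count++;
        }
    }
    return clusters;
}
```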


Free Energy, Prediction and MDP

Posted: March 5, 2013 at 10:31 am

I just finished reading a new Hobson paper (“Waking and dreaming consciousness: Neurobiological and functional considerations”), which is an update on Hobson’s AIM model integrating Friston’s “free energy formulation”. The key points are that we can consider waking perception as a learning process where the difference between what happens and what is expected drives more accurate predictions. REM sleep and waking are contiguous processes, where the lack of external stimulus during REM means there are no prediction errors, which triggers dream experiences. In my reading, Hobson appears to propose that visual images in dreams are the result of the ocular movements themselves (REMs) predicting visual percepts. Hobson proposes that dreams are of functional use because they manifest an optimization process: “one can improve models by removing redundant parameters to optimize prior beliefs. In our context, this corresponds to pruning redundant synaptic connections.” In short, dreaming improves the quality of the predictive model of the world in the absence of sensory information, by pruning. (more…)
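As an aside, the core mechanism (prediction error driving model updates) can be caricatured with a simple delta rule; this is only an illustration of the idea, not the free energy formulation itself and not code from this system:

```cpp
// A caricature of prediction-error-driven learning (a simple delta rule),
// only to illustrate "the difference between what happens and what is
// expected drives more accurate predictions". Values are arbitrary.
#include <cstdio>

int main() {
    float prediction = 0.0f;   // the model's current expectation
    float learningRate = 0.2f; // how strongly errors update the model
    float observations[] = { 1.0f, 0.9f, 1.1f, 1.0f, 0.95f };

    for (float observed : observations) {
        float error = observed - prediction; // prediction error
        prediction += learningRate * error;  // error drives the update
        std::printf("observed %.2f  predicted %.2f  error %.2f\n",
                    observed, prediction, error);
    }
    // With no observations (as in REM sleep, on this account) there is no
    // error signal, so the model runs on its own predictions.
    return 0;
}
```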


Update on Priming and Confidence

Posted: February 21, 2013 at 2:22 pm

I met with Steven Barnes today to talk about the previous post, and through discussion we clarified some of the points of the learning algorithm proposed earlier. Let’s go through an example of three consecutive frames (t=1,2,3) in a perceptual case. For simplicity we do not describe the clustering process here, though the algorithm may have unforeseen consequences in relation to clustering. The basic premise is that priming is predictive, and therefore that future percepts are expected to be in the same context as current percepts. (more…)


Priming, Perception and Prediction

Posted: February 18, 2013 at 5:02 pm

The importance of simulation (prediction) in the dreaming and mind-wandering literature should be integrated into the current conception of DM3. There are two aspects of continuing development: (1) The current propagation of activation for free-association is inherited from previous projects (MAM, DM1 and DM2) and is not well situated in theory. (2) There is no feedback from the world that could serve as a reward to drive intrinsic motivation. (more…)


The Readability of Propagation

Posted: January 16, 2013 at 5:17 pm

I have not posted in some time due to working on a paper for Creativity and Cognition (now submitted), which I will post at some point. The paper attempts to integrate the theories of dreaming and mental imagery that have been in play in this project with creativity and the default mode network (in relation to dreaming and mind-wandering). (more…)


Abstraction and Meaning
(Human vs Machine Creativity)

Posted: November 29, 2012 at 9:24 am

I have said that my interest in art, and my approach to art making, is not in terms of the traditional role of artistic “expression”, but rather that art is used as a methodology to explore “expression”, or more specifically to examine the notion of meaning itself. I have talked about being more interested in “doing” than “representing” and in “exploring” over “expressing”. This results from a dissatisfaction, early in my B.F.A., with contemporary art in which I could not glean the relation between the title and text accompanying a work and the form of the work itself. I found this very frustrating and often considered it a failure of the work. Part of my interest in using computational mechanisms, and scientific knowledge, to build artworks is to formalize and make rigorous the relation between the concept (what the work is supposed to be) and the form (what the work is). (more…)


Levels of Description, Material and Process

Posted: October 15, 2012 at 5:54 pm

Philippe has been asking me to do a “system specification” for quite a long time now, and each time I think I’m doing it, I hear back from Philippe that I have not yet produced a specification. The idea of the specification is that it is the design of the whole system in enough detail to allow implementation. The “specifications” I have been producing up to this point are natural language descriptions of modules and processes, with some mathematical equations in areas where I have a concrete idea of what should be happening. For Philippe, this is not a specification because it lacks detail and natural language is too vague. (more…)


Long Term Memory

Posted: October 4, 2012 at 2:07 pm

Philippe has encouraged me to work through the design process of developing the system through writing a paper for Computational Creativity. After taking a day to write an introduction and a section on theories of dreaming and mental imagery, I started working through the diagram from the PhD proposal keeping in mind new insights regarding long term memory that came out during New Forms and the proposal defence: (more…)


Homoeostasis / Learning

Posted: October 2, 2012 at 9:43 am

If the action of the system is a choice to merge two subsequent images as the same percept, then this could be a root action of the system, not an action in the world, but an action that changes the perception of the world. In order for the system to compare these constructed percepts with external stimulus, a different distance measure (or different threshold of the same measure) would be needed because percepts are already defined to approximate the external stimulus sufficiently. (more…)
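A toy sketch of that two-threshold point (the names and values are invented, not part of the design):

```cpp
// Toy illustration of the point above: one threshold decides whether two
// consecutive images are merged as the same percept, and a different (wider)
// tolerance would be needed to ask whether a constructed percept still
// matches incoming stimulus at all. Names and values are invented.
#include <cmath>

static float featureDistance(const float* a, const float* b, int n) {
    float sum = 0.f;
    for (int i = 0; i < n; ++i) { float d = a[i] - b[i]; sum += d * d; }
    return std::sqrt(sum);
}

// "Action" on perception: treat two consecutive frames as the same percept.
bool mergeAsSamePercept(const float* a, const float* b, int n) {
    const float MERGE_THRESHOLD = 0.1f; // percepts approximate stimulus this closely
    return featureDistance(a, b, n) < MERGE_THRESHOLD;
}

// Comparing a constructed percept against external stimulus needs its own
// tolerance, since anything inside MERGE_THRESHOLD is already "the same".
bool perceptMatchesStimulus(const float* percept, const float* stimulus, int n) {
    const float MATCH_THRESHOLD = 0.5f;
    return featureDistance(percept, stimulus, n) < MATCH_THRESHOLD;
}
```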


Ph.D. Proposal – Passed!

Posted: October 1, 2012 at 6:00 pm

I passed my PhD proposal on September 25th!

PhD Proposal Presentation

(more…)


After New Forms Festival

Posted: September 28, 2012 at 11:05 am

I have not posted in a while due to keeping busy preparing for the New Forms Festival. I collected quite a lot of images (~750,000), enough for a few full day cycles, which will be good to test with later. I spent most of the prep time before the exhibition working on loading percepts saved to disk by the segmentation system, and on generating images in openFrameworks. It was not until I was in the exhibition space for the mini-residency that I started working on a prototype of the dreaming system. I skipped the high level features and just used the low level features for association (mean L, u, v for colour; area for size; X and Y position; and frame number for time). (more…)
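A rough sketch of what association over those low level features could look like; the weights and the brute-force nearest-neighbour search are placeholders, not the code shown at the festival:

```cpp
// Sketch of free-association over the low-level features listed above
// (mean L, u, v; area; x, y; frame number). Weights and search are
// placeholders, not the exhibited code.
#include <vector>
#include <cmath>
#include <cstddef>

struct PerceptFeatures {
    float L, u, v;   // mean colour (CIE Luv)
    float area;      // size
    float x, y;      // position in frame
    float frame;     // time (frame number)
};

// Weighted Euclidean distance; the weights are arbitrary and would need tuning.
static float assocDistance(const PerceptFeatures& a, const PerceptFeatures& b) {
    float d[7] = { a.L - b.L, a.u - b.u, a.v - b.v,
                   a.area - b.area, a.x - b.x, a.y - b.y, a.frame - b.frame };
    float w[7] = { 1.f, 1.f, 1.f, 0.001f, 0.01f, 0.01f, 0.0001f };
    float sum = 0.f;
    for (int i = 0; i < 7; ++i) sum += w[i] * d[i] * d[i];
    return std::sqrt(sum);
}

// Free association: from the current percept, jump to its nearest neighbour
// (excluding itself) in the feature space.
std::size_t associate(const std::vector<PerceptFeatures>& all, std::size_t current) {
    std::size_t best = current;
    float bestDist = -1.f;
    for (std::size_t i = 0; i < all.size(); ++i) {
        if (i == current) continue;
        float d = assocDistance(all[current], all[i]);
        if (bestDist < 0.f || d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```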


Keeping Percept Growth in Check

Posted: August 28, 2012 at 9:48 am

Every 5th frame, the percepts are filtered so that, for each region, only the ones with the most merges and the earliest and latest times are kept. Every 20th frame, regions are merged such that their contained percepts are concatenated, and the result is then filtered. This will work well enough for NFF.
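Roughly, the schedule looks like this (simplified placeholder structures, not the actual implementation):

```cpp
// Rough sketch of the pruning schedule described above. Percept, Region and
// the filter logic are simplified placeholders for the real structures.
#include <vector>

struct Percept { int merges; int frame; /* ...features... */ };

struct Region {
    std::vector<Percept> percepts;

    // Keep only the most-merged percept plus the earliest and latest ones
    // (duplicates are possible in this toy version).
    void filter() {
        if (percepts.size() <= 3) return;
        Percept mostMerged = percepts[0], earliest = percepts[0], latest = percepts[0];
        for (const Percept& p : percepts) {
            if (p.merges > mostMerged.merges) mostMerged = p;
            if (p.frame < earliest.frame) earliest = p;
            if (p.frame > latest.frame) latest = p;
        }
        percepts.clear();
        percepts.push_back(mostMerged);
        percepts.push_back(earliest);
        percepts.push_back(latest);
    }
};

// Called once per frame: filter every 5th frame; every 20th frame, merge
// regions (concatenating their percepts) and filter again. The merge routine
// is passed in, since it lives elsewhere in the system.
void perFrameMaintenance(std::vector<Region>& regions, int frameNumber,
                         void (*mergeSimilarRegions)(std::vector<Region>&)) {
    if (frameNumber % 20 == 0) {
        mergeSimilarRegions(regions);          // concatenates contained percepts
        for (Region& r : regions) r.filter();
    } else if (frameNumber % 5 == 0) {
        for (Region& r : regions) r.filter();
    }
}
```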


Progress towards NFF

Posted: August 24, 2012 at 2:21 pm

The plot above shows the exponential increase in the number of background percepts created by the system (and the corresponding increase in processing time per frame). I thought I could throw some away with a threshold to keep things in check, but even when throwing away all percepts that have been merged fewer times than the mean number of merges, things still get out of control. It seems the only reasonable choice is to put in the density calculation and throw away percepts that are already well represented by a particular location in the feature space. Unfortunately that means figuring out the feature-space distribution stuff. I suppose I’ll start with location in frame and see how well that works.
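One possible shape for the density idea, using only location in the frame as a first pass; the grid size and per-cell capacity are invented values:

```cpp
// One way the density idea could look, using only location in the frame:
// bin percepts into a coarse grid and stop keeping new ones once a cell is
// "full". Not the project code; grid size and capacity are invented.
#include <vector>

struct Percept { float x, y; int merges; };

class LocationDensityFilter {
public:
    LocationDensityFilter(int cols, int rows, int capacity,
                          float frameW, float frameH)
        : cols_(cols), rows_(rows), capacity_(capacity),
          frameW_(frameW), frameH_(frameH),
          counts_(cols * rows, 0) {}

    // Returns true if the percept should be kept, false if that part of the
    // feature space (here: image space) is already well represented.
    bool keep(const Percept& p) {
        int cx = (int)(p.x / frameW_ * cols_);
        int cy = (int)(p.y / frameH_ * rows_);
        if (cx < 0) cx = 0;
        if (cx >= cols_) cx = cols_ - 1;
        if (cy < 0) cy = 0;
        if (cy >= rows_) cy = rows_ - 1;
        int& count = counts_[cy * cols_ + cx];
        if (count >= capacity_) return false;
        ++count;
        return true;
    }

private:
    int cols_, rows_, capacity_;
    float frameW_, frameH_;
    std::vector<int> counts_;
};
```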


Automated Merging of Foreground Percepts

Posted: August 22, 2012 at 6:06 pm

The same set of foreground percepts from the last post is merged by the system (using histogram correlation and some position and aspect-ratio constraints).
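A sketch in this spirit follows; the single-channel hue histogram, the thresholds and the function names are placeholders, not the system’s actual parameters:

```cpp
// Hedged sketch of merging two foreground percepts using histogram
// correlation plus position and aspect-ratio constraints. The hue-only
// histogram and all thresholds are simplifications, not project values.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
#include <cmath>

// Compute a normalized hue histogram for a percept image (BGR input).
static cv::Mat hueHistogram(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    int channels[] = { 0 };
    int histSize[] = { 30 };
    float hueRange[] = { 0.f, 180.f };
    const float* ranges[] = { hueRange };
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);
    return hist;
}

// Decide whether two foreground percepts (image + bounding box) should merge.
bool shouldMerge(const cv::Mat& imgA, const cv::Rect& boxA,
                 const cv::Mat& imgB, const cv::Rect& boxB) {
    // Appearance: histogram correlation (1.0 means identical histograms).
    double corr = cv::compareHist(hueHistogram(imgA), hueHistogram(imgB),
                                  CV_COMP_CORREL);
    // Position: centres must be close (in pixels).
    double dx = (boxA.x + boxA.width * 0.5) - (boxB.x + boxB.width * 0.5);
    double dy = (boxA.y + boxA.height * 0.5) - (boxB.y + boxB.height * 0.5);
    double centreDist = std::sqrt(dx * dx + dy * dy);
    // Shape: aspect ratios must be comparable.
    double ara = (double)boxA.width / boxA.height;
    double arb = (double)boxB.width / boxB.height;
    double aspectDiff = std::fabs(ara - arb) / std::max(ara, arb);

    return corr > 0.9 && centreDist < 30.0 && aspectDiff < 0.2;
}
```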


Foreground Segmentation

Posted: August 20, 2012 at 3:54 pm

After all this effort on background segmentation, here is a first early image of segmented foreground regions. There is no merging in this code; these are just the foreground percepts from multiple frames, shown in the locations where they were captured.


Ph.D. Proposal Submitted

Posted: August 17, 2012 at 4:11 pm

The reason why I have not been logging my progress recently has been my PhD proposal, which I submitted today!

Ben-Bogart-PhD-Proposal


The Return of Segmentation

Posted: July 11, 2012 at 5:20 pm

I have not posted in some time because I have been concentrating on my PhD proposal, which needs to be complete by August 13th. In order to generate some images and a prototype system for the proposal I’ve rethought the architecture to some degree (partially discussed in the previous post, details forthcoming) and am now attempting to implement some of those ideas.

The first key is to use background subtraction to generate background and foreground images. The idea is to segment the background image and use those masks to extract portions of the live image. As background subtraction is a well-known method, it should be feasible and robust. (more…)
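A minimal sketch of that first step, using OpenCV’s MOG2 background subtractor (the 2.4-era API); the camera index and the lack of any parameter tuning are placeholders:

```cpp
// Minimal sketch of splitting each live frame into a background image and a
// foreground mask with OpenCV's MOG2 subtractor (2.4-era API). Camera index
// and parameters are placeholders.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

int main() {
    cv::VideoCapture capture(0);               // live camera
    if (!capture.isOpened()) return 1;

    cv::BackgroundSubtractorMOG2 subtractor;   // default history/threshold
    cv::Mat frame, foregroundMask, background, foreground;

    while (capture.read(frame)) {
        subtractor(frame, foregroundMask);          // update model, get FG mask
        subtractor.getBackgroundImage(background);  // current background estimate

        // Foreground pixels of the live image, for later percept extraction.
        foreground = cv::Mat::zeros(frame.size(), frame.type());
        frame.copyTo(foreground, foregroundMask);

        // The background image would be segmented here, and the resulting
        // masks used to cut regions out of the live frame.
        if (cv::waitKey(1) == 27) break;            // ESC to quit
    }
    return 0;
}
```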


Where am I? (In the process)

Posted: May 30, 2012 at 2:31 pm

After meeting with Philippe we discussed my (lack of) progress on the project. The system design has not really changed, and I’ve spent a lot of time doing what? Hacking around with OpenCV and C++. All the while I’ve been trying to write a paper with Steven explicitly relating dreaming and mental imagery to propose an atypical way of thinking about those processes. Through all this hacking away I’ve been pretty stagnant in regards to theory and implementation. (more…)


Nonmerging test

Posted: April 25, 2012 at 2:58 pm

Here is an image of a run where I disabled the merging code, so this is just an accumulation of patches until they filled up memory (the run stopped once the program had allocated 7.8GB):

This image contains 198,100 patches from 734 frames. The mean processing time is 3.9 seconds per frame, with a max of 5.3 (excluding the first frame, which took 7.365 seconds while the CUDA code gets compiled for the GPU). The new (good) segmentation method based on floodFill() is a bit slow, but 4 s per frame is good enough for now.


Merging Patches into Percepts

Posted: April 25, 2012 at 10:39 am

After spending far too much time on early perception I really need to move on. Above is a somewhat aesthetically interesting reconstruction from patches that could not be merged. I think the current very rough sketchy state of the system is ok for now, but merging is problematic. (more…)


Feature Extraction (between frames for 5 subsequent labelled patches)

Posted: April 13, 2012 at 3:48 pm

Since I’ve been flying by the seat of my pants, I thought I should collect some real data on the feature extraction process before continuing. The only measure I could think of for seeing the distribution of features for a particular patch of pixels was to segment and hand-label patches. I’ve done so for a small set of 5 consecutive frames, and following are the results. (more…)


Segmentation Update

Posted: March 2, 2012 at 2:56 pm

Indeed, the third segmentation method described in the previous post is working well enough for now, and it does appear to be very stable over time. Early indications are that for two perceptually identical frames, one with 183 patches and the other with 192, 183 of the patches will be merged using simply the distance between patch centres. These results are slightly suspect though, as the code used to actually merge percepts has some bugs and requires a rewrite. Still, the top-left (first) patch was very stable across all 20 frames examined. Following is a sample reconstruction. In order to remove edge effects (resulting from morphology), the images are cropped, which seems to have thus far solved the patch-the-size-of-the-whole-frame problem described earlier.
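For reference, a sketch of the centre-distance test (the tolerance and the structures are placeholders, not the buggy merge code mentioned above):

```cpp
// Sketch of the centre-distance test used to decide whether a patch in one
// frame is "the same" patch in the next. Tolerance and structures are
// placeholders.
#include <vector>
#include <utility>
#include <cmath>
#include <cstddef>

struct Patch { float cx, cy; /* centre of the segmented region */ };

// For each patch in frameB, find the nearest patch centre in frameA; if it is
// within tolerance, the pair is counted as mergeable. Returns index pairs.
std::vector<std::pair<std::size_t, std::size_t> >
matchByCentre(const std::vector<Patch>& frameA,
              const std::vector<Patch>& frameB,
              float tolerance) {
    std::vector<std::pair<std::size_t, std::size_t> > matches;
    for (std::size_t b = 0; b < frameB.size(); ++b) {
        std::size_t best = 0;
        float bestDist = -1.f;
        for (std::size_t a = 0; a < frameA.size(); ++a) {
            float dx = frameA[a].cx - frameB[b].cx;
            float dy = frameA[a].cy - frameB[b].cy;
            float d = std::sqrt(dx * dx + dy * dy);
            if (bestDist < 0.f || d < bestDist) { bestDist = d; best = a; }
        }
        if (bestDist >= 0.f && bestDist <= tolerance)
            matches.push_back(std::make_pair(best, b));
    }
    return matches;
}
```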


Naïve Image Segmentation using FloodFill

Posted: February 27, 2012 at 5:30 pm

I’ve been experimenting with using floodFill directly on the morphology output, effectively doing the same thing as my early segmentation approach using mean-shift, but bypassing the mean-shift stage (which was highly computationally intensive). The results are promising:

(more…)
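A minimal sketch of the approach, assuming an 8-bit binary morphology output; the area threshold is a placeholder:

```cpp
// Sketch of floodFill-based segmentation of a binary morphology output
// (8-bit, foreground = 255): flood each unvisited foreground pixel, mark it
// as visited, and keep the bounding rectangle of each filled region.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

std::vector<cv::Rect> segmentByFloodFill(cv::Mat& morphology) {
    std::vector<cv::Rect> regions;
    const int minArea = 50; // ignore tiny noise regions (arbitrary value)

    for (int y = 0; y < morphology.rows; ++y) {
        for (int x = 0; x < morphology.cols; ++x) {
            if (morphology.at<unsigned char>(y, x) != 255) continue;
            cv::Rect bounds;
            // Fill this connected region with 128 so it is not visited again.
            int area = cv::floodFill(morphology, cv::Point(x, y),
                                     cv::Scalar(128), &bounds);
            if (area >= minArea) regions.push_back(bounds);
        }
    }
    return regions;
}
```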


Canny based Segmentation

Posted: February 24, 2012 at 5:00 pm

Following from my previous post on the temporal instability of mean-shift segmentation, I’ve been looking at the feasibility of using an edge detector to do segmentation. A simple test with Canny() showed that the edges are very, very stable over time. So I went ahead and used standard OpenCV methods, finding contours with findContours() and approximating them into polygons. At this point I realized that a familiar problem had returned: the vast majority of segmented regions are not regions at all, but the empty space between regions. Following is an image of the Canny output (with some morphology operations to reduce noise): (more…)
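In code, the pipeline looks roughly like the following sketch (the thresholds and polygon epsilon are placeholders); as noted above, the contours it finds often outline the space between regions rather than the regions themselves:

```cpp
// Rough sketch of the Canny + findContours + approxPolyDP pipeline described
// above. Thresholds, kernel size and epsilon are placeholders.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
#include <cstddef>

std::vector<std::vector<cv::Point> > cannySegments(const cv::Mat& grey) {
    cv::Mat edges;
    cv::Canny(grey, edges, 50, 150);

    // Morphology to close small gaps in the edges and reduce noise.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(edges, edges, cv::MORPH_CLOSE, kernel);

    // findContours may modify its input; edges is only scratch data here.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    // Approximate each contour with a polygon.
    std::vector<std::vector<cv::Point> > polygons(contours.size());
    for (std::size_t i = 0; i < contours.size(); ++i)
        cv::approxPolyDP(contours[i], polygons[i], 3.0, true);
    return polygons;
}
```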


System Architecture

Posted: February 17, 2012 at 4:59 pm

Following is a diagram of the current conception of the system. It is a high-level overview in which many details are omitted. A number of modules have been added since the last diagram; these are filled in blue. After looking more at LIDA it has become clear that it will not be that useful for this system. That being said, there are overlaps between these modules and some of the LIDA modules.

(more…)


Temporal Stability of Current Segmentation

Posted: February 16, 2012 at 4:03 pm

So I had a chance to look at the data I dumped on Monday, which shows the features of patches in relation to frame numbers. The unfortunate realization is that the centre position of the patches is not a good indicator that they should be merged, because the segmentation is very unstable over time. My tolerance for merging patches is currently 3 pixels, but after looking at the data, some patches are as much as 70 pixels off from frame to frame, because the edges are so unstable. I hope this is caused by the mean-shift segmentation (which also causes a huge computational load).

My next step is to see if another method may be more stable. I asked on the OpenCV mailing list, and someone mentioned that task-independent segmentation is inherently problematic, as segmentation in the general case is an open problem. It has been argued that humans can only do it so well because of top-down control processes influencing perception. Since the segmentation does not have to be perfect, I’ll look at other, less perceptual, segmentation methods that may be more stable. One promising method is using an edge detector to bound a floodFill operation. This should be more stable over time, but the regions may be strangely shaped. Better than nothing.


A Dream, Analysis, and Daily Experience.

Posted: February 16, 2012 at 10:26 am

I had a dream last night. I’m writing about it here because it is relevant to the project in that it is an unusual dream for me. Also, after spending so much time reading about dreaming, I found a number of its features quite interesting. I’ll start with the dream itself… (more…)


Open Perceptual Problems: Over Stimulation, Stability Over Time and Task Independence

Posted: February 13, 2012 at 10:56 am

I’m stuck on a couple of problems and wanted to post about them before continuing the work. These problems are highly interrelated and highly relevant to the link between theory and implementation. They are all rooted in a single core problem: the over-stimulation of the system in relation to memory (not activation). (more…)


Segmentation Revisited

Posted: February 10, 2012 at 11:36 am

After spending a week trying to debug a memory corruption error, I found the problem: I was attempting to write outside the bounds of the reconstruction image because a number of percepts were the size of the entire frame. Once I put in a condition to ignore these large percepts (segmentation failures), I got reconstructions that look like this:

(more…)
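A sketch of that guard condition (the 95% cut-off, the names and the clipping logic are placeholders):

```cpp
// Sketch of the guard described above: skip percepts whose region (nearly)
// covers the whole frame, and clamp any remaining writes to the bounds of
// the reconstruction image. Names and the 95% cut-off are placeholders.
#include <opencv2/core/core.hpp>

// Returns true for suspect percepts that are close to the size of the frame.
bool isSegmentationFailure(const cv::Rect& percept, const cv::Size& frame) {
    double coverage = (double)percept.area() / (frame.width * frame.height);
    return coverage > 0.95;
}

// Paste a percept image (assumed to be location.width x location.height)
// into the reconstruction, staying inside its bounds.
void pastePercept(cv::Mat& reconstruction, const cv::Mat& perceptImage,
                  const cv::Rect& location) {
    if (isSegmentationFailure(location, reconstruction.size())) return;
    cv::Rect clipped = location & cv::Rect(0, 0, reconstruction.cols,
                                           reconstruction.rows);
    if (clipped.area() <= 0) return;
    // Copy only the overlapping portion of the percept image.
    cv::Rect source(clipped.x - location.x, clipped.y - location.y,
                    clipped.width, clipped.height);
    perceptImage(source).copyTo(reconstruction(clipped));
}
```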