Meeting with Neurophilosophers

Posted: June 22, 2011 at 10:44 am

I’ve finally had a chance to listen to the recording made during my meeting with Neurophilosophers Lyle Crawford and Simon Pollon, and one of my committee members, Dr. Steven Barnes, and to write some rough notes. The purpose of the meeting was to explore philosophy as an overarching framework to constrain choices that impact artistic, computational and physiological aspects of the project. The discussion was valuable and raised a number of issues I had not considered, in particular a philosophical perspective on the project and issues around conceptual development and meaning. This post is organized into sections that reflect the various themes covered.

What is the “Dreaming Machine”?

The previous projects, Memory Association Machine (MAM), Dreaming Machine #1 and Dreaming Machine #2, use a Self-Organizing Map as their memory field. This is a content-addressable finite grid of memories organized such that similar memories are located near one another. A new image, captured by the camera, activates the most similar image in the map, which sets off a cascade of activations. In “Dreaming Machine #3” (DM3) I want to replace this memory field with a growing network whose structure is rooted in the structure of images captured by the installation.
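
As a rough illustration of how such a memory field works (a sketch only, assuming images have already been reduced to fixed-length feature vectors; the names and sizes are placeholders, not the actual MAM implementation):

import numpy as np

# Sketch of a SOM-style memory field: a finite grid of stored feature vectors.
GRID_W, GRID_H, FEATURE_DIM = 16, 16, 64
memory_field = np.random.rand(GRID_W, GRID_H, FEATURE_DIM)  # codebook of memories

def best_matching_unit(image_features):
    # A new camera image activates the most similar stored memory (content addressing).
    distances = np.linalg.norm(memory_field - image_features, axis=2)
    return np.unravel_index(np.argmin(distances), distances.shape)

def activation_cascade(start, decay=0.5):
    # Activation spreads outward from the winning cell, decaying with grid distance,
    # so nearby (similar) memories are activated most strongly.
    xs, ys = np.meshgrid(np.arange(GRID_W), np.arange(GRID_H), indexing="ij")
    grid_dist = np.abs(xs - start[0]) + np.abs(ys - start[1])
    return np.exp(-decay * grid_dist)  # activation level for every cell in the grid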

The conception of meaning in MAM was very strict and mechanistic. The images in memory are representations that are causally related to (grounded in) the world. The “meaning” of these representations is their relation to previously recorded representations (their position in the memory field, and the content of the neighbouring locations).

The images produced by DM3 are not meant to be immediately intelligible but their reference to the shared visual world becomes apparent after short encounters repeated over a long period of time. The system implements models of sleep and dreaming which are not strictly human. The system is not meant to “experience” the world in a way similar to a human, and therefore its conceptual structure and dreams would be alien.

What is a Dream?

We discussed bottom-up and top-down conceptions of dreaming. According to a bottom-up theory, the “Activation Synthesis Hypothesis”, dreams are the result of the brain attempting to make sense of random activations originating in the brain-stem, and therefore have no function. This theory is associated with Hobson, whose new theory, the “AIM” model, considers dreams as functional, but still randomly initiated.

In an alternative theory (Nir & Tononi), dreams are not the result of random activations but of the activation of mental imagery systems that construct mental images. Evidence supporting this view is that children’s dreaming correlates with their mental imagery ability, not with their linguistic ability. Mental imagery is considered highly related to conceptual development.

The conception of dreaming that is currently in play for this project is a top-down mechanism where dreams are mental images executed by high-level conceptual structures. If these activations are not random, where do they originate? The proposal is that latent activation of the brain (due to waking experience) primes certain structures that serve as a starting point for dream activation. This is consistent with studies that show a strong correlation between waking experience before sleep and dream content. If we accept that dreams are correlated with mental imagery and not linguistic ability, then the question arises: are the kinds of concepts used in mental imagery functionally different from the concepts used in language?
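
As a hedged sketch of how such priming might work computationally (the decay constant, data structure and function names are my own assumptions):

# Waking experience leaves residual ("latent") activation on units; a dream
# episode is initiated from the most primed unit rather than from random noise.
latent_activation = {}  # unit id -> residual activation accumulated while awake

def perceive(unit_id, strength=1.0, decay=0.95):
    # Perception boosts one unit; everything else slowly decays toward zero.
    for unit in latent_activation:
        latent_activation[unit] *= decay
    latent_activation[unit_id] = latent_activation.get(unit_id, 0.0) + strength

def dream_seed():
    # The dream starts from the structure most primed by recent waking experience.
    return max(latent_activation, key=latent_activation.get) if latent_activation else None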

We discussed possible functions of dreaming. For Hobson, the random activations of the brain result in a “virtual reality” which would foster the development of a self. Crick and Mitchison proposed in 1983 that dreams could function to “prune” spurious connections from a growing neural network. This is analogous to dreams as a process of “garbage collection” where unused information is discarded. One of the arguments in support of a function for dream sleep is that the metabolic demands of REM sleep rival those of waking. Such a commitment of energy does not align with purely recuperative models and implies an important function for REM sleep.

System Design

The sketch for the design of DM3 posted previously was discussed. The basic design is that visual prototypes (clusters of input stimuli based on features such as histograms, Haar-like features, etc.) are formed by vector quantization. These prototypes are associated with neuron-like units that interconnect in proportion to the similarity of the prototypes. A new prototype is generated when no existing prototype sufficiently encapsulates the features of a stimulus. These neuron-like units are not structured hierarchically, but hierarchy is implicit in the structure of the connections between them. A central question is whether a concept “that makes any sense” (Simon) can result from object-centric features. As in the system proposed for IAT 888, object and spatial features are considered, but it’s unclear how these would be linked in one representation. The problem is analogous to multi-modal integration. It has been argued that the movement and location of objects in space could be the foundation of abstract concepts that are not object-centric, such as enclosure.
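
A rough sketch of this prototype layer, assuming fixed-length feature vectors (the threshold, similarity measure and names are placeholders, not a finished design):

import numpy as np

NEW_PROTOTYPE_THRESHOLD = 0.4  # distance beyond which no prototype "sufficiently encapsulates" a stimulus

prototypes = []    # feature vectors of the neuron-like units
connections = {}   # (i, j) -> weight proportional to the similarity of prototypes i and j

def similarity(a, b):
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def present_stimulus(features):
    features = np.asarray(features, dtype=float)
    # Find the closest existing prototype.
    if prototypes:
        dists = [np.linalg.norm(features - p) for p in prototypes]
        winner = int(np.argmin(dists))
        if dists[winner] < NEW_PROTOTYPE_THRESHOLD:
            return winner  # an existing prototype encapsulates this stimulus
    # Otherwise grow the network: add a new unit and connect it to the others
    # with weights proportional to similarity.
    prototypes.append(features)
    new_idx = len(prototypes) - 1
    for i in range(new_idx):
        connections[(i, new_idx)] = similarity(prototypes[i], features)
    return new_idx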

The system has a sleep model that regulates the circadian rhythms of the machine. The idea is for this cycle to be entrained by ambient light levels and/or the degree of activation of the neuron-like network. REM and slow-wave sleep have very different qualities, both in terms of the kinds of dreams that occur in those states and their corresponding patterns of activation. It is unclear whether the neuron-like network should exhibit correlates of these states, perhaps showing varying degrees of neural synchrony.
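
One possible shape for such a regulator, as a sketch only (the thresholds, rates and entrainment rule are my own assumptions):

class SleepModel:
    # Homeostatic sleep pressure entrained by ambient light and network activity.
    def __init__(self):
        self.pressure = 0.0
        self.asleep = False

    def step(self, ambient_light, network_activation):
        # Pressure builds while awake and active, and dissipates while asleep.
        if self.asleep:
            self.pressure = max(0.0, self.pressure - 0.05)
            if self.pressure < 0.1 and ambient_light > 0.5:
                self.asleep = False  # wake when rested and the space is bright
        else:
            self.pressure += 0.01 + 0.02 * network_activation
            if self.pressure > 1.0 or ambient_light < 0.2:
                self.asleep = True  # entrained by darkness or by accumulated pressure
        return self.asleep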

We discussed the possibility of feedback, where the results of the dream are fed back into the system as a new perceptual input. This has interesting implications for the relation between mental imagery and perceptual input. It does appear that mental images take over perceptual attention during mental imagination. One idea is to have activations due to perception and activations due to imagination compete; that is, to have feedback in terms of activation, not in terms of prototype generation. Perhaps some method of feedback could be used to allow weak connections to be pruned.
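
One way to express that competition, as a sketch (assuming each unit receives both a perceptual and an imagined activation value):

import numpy as np

def combined_activation(perceptual, imagined):
    # Per unit, whichever source is stronger wins, so dream imagery can take over
    # attention without ever generating new prototypes.
    perceptual = np.asarray(perceptual, dtype=float)
    imagined = np.asarray(imagined, dtype=float)
    return np.where(imagined > perceptual, imagined, perceptual)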

Would components of the dreaming machine be mapped to various brain regions? Not arguing for such a mapping would allow significantly more freedom in the design, and is consistent with the expectation that the system is alien. Non-REM sleep is correlated with recuperative aspects of sleeping, but dreams also occur in these stages. It’s not clear how a recuperative sleep cycle would function in a machine.

Pruning could be enabled by lowering connection strengths over the whole network. This would destroy all but the strongest connections, which could be brought back up to higher levels through a feedback process.
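
A minimal sketch of this kind of pruning (the decay rate and floor are placeholders), operating on a connection dictionary like the one sketched earlier:

def prune(connections, decay=0.8, floor=0.05):
    # Lower strengths across the whole network; only the strongest connections survive.
    survivors = {}
    for edge, weight in connections.items():
        weight *= decay
        if weight >= floor:
            survivors[edge] = weight
    return survivors

def reinforce(connections, edge, amount=0.2, ceiling=1.0):
    # A feedback process (e.g. re-activation during dreaming) brings a surviving
    # connection back up toward higher strength.
    if edge in connections:
        connections[edge] = min(ceiling, connections[edge] + amount)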

What is a concept?

The nature of concepts is certainly an open area. For this project, and at this point in time, concepts are considered abstract entities that relate multiple atoms in the world. Connections between perceptual prototypes, based on their features, are the atoms that compose concepts. They are not associated with particular neurons, but are patterns of activation whose size, and perhaps shape, change the degree of abstraction. For example, the concept “red” is manifest as a link between objects that have “red” in them, to a degree proportional to the amount of red. A small activation function of a red object (constrained to the feature of redness) activates the next object with the most red. A larger activation function would activate other objects with less red, perhaps even orange. An extremely large activation function would activate all perceptual prototypes, as they all contain redness to some extent, even if to a minuscule degree. An open question is the shape of the activation: in which directions should the activation propagate? In terms of colour, shape or temporal occurrence? Concepts like “justice” would not be possible, but it’s unclear how such concepts could ever be learned.
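
The “red” example could be sketched roughly like this (the Gaussian form and single-feature layout are assumptions; the real features would be multidimensional):

import numpy as np

def activate_by_redness(source_redness, all_redness, width):
    # A narrow activation width only reaches the prototypes closest in redness;
    # a very wide width activates nearly everything, however faintly red.
    all_redness = np.asarray(all_redness, dtype=float)
    return np.exp(-((all_redness - source_redness) ** 2) / (2.0 * width ** 2))

# e.g. activate_by_redness(0.9, [0.95, 0.6, 0.1], width=0.1) strongly activates
# only the first prototype, while width=1.0 activates all three to some degree.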

What is meaning?

Meaning is an extremely broad notion. Other related words are often used; both the philosophers and the developmentalist have asked me: when you say meaning, do you mean significance? Meaning appears to be socially constructed and somehow particular to humans. It is thought that animals may have a sense of significance, but not meaning in the way we think of it. A weak sense of meaning is the notion of “representational content”: the meaning of a representation is what it refers to (independent of how that reference is formed).

As this work began with a notion of meaning from a developmental perspective (see this post for more information), that is where the discussion began. It was clear to the philosophers that my conception of embodiment, the ability of the machine to affect the world through its screens and sense the world through its camera, did not match the requirements of embodied cognition. Under this hypothesis it is claimed that much of cognition happens physically, not in the mind alone but through the intentional manipulation of the world. As DM3 will not have this ability, conceptual generation from an embodied cognition perspective may not be appropriate. Indeed, it seems that for a machine to learn like a child, it would have to be attended to like a child. I don’t think this is appropriate for this artwork.

DM3 has no homunculus: there is no place where consciousness or will is to be implemented. The representations in the system straddle two notions of meaning. (1) For the viewer, the components of the machine’s dreams are meant to be recognized, to some degree, as aspects of the shared environment. These objects then have meaning for the viewer. Additionally, the ability of the human mind to make sense of complexity has the potential to allow the abstract dreams of the machine to gain significance. (2) For the machine, meaning is a causal and simplistic affair. The image of an object from the camera is only an abstract symbol that is situated in relation to other symbols in the system.

One of the major issues with constructivist development, for this project, is the notion of differentiated vs undifferentiated signifiers. In my interpretation, undifferentiated signifiers are the case where the world acts directly on the organism: it is totally reactive, and a particular stimulus will elicit a particular response. Through development, signifiers become differentiated. Once differentiated, the infant can initiate a response without the stimulus. I never quite understood how this could work, as the self of the infant is supposed to develop in parallel. For this project, the expectation is that the growing neuron-like network becomes complex enough to support auto-activation.

According to the empiricist position, conceptual structures result from raw sense data in the world and there are few or no innate concepts. On the nativist side, innate structures are used to restructure the world and transform sense data and conceptual structures. The Dreaming Machine certainly has innate mechanisms: those that I implement computationally. I find the nativist position suspect. What are the foundations of innate mechanisms but other innate mechanisms? Such an argument seems to imply an infinite regress. What is the generator for origination? Is origination itself a flawed concept, and is all that exists something that has already been and will always be (infinite and cyclic), just recombined and restructured? There is also the discussion of randomness as an originator, but I don’t believe randomness exists; it is simply structure that is not yet understood, and is therefore a transformation of that which already exists. This is the very center of the artistic enquiry: can a mechanistic process originate through an interface with a complex and existing physical world?

I have this notion that the mind/brain is a massive correlation machine: the significance of the world becomes meaningful through its relation to other things in the world (remembered or not). Each atom of meaning, whatever that is, is linked to billions of other atoms, in thousands of dimensions. We have no computational tools that match the parallelism and multidimensionality of links in the brain/mind, so it’s hard to say what such a system allows. Free will and consciousness are not easily placed in the framework of brain as correlation machine, if they exist at all.

We had a few discussions about fidelity: what kinds of sensory data are needed? Multimodal? Spatial and object-centric features? The question becomes: if the mind/brain, or at least the conceptual system, is a massive correlation machine, then what is the minimal complexity for a rich conceptual structure to develop? Do humans, and animals, have rich representational systems because of our mobile and multi-modal experience of the world? Or would a representational system grow even from an extremely limited number of sensory dimensions?

The immobility of the system makes a strict adherence to theories of development problematic. Those theories depend on assumptions that are not transferable to machines, like biologically rooted desires. The interest in origination continues to make these theories inspirational, but a faithful implementation is inappropriate.

Learning

One notion of learning is that the behaviour of the organism, in response to a stimulus, changes over time. In high-level learning this results in the ability to refine a skill to accomplish a task. The simplest possible form of learning is habituation, where a stimulus comes to be ignored. This seems to be the root of attention, and is how tests on infants are often done (infants tend to look longer at objects to which they have not been habituated, and it is assumed these objects lack the conceptual, or protoconceptual, background of habituated objects). In this system the vector quantization method and distance function give the system a measure of the newness of a stimulus: the more similar it is to an existing (known) perceptual schema, the more habituated the system is to that pattern.
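
A minimal sketch of that newness measure (the distance function and scaling are assumptions):

import numpy as np

def newness(stimulus, prototypes):
    # Distance to the closest known perceptual schema, squashed into [0, 1).
    if not prototypes:
        return 1.0  # everything is new to an empty system
    stimulus = np.asarray(stimulus, dtype=float)
    d = min(np.linalg.norm(stimulus - np.asarray(p, dtype=float)) for p in prototypes)
    return d / (1.0 + d)

def habituation(stimulus, prototypes):
    # The more similar a stimulus is to an existing prototype, the more habituated
    # the system is to that pattern.
    return 1.0 - newness(stimulus, prototypes)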

If habituation is the root of learning, then could the perceptual system be considered a learning system? From the start it was my intention to use standard computer vision methods to provide visual data for the prototypes, in order to concentrate on the other aspects of the project. It could be an option to use a custom computer vision method, but that would take a lot of time and effort, and perhaps not even work.

For the empiricist, you see red objects and those eventually develop into a concept of redness. According to the correlation notion described above, many stimuli are presented to the system and labeled red (by a caregiver, for example). The “red” label correlates all these objects together, and the only dimension on which they can be correlated, in terms of object features, is redness. The “red” label is then associated with a dimension of the correlation, which is arbitrary. The label “blue” could be used instead, but if red is the feature that links all the stimuli, then the concept of red would still be generated; it would just be called “blue”.
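
Purely as an illustration of this correlation idea (not a claim about how labelling actually works), the shared dimension can be read off as the one on which the labelled stimuli agree:

import numpy as np

def correlated_dimension(labelled_stimuli):
    # Rows are stimuli that received the same label; columns are feature dimensions.
    # The dimension with the least variance is the one the stimuli have in common,
    # and is the one the label ends up attached to, whatever word is used.
    X = np.asarray(labelled_stimuli, dtype=float)
    return int(np.argmin(X.var(axis=0)))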

The nativist would say: in order to recognize redness, don’t you require an innate sense of redness that already implies a concept of redness? I think a distinction needs to be made between the sensory reality of redness (continuous, unbounded, contextual) and the generated concept of redness (context-independent, bounded). In terms of sensory data, red and green may be connected by a continuous gradient, but that space is carved conceptually into red and green. I think the nativist argument depends on a conflation of sense-redness and conceptual redness. The sensory ability to see red is innate, and the correlation system is innate, but the concept of “redness” is emergent.

What is the nature of the screen?

I had never given this much thought. It seemed clear to me that this is an artwork, manifested as a process, and its output is displayed on a screen. The problem is that minds/brains don’t have direct output. Cognitive processes are manifest in invisible patterns of activation and realized in the behaviour of the organism. The screen is currently considered a window: it directly shows the cognitive dream process. If a human mind had such a window, we would not understand it, because the patterns of activation are not objectively meaningful; they are experienced by the subjective consciousness, and only have meaning for that consciousness. DM3 has no consciousness; there is no internal observer for its cognitive processes. What should the screen be? Should it show what the machine perceives, thinks, imagines?

If the system is a cohesive entity, then it would not have a screen; there should be no output, just the internal machinations of its own experience of that dream. The machine has no awareness, so the dreams are not meaningful for it. Even in its internal, bounded sense of meaning, the meaning of those dream components is the dream itself: a representation that directly manifests the relation between components in the conceptual structure. Does that make the viewer a homunculus? Is the screen a light used to present the conscious mind with the results of unconscious cognitive processes? How do these processes relate to the viewer? Can we consider them part of a mind, even if they are independent of consciousness? This highlights the very problem of the homunculus: its causal distance from cognitive processes makes consciousness, experience, and free will illusions.

What the viewer sees on the screen is not the same as what the machine would experience if it were conscious. This is because those visual representations are designed to resemble the world and be meaningful for the viewer. The machine itself would have no use for such resemblance-based representations.

What if the machine has no output at all? What if the screen is a component of the dream process that is exposed to the viewer? For example, the image of the screen could be fed back into the system as a new perception. This would certainly increase the complexity of representations in the system, but it’s unclear how to make this feedback integral. If it’s not integral, it may just as well be an output.

I have said before that I’m more interested in the process than the output; the output is simply an entry point for the viewer. If the process is so important, why show the output at all? Why not show the machine itself? Or visualize the patterns of electrical activity in the machine? I think it would be too abstract to show brain imaging of the machine (internal electrical patterns). The screen output will always be deceptively concrete, but how else is the viewer to enter the work? Perhaps considering the viewer as homunculus is best, as it also highlights the problems with a causal disconnection between the self and the brain. We do have a cultural repertoire of representations of dreams, and showing the machine’s dreams would tap into that. The image is for the viewer after all, so it should be penetrable by the viewer.

Waking vs Dreaming

During waking, the machine activates perceptual prototypes and builds a conceptual structure. What the screen would show is these prototypes, presented in relation to their activation in the system. During sleep, the screen would show prototypes in a way that reflects their latent activation in the dream state. Could the viewer tell the difference between the dream and the perceived reality? Presumably the non-dream would be more tightly bound to the here-and-now world that is shared with the viewer. If this screen is a portal into the mind of the machine, then is there a difference between reality and dreaming? With the exception of lucid dreaming, don’t we think of our dream experiences as reality? Would someone looking into our minds be able to tell the difference between waking and dreaming? This raises the question of the relationship between dreaming and hallucination. If both are enabled by mental imagery, then they should share some character. Perhaps if the machine is kept awake by too much stimulation it begins to hallucinate while awake. I’ll end this post with a quote from Antti Revonsuo (from the BBC documentary “Why Do We Dream?”):

“We are dreaming all the time, it’s just that our dreams are shaped by our perceptions when awake, and therefore constrained.”