
Global Workspace Theory, Free Will and The Location of Mental Images.

Posted: November 11, 2011 at 12:32 pm

I’m continuing to listen to some of Franklin’s lectures about LIDA and cognitive modelling in general. Yesterday I got through the one explaining Global Workspace Theory (GWT). Little did I know that I had already come across this theory during my Master’s research, and discounted it due to its causal disconnection between consciousness and cognitive processes. While I continue to find this problematic as a model of sentient creatures, the notion of “functional consciousness” is certainly interesting and useful in the case of machines and AI.

According to GWT the mind is a “theatre” of sorts. The stage is a workspace (working memory), populated by players (cognitive processes). The “spotlight” illuminates players, which become the content of consciousness. Backstage are “contexts” (coalitions of players / cognitive processes) which are not accessible to consciousness. I’m still not quite sure what a coalition of processes technically means. The “audience” is a collection of other unconscious processes that are able to receive messages from players on the stage. One key point (in my incomplete interpretation) is that players elicit the attention of the spotlight. The spotlight is not voluntarily moved (which solves the homunculus problem).
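To make sure I understand the metaphor, here is a minimal sketch of how I currently read it: coalitions bid for the spotlight with activation, and whatever wins is broadcast to the audience of unconscious processes. This is my own toy interpretation in Python, not Franklin’s LIDA code, and all the names (Coalition, GlobalWorkspace, cycle) are just my placeholders.

```python
from dataclasses import dataclass

@dataclass
class Coalition:
    """A coalition of cognitive processes (the 'players')."""
    name: str
    activation: float  # how strongly this coalition bids for the spotlight

class GlobalWorkspace:
    """Toy global workspace: coalitions compete, the winner is broadcast."""
    def __init__(self):
        self.coalitions = []  # candidate players / backstage contexts
        self.audience = []    # unconscious processes receiving broadcasts

    def add_coalition(self, coalition):
        self.coalitions.append(coalition)

    def subscribe(self, process):
        """An audience member is any callable that accepts the broadcast."""
        self.audience.append(process)

    def cycle(self):
        """One cycle: the most active coalition wins the spotlight and its
        content is broadcast to every audience process. The spotlight is not
        voluntarily moved; it simply follows activation."""
        if not self.coalitions:
            return None
        winner = max(self.coalitions, key=lambda c: c.activation)
        for process in self.audience:
            process(winner)           # the broadcast: content of consciousness
        winner.activation *= 0.25     # crude habituation so attention can move
        return winner

ws = GlobalWorkspace()
ws.add_coalition(Coalition("percept:red-square", 0.9))
ws.add_coalition(Coalition("memory:yesterday's-lecture", 0.4))
ws.subscribe(lambda c: print("broadcast received:", c.name))
ws.cycle()  # broadcast received: percept:red-square
ws.cycle()  # broadcast received: memory:yesterday's-lecture (first has decayed)
```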

The theory does include some notion of voluntary attention, but that seems to be a function of meta-cognition. Perhaps I’m wrong about this causal disconnection, but this view is not rare. At the core of this causal disconnection is the notion of free will. There seem to be three options: (1) choices we ‘think’ we make are deterministic, resulting from a set of operations outside of our control, (2) choices are the result of randomness (quantum noise, perhaps), and finally (3) choices are ours, caused by consciousness exerting a force on the world.

If the world is totally deterministic (an identical cause leads to an identical effect) then #1 makes sense, #2 acts outside of determinism, and #3 is “incoherent” (according to Aaron Sloman). If we accept determinism then every “choice” every person has ever made was determined at the moment of the big bang. Even randomness is subject to this determinism, with the same “random” numbers occurring in the same sequence for the same initial conditions. Even if randomness were not deterministic, and always resulted in a different outcome for the same conditions, that would hardly qualify as “free will”. Sloman considers “free will” a non-issue, as all choices are constrained by determinism. He considers a continuum of degrees of will, where higher degrees appear to be due to more complex causal relations within a system, like planning and deliberation, resulting in actions.
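The point about the same “random” numbers occurring in the same sequence for the same initial conditions is easy to demonstrate with any pseudo-random number generator; a quick Python sketch:

```python
import random

# Two generators given the same initial conditions (the seed) produce
# exactly the same "random" sequence: the randomness is deterministic.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.random() for _ in range(5)]
seq_b = [gen_b.random() for _ in range(5)]

print(seq_a == seq_b)  # True: identical cause, identical effect
```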

So if the world is deterministic, and free will is an illusion, then it’s natural that “consciousness” would be functional, and unable to impact the world beyond what is caused by those processes that impact it.

Franklin does discuss mental images in the context of GWT. Mental images are the state of the global workspace, as it is available to consciousness. This appears to align with Pylyshyn’s rejection of mental images. For him, mental images are an experience of symbolic thinking where those symbols correspond to visual stimuli. Another view is Kosslyn’s, who proposes that mental images are decoded from long-term memory into the early visual cortex. There have been a number of interesting fMRI studies (take this recent example) where blood flow in the visual cortex is mapped to visual stimuli, such that activity in the visual cortex can be used to predict the corresponding perceptual image. These studies do not yet address mental images, but dreams, mental images and hallucinations are described by the authors as possible future work.
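I find it helps to think of these reconstruction studies, very loosely, as learning a mapping from voxel activity back to image space. The sketch below is a deliberately toy version using synthetic data and a plain ridge regression; the real studies use far richer encoding models, so treat the setup and variable names as my own assumptions rather than a description of their method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "stimulus images" (8x8 pixels, flattened) and the
# voxel responses they evoke through some unknown linear-ish mapping + noise.
n_trials, n_pixels, n_voxels = 200, 64, 100
images = rng.random((n_trials, n_pixels))
true_mapping = rng.normal(size=(n_pixels, n_voxels))
voxels = images @ true_mapping + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Fit a decoder: predict the image from the voxel activity.
decoder = Ridge(alpha=1.0)
decoder.fit(voxels[:150], images[:150])          # training trials

reconstructed = decoder.predict(voxels[150:])    # held-out trials
correlation = np.corrcoef(reconstructed.ravel(), images[150:].ravel())[0, 1]
print(f"pixel-wise correlation on held-out trials: {correlation:.2f}")
```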

So if I am to use LIDA faithfully, this rejection of mental images is problematic. The purpose of this project is to make a machine dream, but more specifically it is expected to construct dream images that are perceived by the audience. If we accept the link between mental images and dreams, and that mental images are in the visual cortex, then the dreaming machine should produce actual images, not just abstract representations of the internal activations of symbolic notions. As the neurophilosophers have pointed out (discussed here), the presentation of machine dreams for an audience is philosophically problematic. A machine that does not produce images at all, but “dreams” only through its internal machinations, with no consciousness to experience them, is both philosophically and artistically problematic.

So let’s assume the LIDA architecture can tolerate the construction of visual images in the “visual buffer”, as pictured in the previous post. This raises the question: what is the relation between perceptual and imagined images? It’s clear enough in the case of dreaming, as mental images in the absence of external stimulus, but what of hallucination? What happens when imagined and perceptual images conflict? Does the high degree of stimulation from perceptual experiences drown out the ongoing generation of dreams (according to Revonsuo)? Try a little experiment yourself. Can you imagine a blue square superimposed upon your current perceptual experience? I seem to be able to attend to only one or the other; they appear to conflict with one another, implying that they are written into the same buffer. If I can manage the illusion, I realize I’m actually imagining a copy of the perceptual image along with the square. As soon as something in the visual perception moves or changes, the square disappears. It seems that the degree of stimulation from perceptual experiences can drown out imaginary content. Would this be the case if Pylyshyn is correct?
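If I try to express that intuition of a single shared buffer in code, it might look something like the toy sketch below, where perceptual input and imagined content are written into the same array and the stronger activation simply overwrites the weaker. This is purely my own speculation about the architecture, not anything taken from LIDA.

```python
import numpy as np

class VisualBuffer:
    """Toy shared buffer: perception and imagination write to the same array,
    and whichever source has the higher activation wins at each location."""
    def __init__(self, shape=(8, 8)):
        self.content = np.zeros(shape)
        self.activation = np.zeros(shape)

    def write(self, pattern, activation):
        """Write a pattern only where its activation exceeds what is there."""
        mask = activation > self.activation
        self.content[mask] = pattern[mask]
        self.activation[mask] = activation[mask]

buffer = VisualBuffer()

# Imagined blue square: weak, voluntary activation in the centre of the buffer.
imagined = np.zeros((8, 8))
imagined[2:6, 2:6] = 1.0
imagined_act = np.full((8, 8), 0.3)

# Perceptual scene: strong, stimulus-driven activation everywhere.
percept = np.full((8, 8), 0.5)
percept_act = np.full((8, 8), 0.9)

buffer.write(imagined, imagined_act)  # the square appears...
buffer.write(percept, percept_act)    # ...and is drowned out by perception
print(np.allclose(buffer.content, percept))  # True: the square is gone
```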

If we return to LIDA, and assume dreams and mental images happen in the global workspace (the results of perceptual and memory structures being promoted into consciousness), then the notion of dream as free association breaks down. This is because the global workspace does not directly interact with memory structures. In order for dreams to be associations, they would have to occur in the workspaces that interact with episodic memory. Such a dream would have no form; it would only be a set of activated memory components.
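A crude way of seeing why such a dream “would have no form”: spreading activation over episodic memory yields only a set of activated components, nothing image-like that could be presented to an audience. The graph structure and the free_associate function below are hypothetical, a sketch of the idea rather than LIDA’s actual mechanism.

```python
# Toy spreading activation over an episodic "memory graph" (my own assumed
# structure, not LIDA's implementation). The result is only a set of
# activated nodes: there is no image-like form here.
memory_graph = {
    "beach": {"sand": 0.8, "ocean": 0.9, "childhood": 0.4},
    "ocean": {"blue": 0.7, "fear": 0.3},
    "childhood": {"house": 0.6, "dog": 0.5},
}

def free_associate(seed, steps=2, threshold=0.15):
    """Spread activation outward from a seed memory component."""
    activation = {seed: 1.0}
    frontier = {seed}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbour, weight in memory_graph.get(node, {}).items():
                spread = activation[node] * weight
                if spread > activation.get(neighbour, 0.0) and spread > threshold:
                    activation[neighbour] = spread
                    next_frontier.add(neighbour)
        frontier = next_frontier
    return activation

print(free_associate("beach"))
# -> a formless set of activated components, roughly:
# {'beach': 1.0, 'sand': 0.8, 'ocean': 0.9, 'childhood': 0.4,
#  'blue': 0.63, 'fear': 0.27, 'house': 0.24, 'dog': 0.2}
```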