Attention, Representation & Embodiment

Posted: May 26, 2010 at 12:20 pm

As I’m reading on mental imagery and mental representation, I’m torn between the Constructivist and Empiricist positions on development, as described by Muller, Sokol and Overton. On one hand there is the idea that mental representation is consciously constructed in relation to the world in an embodied fashion. The cornerstone of this position is that representation exists to serve the intentionally directed will of the agent. On the other hand there is the notion that the relationship between mental representation and the world is a causal one: the world impresses itself on the agent, who passively receives signals that are transformed into representations.

The former requires a proactive agent to make the link between the world and representation, which is obviously tricky for AI and computational implementation. The latter has no such requirement: the agent is simply impacted upon by the world, and has no intention of its own.

MAM and Dreaming Machines #1 and #2 all take on the latter notion: they are impacted by the world, and are without conscious intention. At the same time, the SOMs organize the incoming data based on the representation of items in the world. Where a memory is located on a map is a choice, but a choice implemented by the same code; every stimulus is treated exactly the same.
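To make the "same code, same treatment" point concrete, here is a minimal sketch of a single SOM training step. The function name `som_update` and its parameters are hypothetical (my projects' actual implementations differ), but the shape of the rule is standard: find the best matching unit, then pull the BMU and its neighbours toward the stimulus. Every stimulus passes through this identical, passive procedure.

```python
import numpy as np

def som_update(som, stimulus, learning_rate=0.1, sigma=1.0):
    """One deterministic SOM step (illustrative sketch).

    som      : (rows, cols, dims) array of prototype vectors
    stimulus : (dims,) input vector
    Every stimulus goes through the identical BMU search and
    neighbourhood update; nothing about the rule depends on intention.
    """
    # Find the best matching unit (closest prototype).
    dist = np.linalg.norm(som - stimulus, axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)

    # Gaussian neighbourhood on the map grid around the BMU.
    rows, cols = np.indices(dist.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))

    # Pull prototypes toward the stimulus, weighted by neighbourhood.
    som += learning_rate * h[..., None] * (stimulus - som)
    return som, bmu
```

Where a memory ends up on the map falls out of this rule alone; the "choice" is entirely the code's.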

Let’s take the example of the current perception-synthesis system. The world causally impacts itself on the sensory apparatus of the system. These impressions are registered as images, which are organized into a series of visual prototypes. What if the method of generating the prototypes could be changed by the system itself? Just as we can attend to certain features of the world, the system could change how representations are formed, based on some evaluation of the representations. This could be as simple as changing the method of choosing a BMU to use different features and distance measures.

The obvious question is what motivates these choices. Some analysis of the prototypes? Could the presence of the viewer be used? Perhaps the longer the viewer sits, the more successful the prototypes are considered. The system would be unable to tell why the viewer has stayed longer: it could be a prototype, but which one? It could be the context, or any other external property of the artwork. A rather weak idea is to give the viewer the choice to press one of two buttons, one when the viewer recognizes the image from the content, another if not. Another idea is to count the number of nearby faces in the scene as a measure of social attention. This is interesting in that the involvement of the viewer would change how the system makes sense of the world, making it dependent on a minimal social situation.
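The "attention" idea above could be sketched as a BMU search where the distance measure and the attended features are exposed as parameters rather than fixed in the code. This is a hypothetical interface (the names `best_matching_unit`, `metric` and `feature_mask` are mine, not from any existing implementation), but it shows the minimal hook the system would need in order to change how its own representations form:

```python
import numpy as np

def best_matching_unit(som, stimulus, metric="euclidean", feature_mask=None):
    """Find the BMU, with the distance measure and the attended
    features exposed as tunable parameters (hypothetical interface).

    som          : (rows, cols, dims) array of prototype vectors
    stimulus     : (dims,) input vector
    metric       : which distance measure to use
    feature_mask : (dims,) weights; zeroing a feature ignores it,
                   which is a crude stand-in for attention
    """
    if feature_mask is not None:
        stimulus = stimulus * feature_mask
        som = som * feature_mask

    diff = som - stimulus
    if metric == "euclidean":
        dist = np.sqrt((diff ** 2).sum(axis=-1))
    elif metric == "manhattan":
        dist = np.abs(diff).sum(axis=-1)
    else:
        raise ValueError(f"unknown metric: {metric}")

    return np.unravel_index(dist.argmin(), dist.shape)
```

Whatever evaluation signal is used (dwell time, button presses, face counts), it would feed back only by adjusting `metric` or `feature_mask`; the causal, passive core of the system stays untouched.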

In the constructivist approach the world is impacted upon by the agent; the world then becomes highly coupled to the internal representations of the agent. The thought of the cup is directly connected to the cup itself. The agent’s representations transform as the agent transforms the world. If DM3 continues on the line of “embodiment” from MAM, where it is impacted by the world but cannot impact the world, then the system is embodied not in the physical world but in a world of representations, the only things the system can manipulate. The machine lives in its own world, one that is impacted upon by the physical context but is also independent of it. Of course, it is not quite true that the system does not impact the world: it has a mass, it takes up space, and it has a visual presence that can be changed by the system. This image has the ability to change the world by altering the behaviour of viewers.