Can a LIDA imagine its own behaviour?

Posted: November 4, 2011 at 5:02 pm

In attempting to think through a possible integration of LIDA and the current conception of the dreaming system, I’ve stumbled upon the above question. In the previous sketch, the content of memory (the current state of the “conceptual system”) is used to generate visual sensory data that becomes the dream content. In LIDA, it is somewhat unclear from which module(s) the content of the dream would arise. Clearly it would involve long-term memory structures, such as those in episodic memory, which become explicitly manifest only in the workspace and are then “broadcast” into the global workspace. So where does the dream occur: in the workspace or in the global workspace? Since the agent is conscious only of what is in the global workspace, the latter appears to be the logical location.
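To make the question concrete, here is a toy sketch of the flow I have in mind. This is not the actual LIDA framework; every class and name below is my own invention:

```python
# Toy sketch (not the LIDA framework): where recalled content would
# "live" during a dream. All names here are my own invention.

class EpisodicMemory:
    def __init__(self, episodes):
        self.episodes = episodes  # long-term store of feature sets

    def cue(self, probe):
        # local association: return episodes sharing any feature with the probe
        return [e for e in self.episodes if probe & e]

class Workspace:
    """Preconscious buffer: holds cued structures before broadcast."""
    def __init__(self):
        self.contents = []

    def receive(self, structures):
        self.contents.extend(structures)

class GlobalWorkspace:
    """Only broadcast content is 'conscious' -- so the dream,
    if it occurs anywhere in particular, would be here."""
    def __init__(self):
        self.broadcast = None

    def compete_and_broadcast(self, workspace):
        # crude stand-in for attention codelets: most salient (largest) wins
        if workspace.contents:
            self.broadcast = max(workspace.contents, key=len)
        return self.broadcast

em = EpisodicMemory([{"beach", "sun"}, {"exam", "panic", "hallway"}])
ws = Workspace()
gw = GlobalWorkspace()
ws.receive(em.cue({"hallway"}))
print(gw.compete_and_broadcast(ws))  # the "conscious" dream content
```

In this caricature the dream content becomes “conscious” only at the broadcast step, which is why the global workspace seems like the logical location.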

The global workspace appears to have one-way communication to procedural memory, which then sets off the “Action Selection Phase” of the cognitive cycle. Perhaps my still-shallow understanding of the architecture is at fault here, but if the communication along Global Workspace → Procedural Memory → Action Selection is one-way, then how can the agent store memories of the selected actions? And what is the LIDA explanation for phenomena such as imagination or visualization? Clearly there is a difference between imagined actions and actual actions, but the LIDA architecture does not appear to make such a distinction. Final action selections appear to affect only the external environment; they are not folded back into a workspace.
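My reading of that one-way path can be caricatured in code. The `internal` flag below is my own addition, marking the feedback edge that imagination would seem to require; nothing in the LIDA diagram, as I read it, licenses it:

```python
# Toy sketch of my reading of the one-way path
# Global Workspace -> Procedural Memory -> Action Selection.
# The `internal` flag is my own addition, not part of LIDA.

class ProceduralMemory:
    def __init__(self):
        # schemes: context feature -> action
        self.schemes = {"hallway": "run", "beach": "swim"}

    def instantiate(self, broadcast):
        # select schemes whose context matches the broadcast content
        return [a for ctx, a in self.schemes.items() if ctx in broadcast]

def action_selection(candidates):
    # trivial stand-in: pick the first candidate scheme
    return candidates[0] if candidates else None

def cognitive_cycle(broadcast, workspace, environment, internal=False):
    action = action_selection(ProceduralMemory().instantiate(broadcast))
    if action is None:
        return
    if internal:
        # hypothetical feedback edge: an imagined action re-enters the
        # workspace as content instead of touching the environment
        workspace.append(("imagined", action))
    else:
        environment.append(action)  # the only sink the diagram shows

ws, env = [], []
cognitive_cycle({"hallway", "panic"}, ws, env)                 # waking
cognitive_cycle({"hallway", "panic"}, ws, env, internal=True)  # dreaming?
print(env)  # ['run']
print(ws)   # [('imagined', 'run')]
```

Without something like the `internal` branch, imagined and executed actions are indistinguishable, which is exactly the gap I am puzzling over.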

If my interpretation of the architecture is correct, and dreams also involve action-selection operations, then dreams would have to occur in every LIDA module. What, then, would be the difference between dream cognition and waking cognition? Moreover, since the dream would not be confined to any single memory structure or workspace, it would be manifest in the state of the whole system. This makes dreams difficult to analyze and decompose.

After posting this initially, I started listening to one of Franklin’s lectures on LIDA. He explicitly discusses dreaming and imagination in this lecture, and states that he considers actions imagined in the mind to be similar to actions that impact the environment. Franklin sums up the discussion with the following question, which he hopes to answer by the end of the lecture: “To what extent is a person who is asleep an autonomous agent?” Franklin is clear that LIDA models autonomous agents that have sensors, effectors, and drives. Perhaps the goals present in dreams are illusions and not goals at all, which would make LIDA inappropriate for modelling them.

In this lecture he also clearly states that mental images are retrieved from episodic memory, presumably into the workspace and the global workspace. Is the agent consciously aware of these within the global workspace? This sounds like Pylyshyn’s view, where images are simply abstract symbolic representations. The current state of the sketch of the DM3 design subscribes to Kosslyn’s view, where mental images are synthesized into the early visual cortex, that is, into “perceptual memory” / the “visual buffer”. In a diagram depicting IDA (the non-learning precursor to LIDA), there is a clear link between action selection and perceptual memory. Why is this link missing in the LIDA diagram, where actions appear to impact perceptual memory only via the environment?
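The contrast I am drawing can be illustrated with a toy example (my own, not Franklin’s): a Pylyshyn-style recall keeps the image as its stored description, while a Kosslyn-style recall renders it back into a depictive visual buffer, as the DM3 sketch assumes:

```python
# Toy contrast (my own illustration): symbolic vs. depictive recall.

def recall_symbolic(episodic, key):
    # Pylyshyn-style: the mental image just IS the stored description
    return episodic[key]

def recall_depictive(episodic, key, size=5):
    # Kosslyn-style: render the description into a 2D buffer, as if it
    # were arriving from the senses ("perceptual memory" / "visual buffer")
    buffer = [[0] * size for _ in range(size)]
    for (x, y) in episodic[key]:
        buffer[y][x] = 1
    return buffer

episodic = {"diagonal": [(0, 0), (1, 1), (2, 2)]}
print(recall_symbolic(episodic, "diagonal"))   # [(0, 0), (1, 1), (2, 2)]
print(recall_depictive(episodic, "diagonal"))  # 5x5 grid with a diagonal
```

On the depictive view, a link from action selection back into perceptual memory is exactly the `recall_depictive` step; its absence from the LIDA diagram is what puzzles me.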