After writing the previous post, I had a meeting with my supervisor yesterday. He suggested that the answers to these questions of feature abstraction should be contextualized by the machine learning method used to organize the perceptual units. So I’m putting further development on the back burner while I look at the big picture and do more reading. Below is a figure of the overall structure of the whole system, as I currently imagine it.
Much of the previous system diagram is encapsulated in the “perception” module, and the remainder attempted to make some sense of the “conceptual system”. External stimuli are fed into a visual buffer (corresponding to early visual cortex), mediated by a gating system (corresponding to the reticular activating system). The content of the visual buffer is segmented and abstracted into features, which are passed on to the conceptual system (corresponding to the medial temporal lobe), which stores and organizes those perceptual units. During dreaming, the sleep / dream system closes the gate on external stimuli and controls the activation of units in the conceptual system, based on latent activation carried over from the previous waking state. The activation of perceptual units allows new images to be constructed in the visual buffer, which roughly corresponds to Kosslyn’s conception of encoded memories in the medial temporal lobe constructing images in early visual cortex.
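To make the data flow concrete for myself, here is a minimal sketch of the architecture described above. All class and function names are hypothetical placeholders of my own invention (the figure doesn’t specify an implementation), and the “feature abstraction” step is a toy stand-in for whatever machine learning method eventually fills that role.

```python
# Illustrative sketch only: hypothetical names, toy feature abstraction.

class GatingSystem:
    """Mediates external stimulus (reticular activating system analogue)."""
    def __init__(self):
        self.open = True  # open while awake; closed during dreaming

    def pass_stimulus(self, stimulus):
        return stimulus if self.open else None


class VisualBuffer:
    """Holds the current image (early visual cortex analogue)."""
    def __init__(self):
        self.content = None

    def load(self, image):
        self.content = image


class ConceptualSystem:
    """Stores and organizes perceptual units (medial temporal lobe analogue)."""
    def __init__(self):
        self.units = []       # stored perceptual units (features)
        self.activation = {}  # latent activation carried over from waking

    def store(self, features):
        for f in features:
            if f not in self.units:
                self.units.append(f)
            # waking exposure leaves latent activation behind
            self.activation[f] = self.activation.get(f, 0.0) + 1.0

    def construct_image(self, active_units):
        # top-down image construction from activated units (Kosslyn-style)
        return tuple(sorted(active_units))


def abstract_features(stimulus):
    """Segment buffer content into features (toy: split text on whitespace)."""
    return stimulus.split()


def wake_step(stimulus, gate, buffer, concepts):
    """Waking: external stimulus -> visual buffer -> features -> concepts."""
    gate.open = True
    buffer.load(gate.pass_stimulus(stimulus))
    concepts.store(abstract_features(buffer.content))


def dream_step(gate, buffer, concepts, n=2):
    """Dreaming: the gate closes and the most latently active units
    drive construction of a new image in the visual buffer."""
    gate.open = False
    ranked = sorted(concepts.activation,
                    key=concepts.activation.get, reverse=True)
    image = concepts.construct_image(ranked[:n])
    buffer.load(image)  # new image produced in the visual buffer
    return image
```

The point of the sketch is just the direction of flow: in waking, stimuli travel bottom-up from gate to buffer to conceptual system; in dreaming, the gate closes and the flow reverses, with stored units constructing images top-down in the same buffer.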