In macaque ventral premotor cortex, we recorded the activity of neurons that responded to both visual and tactile stimuli. For these bimodal cells, the visual receptive field extended from the tactile receptive field into the adjacent space. Their tactile receptive fields were organized topographically, with the arms represented medially, the face represented in the middle, and the inside of the mouth represented laterally. For many neurons, both the visual and tactile responses were directionally selective, although many neurons also responded to stationary stimuli. In awake monkeys, for 70% of bimodal neurons with a tactile response on the arm, the visual receptive field moved when the arm was moved. In contrast, for 0% the visual receptive field moved when the eye or head moved. Thus the visual receptive fields of most "arm + visual" cells were anchored to the arm, not to the eye or head. In the anesthetized monkey, the effect of arm position was similar. For 95% of bimodal neurons with a tactile response on the face, the visual receptive field moved as the head was rotated. In contrast, for 15% the visual receptive field moved with the eye, and for 0% it moved with the arm. Thus the visual receptive fields of most "face + visual" cells were anchored to the head, not to the eye or arm. To construct a visual receptive field anchored to the arm, it is necessary to integrate the positions of the arm, head, and eye. For arm + visual cells, the spontaneous activity, the magnitude of the visual response, or both were modulated by the position of the arm (37%), the head (75%), and the eye (58%). In contrast, to construct a visual receptive field anchored to the head, it is necessary to use the position of the eye, but not of the head or the arm. For face + visual cells, the spontaneous activity and/or response magnitude was modulated by the position of the eyes (88%), but not of the head or the arm (0%).
Visual receptive fields anchored to the arm can encode stimulus location in "arm-centered" coordinates, and would be useful for guiding arm movements. Visual receptive fields anchored to the head can likewise encode stimuli in "head-centered" coordinates, useful for guiding head movements. Sixty-three percent of face + visual neurons responded during voluntary movements of the head. We suggest that "body-part-centered" coordinates provide a general solution to a problem of sensory-motor integration: sensory stimuli are located in a coordinate system anchored to a particular body part.
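The coordinate transformation described above can be sketched numerically. To report a stimulus location relative to the arm, a retinal signal must be combined with eye, head, and hand position. The following minimal 2-D sketch is our own illustrative geometry, not the authors' model: the eye and head are treated as sharing the trunk origin, and positions are combined by simple additive angles.

```python
import math

def retinal_to_arm_centered(retinal_angle_deg, distance,
                            eye_in_head_deg, head_on_trunk_deg,
                            hand_pos_trunk):
    """Convert a stimulus at a given retinal angle and distance into
    coordinates relative to the hand (2-D, trunk-fixed frame).

    Simplifying assumption: eye and head rotate about the trunk origin,
    so orientations combine additively."""
    # Direction of the stimulus in trunk coordinates: the retinal angle
    # shifted by the current eye and head orientations.
    gaze_deg = retinal_angle_deg + eye_in_head_deg + head_on_trunk_deg
    stim_x = distance * math.cos(math.radians(gaze_deg))
    stim_y = distance * math.sin(math.radians(gaze_deg))
    # Arm-centered location: subtract the hand's position. The result is
    # unchanged by eye or head rotation as long as the stimulus stays at
    # the same place in the world, but it shifts when the hand moves.
    return (stim_x - hand_pos_trunk[0], stim_y - hand_pos_trunk[1])

# A world-fixed stimulus 0.2 m away at gaze angle 30 deg, hand at (0.1, 0).
a = retinal_to_arm_centered(30, 0.2, 0, 0, (0.1, 0.0))
# The eye rotates 10 deg; the same world point now falls at retinal 20 deg.
b = retinal_to_arm_centered(20, 0.2, 10, 0, (0.1, 0.0))
# a == b: the arm-centered location is invariant to the eye movement.
```

This invariance is the computational signature of the arm-anchored receptive fields reported above: the cell's response region stays fixed to the hand while gaze varies freely.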
In the macaque, neurons in ventral premotor cortex and in the putamen have tactile receptive fields on the face or arms, and corresponding visual receptive fields which extend outward from the tactile fields into the space near the body. For cells with tactile receptive fields on the arm, when the arm is moved, the corresponding visual receptive fields move with it. However, when the eyes move, the visual receptive fields remain stationary, "attached" to the arm. We suggest that these "arm-centered" visual responses play a role in visuo-motor guidance. We predict that other portions of the somatotopic map in premotor cortex and the putamen contain similar receptive fields, centered on the corresponding body parts. This "body-part-centered" representation of space is only one of several ways in which space is represented in the brain.
Graziano, M. S. A., & Gross, C. G. (1995). From eye to hand. In J. King & K. H. Pribram (Eds.), Scale in Conscious Experience: Is the Brain too Important to be Left to Specialists to Study? (pp. 117-129). Lawrence Erlbaum Associates.
How are we able to reach accurately toward objects near us and avoid ones that are approaching, even though the objects and our own eyes, head, limbs and body may be continually changing positions? How does the brain construct a representation of the visual space surrounding the body, and how does this representation guide movement?
Lesions of posterior parietal cortex have long been known to produce visuo-spatial deficits in both humans and monkeys. Yet there is no known "map" of space in parietal cortex. Posterior parietal cortex projects to a number of other areas that are involved in specialized spatial functions. In these areas space is represented at the level of single neurons, and in many of them there is a topographically organized map of space. These extra-parietal areas include premotor cortex and the putamen, involved in visuomotor space; the frontal eye fields and the superior colliculus, involved in oculomotor space; the hippocampus, involved in environmental space; and dorsolateral prefrontal cortex, involved in mnemonic space. In many of these areas, space is represented by means of a coordinate system that is fixed to a particular body part. Thus, the processing of space is not unitary, but is divided among several brain areas and several coordinate systems in addition to those in posterior parietal cortex.
We propose that extrapersonal space is represented in the brain by bimodal, visual-tactile neurons in: 1) inferior area 6 in the frontal lobe, 2) area 7b in the parietal lobe, and 3) the putamen. In each of these areas, there are cells that respond to both tactile and visual stimuli. In each area, the tactile receptive fields are arranged to form a somatotopic map. The visual receptive fields are usually adjacent to the tactile ones and extend outward from the skin about 20 cm. Thus each area contains a somatotopically organized map of the visual space that immediately surrounds the body. These three areas are monosynaptically interconnected, and may form a distributed system for representing extrapersonal visual space. For many neurons with tactile receptive fields on the arm or hand, when the arm was moved, the visual receptive field moved with it. Thus, these neurons appear to code the location of visual stimuli in arm-centered coordinates. More generally, we suggest that the bimodal cells represent near, extrapersonal space in a body-part-centered fashion, rather than in an exclusively head-centered or trunk-centered fashion.
Cells in the dorsal division of the medial superior temporal area (MSTd) have large receptive fields and respond to expansion/contraction, rotation, and translation motions. These same motions are generated as we move through the environment, leading investigators to suggest that area MSTd analyzes optic flow. One influential idea suggests that navigation is achieved by decomposing the optic flow into the separate and discrete channels mentioned above, i.e. expansion/contraction, rotation, and translation. We directly tested whether MSTd neurons perform such a decomposition by examining whether there are cells that are preferentially tuned to intermediate spiral motions, which combine both expansion/contraction and rotation components. The finding that many cells in MSTd are preferentially selective for spiral motions indicates that this simple three-channel decomposition hypothesis for MSTd is incorrect. Instead, there appears to be a continuum of patterns to which MSTd cells are selective. In addition, we find that MSTd cells maintain their selectivity when stimuli are moved to different locations in their large receptive fields. This position invariance indicates that MSTd cells selective for expansion cannot give precise information about the retinal location of the focus of expansion. Thus individual MSTd neurons cannot code the direction of heading by using the location of the focus of expansion. The only way this navigational information could be derived from MSTd is through the use of a coarse, population encoding. Position invariance and selectivity for a wide array of stimuli suggest that MSTd neurons encode patterns of motion per se, regardless of whether these motions are generated by moving objects or by motion induced by observer locomotion.
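The continuum of spiral motions can be made concrete with a single "spiral angle" parameter: pure expansion and pure rotation are the two ends, and intermediate angles blend them. The sketch below is our own illustrative parameterization of such stimuli, following the description above; the function name and conventions are assumptions, not the authors' stimulus code.

```python
import math

def spiral_flow(points, spiral_angle_deg, speed=1.0):
    """Velocity vector at each (x, y) dot position for a spiral motion.

    spiral_angle 0 deg = pure expansion, 90 deg = pure counterclockwise
    rotation, 180 deg = contraction; intermediate angles give the spiral
    motions that combine expansion/contraction and rotation."""
    c = math.cos(math.radians(spiral_angle_deg))  # expansion weight
    s = math.sin(math.radians(spiral_angle_deg))  # rotation weight
    flows = []
    for x, y in points:
        # Expansion moves a dot radially outward (vx, vy) = (x, y);
        # rotation moves it tangentially (vx, vy) = (-y, x);
        # a spiral is their weighted sum.
        vx = speed * (c * x - s * y)
        vy = speed * (c * y + s * x)
        flows.append((vx, vy))
    return flows

# A dot at (1, 0): expansion drives it outward along +x,
# a 90-degree spiral angle drives it tangentially along +y.
expansion = spiral_flow([(1.0, 0.0)], 0)    # radial motion
rotation = spiral_flow([(1.0, 0.0)], 90)    # tangential motion
```

A cell tuned to an intermediate spiral angle, rather than to one of the endpoints, is exactly the kind of response that rules out a discrete three-channel decomposition.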
The left fielder squints at the baseball as it curves toward him. He adjusts his hand and body, and the ball lands in his mitt. Somehow, the changing pattern of light on his retina was transformed into a motor command which brought his hand to the correct location for catching the ball. How was this accomplished? Is there a map of visual space in the brain which encodes the location of the ball and the fielder's glove? In this article, we review some recent experiments using monkeys, on visuo-motor transformations in the brain. We ask how neurons represent the location of a stimulus for the purposes of looking at it, reaching toward it, or avoiding it.
In primates, the premotor cortex is involved in the sensory guidance of movement. Many neurons in ventral premotor cortex respond to visual stimuli in the space adjacent to the hand or arm. These visual receptive fields were found to move when the arm moved but not when the eye moved; that is, they are in arm-centered, not retinocentric, coordinates. Thus, they provide a representation of space near the body that may be useful for the visual control of reaching.
Two monkeys were trained on an auditory-visual (AV) delayed matching-to-sample (DMS) task with auditory cues serving as sample stimuli and visual cues serving as comparison stimuli. To determine whether the monkeys were remembering auditory or visual information during the delay period, auditory and visual interference were presented following the sample stimulus. Auditory interference had little effect on AV DMS performance. In contrast, visual interference severely impaired AV DMS performance, indicating that the monkeys were remembering visual information during the delay period. This finding may reflect a predisposition of monkeys toward remembering information via their dominant visual modality.
The macaque putamen contains neurons that respond to somatosensory stimuli such as light touch, joint movement, or deep muscle pressure. Their receptive fields are arranged to form a map of the body. In the face and arm region of this somatotopic map we found neurons that responded to visual stimuli. Some neurons were bimodal, responding to both visual and somatosensory stimuli, while others were purely visual, or purely somatosensory. The bimodal neurons usually responded to light cutaneous stimulation, rather than to joint movement or deep muscle pressure. They responded to visual stimuli near their tactile receptive field and were not selective for the shape or the color of the stimuli. For cells with tactile receptive fields on the face, the visual receptive field subtended a solid angle extending from the tactile receptive field to about 10 cm. For cells with tactile receptive fields on the arm, the visual receptive field often extended further from the animal. These bimodal properties provide a map of the visual space that immediately surrounds the monkey. The map is organized somatotopically, that is, by body part, rather than retinotopically as in most visual areas. It could function to guide movements in the animal's immediate vicinity. Cortical areas 6, 7b, and VIP contain bimodal cells with properties very similar to those in the putamen. We suggest that the bimodal cells in areas 6, 7b, VIP, and the putamen form part of an interconnected system that represents extrapersonal space in a somatotopic fashion.
Dr. Stein begins by claiming "very little evidence has been found for the existence of a topographic map of perceptual space" and ends by stating that "indeed there is no evidence for a region...where egocentric space is represented topographically." In lieu of a map, he offers us a "neural network," that is, "a distributed system of rules for information processing that can be used to transform signals from one coordinate system to another." Such a computational scheme might indeed work; however, it is quite unnecessary, since a neuronal topographic map of visual space does exist, at least for the region adjacent to the body, i.e. immediate extrapersonal space. As there is good evidence for more than one such map in the primate brain, the question would seem to be what their different functions are, rather than how we can erect a computational network to do without them.
Recent work on the visual system of primates has delineated several cortical fields involved in the processing of visual motion. These cortical areas appear to be connected anatomically in stages, which suggests that there is a hierarchy in the machinery for motion perception. In this paper, we outline experiments that we have performed along the most prominent pathway for motion analysis, which begins in area V1 and proceeds through the middle temporal area (MT) to the medial superior temporal area (MST).