Spatial Updating of Goal Directions

 

Behavioral Aspects

 

Animals are not always motionless when planning goal-directed movements; it is thus important to compensate for changes in the spatial relationship between the internal representation and the external goal induced by self-motion. The best-studied case of this in the laboratory is the double-step task, where subjects are required to look, reach, or point toward a target that was viewed before an intervening eye movement (). It has been demonstrated that saccades can be aimed with reasonable accuracy toward remembered targets after an intervening saccade (), smooth pursuit eye movement (), eye-head gaze shift (), full-body rotation (), and torsional rotation of the eyes, head, and body (). Likewise, humans and monkeys are able to reach or point toward remembered targets after an intervening eye movement or full-body rotation (). This ability to compensate for self-motion is called spatial updating.

 

Most of the studies quoted above were done in completely dark conditions that forced subjects to rely on their own egocentric sense of target direction. Why would the brain develop such egocentric representations, when in real life targets usually remain in view and it can also rely on a rich set of allocentric cues? In some cases targets may no longer be in view after a large gaze shift. But probably the most important reason is speed. A basketball player cannot afford the time to stop and visually re-calculate the direction of the basket after each movement; instead, the player must continuously update its location in order to be ready to place an accurate shot at a moment’s notice. Moreover, it has been shown experimentally that humans are more accurate at aiming movements when they can combine a spatially updated memory of the goal with new visual information ().

 

Possible Mechanisms

 

For visual targets, the problem of spatial updating arises during any type of self-motion that changes the retinal location of the goal. This is particularly a problem when the goal is internally represented in gaze-centered coordinates, as described in the last lecture. If the brain simply retains the same gaze-centered information, it will no longer be valid at the new body position.

 

One way to get around this is to transform the information into a more stable frame of reference. For example, by comparing the retinal location of an object to eye position information, the brain could calculate target direction relative to the head, which would then be independent of subsequent eye movements (although it would still need to be updated during head movement). By comparing this to head orientation (and accounting for the linkage geometry described in lecture 2), one could calculate, for example, a reach goal with respect to the shoulder (although this would still need to be updated during body motion), and so on.
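As a rough illustration of this chain of transformations, the sketch below re-expresses a target position from eye-centered, to head-centered, to shoulder-centered coordinates. It is a minimal, hypothetical example (all function names, angles, and offsets are invented here), restricted to rotations about the vertical axis; the real 3-D linkage geometry is considerably more complex.

```python
import numpy as np

def rot_z(angle_deg):
    """Rotation matrix about the vertical axis (horizontal rotation only)."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def eye_to_head(p_eye, eye_in_head_deg, eye_offset_in_head):
    """Re-express a target position from eye-centered to head-centered
    coordinates: rotate by the current eye-in-head angle, then add the
    fixed offset of the eye within the head. The result no longer changes
    with eye movements (but still changes with head movements)."""
    return rot_z(eye_in_head_deg) @ p_eye + eye_offset_in_head

def head_to_shoulder(p_head, head_on_body_deg, head_offset_on_shoulder):
    """Re-express a head-centered position in shoulder-centered coordinates,
    accounting for head-on-body orientation and the head-to-shoulder linkage
    offset. The result is stable across eye and head movements, but still
    needs to be updated during body motion."""
    return rot_z(head_on_body_deg) @ p_head + head_offset_on_shoulder

# Hypothetical numbers (metres / degrees), purely for illustration:
p_eye = np.array([0.50, -0.10, 0.00])   # target ~0.5 m ahead, slightly right
p_head = eye_to_head(p_eye, eye_in_head_deg=20.0,
                     eye_offset_in_head=np.array([0.00, 0.03, 0.05]))
p_shoulder = head_to_shoulder(p_head, head_on_body_deg=-10.0,
                              head_offset_on_shoulder=np.array([0.00, 0.20, 0.15]))
print(p_head, p_shoulder)
```

The design point is simply that each successive comparison removes the dependence on one class of movement (eye, then head) at the cost of requiring the corresponding extraretinal signal.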

 

A problem with this scheme is that, although motor commands are eventually encoded in effector-specific coordinates (and other systems use entirely different coding schemes), there is little evidence that early visual goals for saccades and reach are encoded in anything but visual coordinates. Areas like LIP and PRR show gaze-position-dependent modulations that could be consistent with the calculation of the visual goal in other frames, but explicit coding of the goal in those frames has not been observed. Stimulation of some brain areas produces saccades toward a head- or body-centered goal, but all of these observations may have more to do with motor implementation than with spatial updating.

 

An alternative is to use an internal sense of self-motion to ‘remap’ the representation of the goal within gaze-centered coordinates to a new, appropriate location. In 2-D models of the saccade system, this has been simulated by subtracting a vector representing the first, intervening eye movement from another vector representing the initial retinal location of the second saccade target, to obtain a vector representing the final saccade direction. In the real world this does not quite work because, for example, during torsional rotations of the eyes, goals on opposite sides of gaze need to be updated in opposite directions (). Moreover, vector subtraction predicts errors across non-torsional saccades related to the non-commutativity of rotations (). However, if we replace the idea of vector subtraction with rotation (a non-linear operation), the principle is the same.
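The sketch below contrasts the two schemes: a 2-D vector subtraction and a full 3-D rotational update of the remembered goal direction. It is a minimal, hypothetical illustration (the function names and numbers are invented here), not a model from the literature; the torsion example at the end shows why the rotational form is needed, since goals on opposite sides of gaze end up displaced in opposite directions.

```python
import numpy as np

def remap_vector_subtraction(goal_retinal, saccade):
    """2-D approximation: shift the remembered goal by the intervening
    saccade vector (both expressed in degrees of visual angle)."""
    return np.asarray(goal_retinal, float) - np.asarray(saccade, float)

def rotation_about_axis(axis, angle_deg):
    """Rotation matrix for a rotation of angle_deg about a unit axis
    (Rodrigues' formula)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def remap_rotation(goal_direction, eye_rotation):
    """3-D updating: the goal is fixed in space, so after the eye rotates
    by eye_rotation, its direction in the new gaze-centered frame is found
    by applying the inverse (transpose) of that rotation."""
    return eye_rotation.T @ np.asarray(goal_direction, float)

# A purely torsional rotation of 10 deg about the line of sight (z-axis):
# the 2-D subtraction scheme sees a zero saccade vector and predicts no
# change, whereas the rotational update displaces goals on opposite sides
# of gaze in opposite vertical directions.
R_torsion = rotation_about_axis([0.0, 0.0, 1.0], 10.0)
left_goal  = np.array([-0.17, 0.0, 0.98])   # ~10 deg left of fixation
right_goal = np.array([ 0.17, 0.0, 0.98])   # ~10 deg right of fixation
print(remap_rotation(left_goal,  R_torsion))   # shifted upward in this frame
print(remap_rotation(right_goal, R_torsion))   # shifted downward in this frame
```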

 

Experimental Evidence for Remapping for Saccades and Reach

 

It is now well established that remapping occurs in every area of the monkey brain associated with saccade goal coding, including early visual areas (), LIP (), SEF (), FEF (), and the superior colliculus (). Many neurons in these areas show peri-saccadic changes consistent with a recalculation of future saccade goals with respect to the new eye position, sometimes beginning even before the saccade ().

 

But what about reach, and how does it work in the human?

 

Some years ago, we used a psychophysical approach to demonstrate that humans also use gaze-centered remapping to update manual pointing goals across saccades (Figure). In brief, subjects were required to make a saccade in the interim between seeing and pointing toward a remembered visual target (). Their pointing errors were not consistent with the pattern predicted if they encoded the targets in non-retinal coordinates at the time of viewing, but rather with the pattern predicted if the visual goal were remapped in gaze-centered coordinates (Figure). This experiment has been repeated a number of times, showing the same result for near targets … (). The only case in which the opposite result has been reported for simple flashed targets was in optic ataxia patients with bilateral parietal damage, who could still indicate the general direction of the target but were apparently forced to rely on some other, non-retinal mechanism ().

 

This finding has received independent confirmation from a number of methodologies. First, saccade-related remapping signals were observed in PRR of the monkey (). Second, in an event-related fMRI paradigm, both saccade- and pointing-related activity was shown to remap between the intraparietal sulci of opposite hemispheres of human cortex during saccades (Figure). Consistent with these results, the gaze-centered deficits associated with unilateral optic ataxia remap during saccades, not only producing a degradation of reaching performance when goals are remapped from the ‘good’ to the ‘bad’ hemifield, but remarkably leading to more accurate reaching when the opposite remapping occurs ().

 

Other corroborating evidence for remapping in humans includes … (Heide, Morris, Colby).

 

These findings do not show that gaze-centered remapping is the only mechanism used by the brain, even for spatial updating of movement goals, but they do suggest that it is the dominant mechanism for updating egocentric representations during simple visual goal-directed movements.