Goal Coding.

 

1) What is a spatial goal for movement, and where is it encoded?

 

'Goal' is a word that is often used but rarely defined. When I refer to a goal for a saccade or a reach, I mean a high-level representation of the desired endpoint of an effector (a gaze direction, a hand position).

 

One way to define this is as the spatially selective activity that remains through the (short-term) memory interval between seeing and acting. Consistent with this, spatially selective activation in PPC areas such as SPOC and middle IPS endures when subjects are required to wait between seeing a target and acting upon it. The ultimate purpose of such representations is to let us dissociate our intentions from sensory and motor events: for example, to plan complex movement sequences in which goals for future movements are encoded at the same time as new information is acquired and previous commands are being implemented. Likewise, goals need not coincide with the current gaze direction (they never do for saccades, and need not for reaches).

 

This suggests that goals should be spatially separable from both purely sensory and purely motor events. This is best illustrated with real experimental examples.

 

In anti-saccade (or anti-pointing) tasks, subjects are trained or instructed to move in the direction opposite to the stimulus (pro-saccades are the controls, in which the movement is made directly toward the target). Subjects must therefore internally construct a goal opposite to the stimulus and move in that direction.
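
As a toy illustration (my own sketch in Python, not something taken from the experiments themselves), the anti-task amounts to mirroring the stimulus direction to obtain the goal in a gaze-centered frame:

    # Hypothetical sketch: constructing an anti-saccade goal from a stimulus
    # direction given in degrees relative to current gaze (negative = left).
    def anti_goal(stimulus_direction_deg: float) -> float:
        """The goal is the mirror of the stimulus, e.g. +10 deg -> -10 deg."""
        return -stimulus_direction_deg

The point of the paradigm is that stimulus and goal now lie in opposite directions, so activity tuned to one can be distinguished from activity tuned to the other.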

 

Recordings from monkey PPC (LIP/MIP) during anti-tasks suggest that most neurons are tuned for the movement direction, some encode the visual stimulus direction, and some switch from the latter to the former during the trial. Similarly, human fMRI has shown that the middle IPS is spatially selective for direction during pro-saccades/reaches, and ‘remaps’ this activity to the opposite direction during anti-tasks.

 

Thus, anti-paradigms can be used to dissociate the visual stimulus from the goal and the movement. But can the movement be dissociated from the goal?

 

One attempt to do this was to train subjects to point while looking through reversing prisms. This adaptation can occur so rapidly that subjects are often unaware they have been trained. In this situation, they see the stimulus on one side (and believe they are still aiming toward it), but in reality the stimulus, and the movement they make, are on the opposite side.

 

When human PPC was tested in this paradigm, the spatial activity in most areas (SPOC, middle IPS, Area 7, V3) remained tied to the perceived direction of the goal, not to the direction of the movement. Only the angular gyrus showed the opposite effect.

 

 

Consistent with this, perturbations of reach trajectories induced by TMS over SPOC are not corrected even when subjects are provided with visual feedback of the hand, suggesting that it is the goal position, not the hand position, that was disrupted.

 

fMRI does not reveal detailed cell-to-cell differences, but comparing the overall patterns observed in these two paradigms (reversing prism vs. anti-saccades) suggests that the earlier occipital-parietal areas (V3, SPOC) are primarily concerned with goal coding. We will return to the function of the other areas in a subsequent lecture.

 

2) Egocentric vs. Allocentric coding.

 

There are two basic ways to encode goals. The first is in egocentric coordinates: goals could be encoded relative to the retina, relative to the head, or relative to some part of the torso. We will return to the details of these frames in a subsequent lecture. For visually guided movements, all of these signals are present, along with proprioceptive signals of eye, head, and arm position, and numerous studies have shown that both monkeys and humans are able to aim movements based on this information alone (with certain small systematic errors).

 

The second general method is to encode goals in allocentric coordinates. In this case, goals are coded relative to some other spatial cue in the environment that is deemed stable enough, and well known enough, to act as a good frame of reference. This could be anything from the corner of a desktop (as a reference for where I put my coffee cup) to the location of a city relative to a distinctive set of mountain peaks.

 

It is thought that when both types of cue are available but do not agree perfectly, the brain weights them, arriving at an estimate somewhere in between. One advantage of allocentric coding is that it does not appear to decay as rapidly as egocentric coding, so allocentric coding is thought to predominate as memory delays increase. On the other hand, when allocentric cues are judged to be unreliable, they are weighted less.
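
One simple way to picture this weighting (a sketch of my own, not a specific model from the literature) is as a reliability-weighted average of the two goal estimates:

    # Hypothetical sketch: reliability-weighted combination of egocentric and
    # allocentric goal estimates. The weight w_allo is assumed to grow with the
    # judged reliability of the allocentric cue and to shrink when that cue is
    # unstable, favouring the egocentric estimate instead.
    def combine_goals(ego_goal: float, allo_goal: float, w_allo: float) -> float:
        """Return a combined goal estimate lying between the two cues."""
        return w_allo * allo_goal + (1.0 - w_allo) * ego_goal

With w_allo = 0 the estimate is purely egocentric, with w_allo = 1 purely allocentric; intermediate weights place the combined goal somewhere in between, consistent with the behavioural findings described above.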

 

The medial temporal cortex, particularly the hippocampus, is well known to use a combination of egocentric and allocentric signals to encode navigational goals in long-term memory. However, much less is known about the neural mechanisms of allocentric coding for simple goal-directed actions.

 

It is generally agreed that the ‘dorsal stream’ of vision, terminating in parietal-frontal movement areas, is by default involved in egocentric coding, whereas the ventral stream (including occipital-temporal areas involved in object recognition) is more associated with allocentric coding. Committeri et al. (2004) found that movements aimed toward stable objects activated parieto-frontal cortex, whereas movements aimed toward locations defined relative to mobile objects also activated ventral-lateral temporal cortex.

 

Wherever the allocentric signals originate, they must influence behavior somehow, which suggests that they must enter some high level of the ‘action’ stream. Consistent with this, Olson and colleagues have shown object-centered spatial tuning in SEF units in monkeys trained and cued to saccade toward one particular end of an object. Similar responses have been shown for area 7a and LIP.

 

Much more is known about the egocentric mechanisms, and this is where we will concentrate for the rest of the course.

 

3) Extrapolation of goals.

 

Goals are not limited to static visible targets, or even to stationary targets that have disappeared; they might also be inferred from other information. One example we have already seen is anti-saccades/reaches. In the case of allocentric coding, one might infer the location of a goal from some other reference without ever seeing the goal itself (for example, when giving directions to a distant location we often point relative to visible landmarks).

 

Finally, goals might be inferred from a combination of position and velocity information, i.e., when predicting the interception point of a moving target. When this information is limited (for example by occlusion), subjects tend to extrapolate linearly from the last available trajectory information (Engel and Soechting).

 

There are two general ways this might happen. First, the brain might extrapolate the entire predicted path of the moving object. If it can do this, then the problem of interception is reduced to calculating where the object will be along this path at the moment of interception.

 

Second, it might explicitly calculate the target's future location from its current position (P), its velocity (V), and the change in time (TC) relative to the current time (t), like so:

 

Goal = P(t) + TC x V(t).
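
In code form, this second computation is just a first-order (linear) prediction; the sketch below is illustrative only, with P, V, and TC as defined above:

    # Hypothetical sketch: explicit extrapolation of an interception goal from the
    # target's current position P (e.g., in cm), its velocity V (cm per second),
    # and the time TC (seconds) remaining until the planned interception.
    def predicted_goal(P: float, V: float, TC: float) -> float:
        """Predicted target location: current position plus velocity times time."""
        return P + V * TC

For example, a target currently 10 cm to the left (P = -10) moving rightward at 20 cm/s (V = +20) and intercepted 0.5 s from now (TC = 0.5) gives a predicted goal of -10 + 20 x 0.5 = 0, i.e., straight ahead.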

 

Very little is known about the neural mechanisms of such goal coding, but presumably it requires the combination of signals from the visual motion processing complex (MT/MST in monkeys) with the reach/saccade areas described above.