Calculation of the Movement Vector

 

One classical view of motor control is that the brain programs only the final position, setting muscular tensions so that the effector drifts toward that set point, at a rate determined by the visco-elastic properties of muscle. Although this theoretical view survives in some modified forms, neurophysiological recordings have shown that movements are associated with a mix of static and dynamic signals (for example, velocity and orientation signals in the case of saccades). Moreover, the early visuomotor signals associated with movement planning generally seem to code the vector required to take the effector from its current to its final position.
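
To make the classical view concrete, here is a minimal simulation sketch (Python; the rate constant and positions are illustrative, not drawn from any particular study): the controller specifies only a set point, and position relaxes toward it at a rate fixed by a lumped visco-elastic parameter, with no explicit velocity command.

```python
import numpy as np

def equilibrium_point_drift(x0, x_set, k=8.0, dt=0.001, t_end=0.5):
    """Simulate first-order visco-elastic drift toward a set point.

    dx/dt = k * (x_set - x): position relaxes exponentially toward the
    programmed equilibrium; k (1/s) lumps stiffness and viscosity.
    """
    n = int(t_end / dt)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + k * (x_set - x[i - 1]) * dt
    return x

trajectory = equilibrium_point_drift(x0=0.0, x_set=10.0)
print(trajectory[-1])  # approaches 10.0 without any explicit velocity signal
```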

 

In the case of visually-guided gaze shifts, the initial calculation of this vector is somewhat trivial, because the retina automatically encodes visual direction relative to the fovea. In this special case (eye-fixed, gaze-centered coordinates, distant targets), desired gaze direction and desired gaze displacement are equivalent. This becomes more complicated for saccades in depth, where information about current and desired binocular fixation must be compared to program a disconjugate component; consistent with this, saccade-related neurons in LIP show modulations according to both initial and desired depth (e.g. Genovesio et al. 2007). This calculation also becomes more complicated for non-visual targets (we will take this up in a later section).
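
A toy illustration of the two cases (Python; the symmetric-vergence geometry and all values are simplifying assumptions of ours): for a distant target, the retinal error already is the desired gaze displacement, whereas the disconjugate component of a saccade in depth must be computed by comparing the vergence angles for current and desired fixation.

```python
import numpy as np

# Distant target: retinal error (target re: fovea, eye-fixed coordinates)
# directly specifies the desired gaze displacement.
retinal_error_deg = np.array([5.0, -2.0])        # horizontal, vertical
desired_gaze_displacement = retinal_error_deg    # the equivalence is the point

# Saccades in depth: the disconjugate component must be computed by
# comparing current and desired binocular fixation.
def vergence_deg(depth_m, iod_m=0.065):
    """Vergence angle needed to fixate a point at a given depth
    (symmetric geometry; iod = interocular distance)."""
    return np.degrees(2.0 * np.arctan(iod_m / (2.0 * depth_m)))

current_depth, target_depth = 0.5, 2.0           # metres (illustrative)
disconjugate = vergence_deg(target_depth) - vergence_deg(current_depth)
print(disconjugate)  # negative: the eyes must diverge to the farther target
```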

 

For visually-guided reach, the situation is always complicated because there is no fixed relationship between initial or desired hand position and the retina. The only way to compute the reach vector is to compare initial and desired hand position. Since we are now talking about translational motion, it is sufficient to subtract a vector representing initial hand position from a vector representing desired hand position, assuming they are both defined in the same frame.
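
In other words, with both positions expressed in a common frame, the movement vector is simply the difference of two position vectors. A minimal sketch (Python; the coordinate values are arbitrary):

```python
import numpy as np

def reach_vector(hand_initial, hand_desired):
    """Movement vector for a translation: valid only if both positions
    are expressed in the same frame (here, arbitrary 3-D coordinates)."""
    hand_initial = np.asarray(hand_initial, dtype=float)
    hand_desired = np.asarray(hand_desired, dtype=float)
    return hand_desired - hand_initial

m = reach_vector(hand_initial=[0.1, -0.2, 0.3], hand_desired=[0.4, 0.1, 0.3])
print(m)  # -> approximately [0.3, 0.3, 0.0]
```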

 

There are two ways to derive an estimate of initial hand position: through vision and through proprioception. Sober and Sabes showed that when both are present and the target is visual, humans use both, optimally integrated, but tend to rely more on vision (although they rely more on proprioception to build up the internal models discussed in the last section).
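
'Optimal integration' here is standardly modeled as reliability-weighted averaging, in which each cue is weighted by its inverse variance, so the more reliable cue (often vision) dominates. A sketch under that assumption (the variances below are illustrative, not Sober and Sabes' estimates):

```python
def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (maximum-likelihood) combination of two
    independent Gaussian estimates of initial hand position."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Vision assumed more reliable, so it receives the larger weight.
x_hat, var_hat = integrate_cues(x_vis=10.0, var_vis=1.0,
                                x_prop=12.0, var_prop=4.0)
print(x_hat, var_hat)  # 10.4, 0.8: estimate pulled toward vision,
                       # combined variance below either cue alone
```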

 

The question then arises: which frame is used for these comparisons? It was assumed for many years that both goal and hand position signals were transformed into somatosensory coordinates before the movement vector was calculated. However, Buneo showed that neurons in PPC (area 5, and especially PRR) can show gaze-centered responses with hand position modulations, consistent with calculation of the movement vector in visual coordinates. Moreover, these responses persisted even when the hand was not visible, suggesting that proprioceptively derived estimates were being transformed into gaze-centered coordinates.
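
One way to picture the implied transformation (a purely geometric sketch, not a model of the neural computation; the single-axis eye rotation and all values are our simplifications): a proprioceptive hand estimate in body coordinates is translated to the eye and rotated by current gaze orientation to yield a gaze-centered hand position.

```python
import numpy as np

def rotation_z(angle_deg):
    """Rotation about the vertical axis by the given angle."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def hand_in_gaze_coords(hand_body, eye_pos_body, gaze_angle_deg):
    """Re-express a (proprioceptively derived) hand position, given in
    body coordinates, in a gaze-centered frame: translate to the eye,
    then rotate into the line-of-sight frame."""
    offset = np.asarray(hand_body, float) - np.asarray(eye_pos_body, float)
    return rotation_z(gaze_angle_deg).T @ offset

# Illustrative values only: hand 40 cm ahead of the body origin, eyes
# 30 cm above it, gaze rotated 20 degrees to the left.
print(hand_in_gaze_coords([0.4, 0.0, 0.0], [0.0, 0.0, 0.3], 20.0))
```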

 

Since then, evidence has accumulated to support this notion. For example, PRR neurons show gaze-centered goal responses with anti-correlated gaze and hand position modulations, consistent with the calculation of a gaze-centered reach vector (Chen and Snyder). Similar ‘relative position codes’ have also been reported in premotor cortex (Pesaran). As with saccades, these relative position codes would have to be 3-D (including target depth components and 3-D estimates of hand position), but this aspect of the comparison does not appear to have been investigated yet.
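
One way to formalize a relative position code (a hypothetical 1-D tuning model; the Gaussian form and all parameters are ours, not fitted to data): the response depends only on the pairwise differences among target (T), hand (H), and gaze (G) positions, so it is invariant to translating the whole scene together.

```python
import numpy as np

def relative_position_response(T, H, G, pref=(5.0, 10.0), sigma=8.0):
    """Hypothetical firing rate of a 'relative position' neuron:
    Gaussian tuning to target-re-gaze (T - G) and target-re-hand
    (T - H), so shifting T, H and G together leaves the rate unchanged."""
    tg, th = T - G, T - H
    d2 = (tg - pref[0]) ** 2 + (th - pref[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Translating the whole scene does not change the response...
print(relative_position_response(T=20.0, H=10.0, G=15.0))
print(relative_position_response(T=30.0, H=20.0, G=25.0))  # same value
# ...but moving the hand alone does.
print(relative_position_response(T=20.0, H=5.0, G=15.0))
```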

 

Recent psychophysical experiments in healthy and brain-damaged humans have supported the notion that the reach vector is calculated either in gaze-centered coordinates (), or in a mix of gaze and somatosensory coordinates (Khan). Moreover, TMS experiments have confirmed that human PPC (the angular gyrus and mIPS) is involved in the comparison of hand and goal position required to calculate the reach vector (Vesia).

 

Once the desired eye rotation or hand translation is calculated in some frame (any frame), it might be tempting to assume that this vector could be used interchangeably in any other frame without any further comparisons with eye and head position. However, this mistaken intuition is based on the math of frames that translate with respect to one another. In frames that rotate relative to one another (e.g., eye, head, and body frames), the same rotation or displacement vector requires different representations in different frames. This is the subject of the next section.
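
As a numerical preview of that point, a minimal sketch (Python; the 30-degree eye-in-head angle and all variable names are ours): under pure translation between frames, a displacement vector keeps the same components, but when one frame is rotated relative to the other, the same physical displacement takes different coordinates, so a gaze-centered movement vector cannot be reused in head or body coordinates without knowing eye orientation.

```python
import numpy as np

def rot2d(angle_deg):
    """2-D rotation matrix for the given angle."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

displacement_head = np.array([1.0, 0.0])  # movement vector in head coordinates

# Translated frame (e.g., a frame shifted with the hand): components unchanged.
displacement_translated = displacement_head.copy()

# Rotated frame (e.g., eye rotated 30 degrees in the head): components change.
eye_in_head_deg = 30.0
displacement_eye = rot2d(eye_in_head_deg).T @ displacement_head

print(displacement_translated)  # [1. 0.]
print(displacement_eye)         # [0.866 -0.5]: same vector, different numbers
```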