### Difference of covariances method

#### by exactnature

Neural responses in higher cortical areas seem rather elusive. Three observations about these responses are largely responsible for this seeming elusiveness (I will focus on the first two in this post):

i) they display *mixed selectivity*: they respond to a combination of task variables, such as the identity of the stimulus presented on a given trial, the decision made on that trial, the context or rule in effect on that trial, etc., rather than to individual task variables alone;

ii) they are very *heterogeneous*: the particular combination of task variables one neuron responds to can be wildly different from the combination the next neuron responds to;

iii) they can be responsive to task-irrelevant variables, such as the elapsed time during the delay period of a working memory task.

However, this apparent complexity may be misleading. A simple model where the responses of individual neurons are just random, linear mixtures of a small number of underlying response components that themselves do not display mixed selectivity or heterogeneity may be able to explain the apparent complexity of responses at the single neuron level. Indeed, it turns out that this simple model does a pretty good job of explaining the responses of individual neurons, at least for neurons in prefrontal cortex during the delay period of working memory tasks (see, e.g., Machens, Romo & Brody, 2010). This implies that population dynamics may be low-dimensional in these situations.

Let’s be a little more concrete. Suppose we suspect that there are essentially two task variables neurons respond to, $s$ and $d$ (for stimulus and decision respectively, although they can be anything). The claim is then that the responses of neurons can be represented as follows (Machens, 2010):

$$\mathbf{r}(s,d) = \mathbf{a}\, f(s) + \mathbf{b}\, g(d) + \boldsymbol{\epsilon}$$

where $\mathbf{r}(s,d)$ is the vector of neural responses, $f(s)$ and $g(d)$ are the underlying response components (same for all neurons, hence no heterogeneity) that depend only on $s$ and $d$ respectively (hence no mixing), and $\mathbf{a}$ and $\mathbf{b}$ are vectors of mixing coefficients for the neurons. $\boldsymbol{\epsilon}$ is a noise vector that I will ignore for the rest of this post. As mentioned before, it is assumed that the mixing coefficients of different neurons are random and independent of each other, so that $\mathbf{a}$ and $\mathbf{b}$ are orthogonal on average, i.e. $\langle \mathbf{a}^\top \mathbf{b} \rangle = 0$ (the angular brackets denote an average). Note that if this model gives an accurate description of neural responses, it means that although there are $N$ neurons, trajectories of neural responses actually live in a two-dimensional subspace (spanned by $\mathbf{a}$ and $\mathbf{b}$) rather than in the full $N$-dimensional space. This is what we mean by ‘low-dimensional population dynamics.’ Of course, a sufficiently large noise term might push the trajectories out of this two-dimensional subspace, but I am ignoring the noise term here.
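As a sanity check on the dimensionality claim, here is a minimal numpy sketch of this generative model (the post links Matlab code; the particular component shapes $f(s) = s$ and $g(d) = d$, the variable values, and the population size are hypothetical choices of mine, borrowed from the toy example later in the post):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                           # number of neurons
s_vals = np.linspace(0, 25, 5)    # hypothetical stimulus values
d_vals = np.array([-10.0, 10.0])  # hypothetical decision values

# unmixed response components: f(s) = s and g(d) = d
f = s_vals
g = d_vals

# random, independent mixing coefficients, one pair (a_n, b_n) per neuron
a = rng.standard_normal(N)
b = rng.standard_normal(N)

# r[:, i, j] = a * f(s_i) + b * g(d_j); the noise term is ignored
r = a[:, None, None] * f[None, :, None] + b[:, None, None] * g[None, None, :]

# every response vector lies in span{a, b}, so the data matrix has rank 2:
# N-dimensional responses, but two-dimensional population dynamics
print(np.linalg.matrix_rank(r.reshape(N, -1)))  # 2
```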

I should note here that population dynamics in higher cortical areas does not always seem to be low-dimensional. There is some evidence for high-dimensional dynamics in similar tasks as well (e.g. Harvey, Coen & Tank, 2012; Pastalkova, Itskov, Amarasingham & Buzsaki, 2008; Baeg, Kim, Huh, Mook-Jung, Kim & Jung, 2003). I will hopefully write about possible causes and consequences of this discrepancy in a later post.

Now, returning to our equation above, $\mathbf{r}(s,d)$ is observable. We can record the responses of neurons during the delay period and sort the responses according to the time into the delay period, the stimulus presented prior to the delay, and the decision made after the delay. Then the question is: given only $\mathbf{r}(s,d)$, how can we recover the underlying unmixed response components $f(s)$ and $g(d)$, and the mixing coefficients $\mathbf{a}$ and $\mathbf{b}$? A simple suggestion is to use principal component analysis (PCA). Unfortunately, PCA does not work in this case: although it would recover the two-dimensional subspace spanned by $\mathbf{a}$ and $\mathbf{b}$, in general it would not recover $\mathbf{a}$ and $\mathbf{b}$ themselves. Instead, PCA finds the orthogonal directions $\mathbf{e}_1$ and $\mathbf{e}_2$ along which variation in $\mathbf{r}$ is maximal. Mathematically, $\mathbf{e}_1$ and $\mathbf{e}_2$ are the eigenvectors of the covariance matrix $\mathbf{C}$ with the largest eigenvalues. The covariance matrix is given by:

$$\mathbf{C} = \big\langle \big(\mathbf{r}(s,d) - \bar{\mathbf{r}}\big)\big(\mathbf{r}(s,d) - \bar{\mathbf{r}}\big)^\top \big\rangle_{s,d}$$

where $\bar{\mathbf{r}} = \langle \mathbf{r}(s,d) \rangle_{s,d}$ is the mean response of the neurons averaged over $s$ and $d$. The eigenvector basis maximizes $\mathbf{e}_1^\top \mathbf{C}\, \mathbf{e}_1 + \mathbf{e}_2^\top \mathbf{C}\, \mathbf{e}_2$, which is the total variance of the responses in the subspace. However, there is no reason to expect the directions of maximum total variance to coincide neatly with the directions of maximum variance due to changes in $s$ alone or changes in $d$ alone. To illustrate this, consider the following simple example:

In this example, there are two neurons whose responses are represented by $r_1$ and $r_2$. We assume that $s$ and $d$ are binary variables that can take the values 0 and 1. For $(s,d) = (0,0)$, the responses of the neurons are shown by the point A; for $(s,d) = (1,0)$, by point B; for $(s,d) = (0,1)$, by point C; and for $(s,d) = (1,1)$, by point D. Thus, intuitively, most of the response variation along the y-axis is caused by changing $d$ and most of the variation along the x-axis is caused by changing $s$.

The first and the second eigenvectors of the covariance matrix of these four points are represented by $\mathbf{e}_1$ and $\mathbf{e}_2$ respectively. When projected onto this eigenvector basis, the total variance of the data points is maximized, irrespective of whether the variance is caused by changes in $s$ or by changes in $d$. However, the two orthogonal directions that maximize the variance due to changes in $s$ alone and the variance due to changes in $d$ alone are $\mathbf{e}_x$ and $\mathbf{e}_y$ (the unit vectors along the x- and y-axes respectively). So, the two bases differ.
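Here is a numerical version of this four-point example. The coordinates are a hypothetical choice of mine: I take mixing vectors $\mathbf{a} = (2, 0.5)^\top$ and $\mathbf{b} = (0.5, 2)^\top$, so that changing $s$ mostly moves the points along the x-axis and changing $d$ mostly along the y-axis:

```python
import numpy as np

# r(s, d) = a*s + b*d for binary s and d; the four points A, B, C, D
a = np.array([2.0, 0.5])
b = np.array([0.5, 2.0])
points = np.array([a * s + b * d for d in (0, 1) for s in (0, 1)])

# PCA: eigenvectors of the covariance matrix of the four points
X = points - points.mean(axis=0)
C = X.T @ X / len(points)
evals, evecs = np.linalg.eigh(C)   # eigenvalues in ascending order
e1 = evecs[:, -1]                  # first principal direction
e1 = e1 * np.sign(e1[0])           # fix the sign for display
print(np.round(e1, 3))             # [0.707 0.707]: tilted 45 degrees away
                                   # from both the x- and the y-axis
```

The first principal direction splits the difference between the $s$- and $d$-induced variation instead of isolating either one.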

In order to find the direction that maximizes the variance due to changes in $s$ alone, Machens (2010) suggests that we maximize $\mathbf{e}_1^\top \mathbf{C}_s\, \mathbf{e}_1$, where $\mathbf{C}_s$ is the covariance of responses due to changes in $s$ alone:

$$\mathbf{C}_s = \big\langle \big(\bar{\mathbf{r}}(s) - \bar{\mathbf{r}}\big)\big(\bar{\mathbf{r}}(s) - \bar{\mathbf{r}}\big)^\top \big\rangle_{s}$$

where $\bar{\mathbf{r}}(s) = \langle \mathbf{r}(s,d) \rangle_d$ is the vector of mean responses marginalized over $d$. Similarly, to find the direction that maximizes the variance due to changes in $d$ alone, we maximize $\mathbf{e}_2^\top \mathbf{C}_d\, \mathbf{e}_2$, where $\mathbf{C}_d$ is the covariance of responses due to changes in $d$ alone:

$$\mathbf{C}_d = \big\langle \big(\bar{\mathbf{r}}(d) - \bar{\mathbf{r}}\big)\big(\bar{\mathbf{r}}(d) - \bar{\mathbf{r}}\big)^\top \big\rangle_{d}$$

where $\bar{\mathbf{r}}(d) = \langle \mathbf{r}(s,d) \rangle_s$. Combining the two into a single objective, we can maximize $\mathbf{e}_1^\top \mathbf{C}_s\, \mathbf{e}_1 + \mathbf{e}_2^\top \mathbf{C}_d\, \mathbf{e}_2$ with the additional constraint that $\mathbf{e}_1$ and $\mathbf{e}_2$ be orthogonal to each other. It turns out that this optimization problem is equivalent to finding the eigenvectors of the difference matrix $\mathbf{C}_s - \mathbf{C}_d$ (hence the name *difference of covariances*), so it can be solved very efficiently (see the Appendix in Machens, 2010). Applied to our problem above, it can be shown that this method returns $\mathbf{e}_x$ and $\mathbf{e}_y$ as the solutions for $\mathbf{e}_1$ and $\mathbf{e}_2$ respectively. Hence, unlike PCA, it recovers the correct mixing directions in our problem.
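Here is a sketch of this computation on the four-point example above, again with my hypothetical coordinates $\mathbf{a} = (2, 0.5)^\top$ and $\mathbf{b} = (0.5, 2)^\top$ for the mixing vectors:

```python
import numpy as np

# r(s, d) = a*s + b*d for binary s and d (hypothetical coordinates)
a, b = np.array([2.0, 0.5]), np.array([0.5, 2.0])
r = np.array([[a * s + b * d for d in (0, 1)] for s in (0, 1)])  # shape (2, 2, 2)

r_bar = r.mean(axis=(0, 1))          # grand mean over s and d
r_s = r.mean(axis=1) - r_bar         # mean responses marginalized over d
r_d = r.mean(axis=0) - r_bar         # mean responses marginalized over s
C_s = r_s.T @ r_s / len(r_s)         # covariance due to changes in s alone
C_d = r_d.T @ r_d / len(r_d)         # covariance due to changes in d alone

# eigenvectors of the difference matrix C_s - C_d: the most positive
# eigenvalue gives the s-direction, the most negative the d-direction
evals, evecs = np.linalg.eigh(C_s - C_d)
e1, e2 = np.abs(evecs[:, -1]), np.abs(evecs[:, 0])
print(e1, e2)                        # [1. 0.] [0. 1.]: the x- and y-axes
```

Unlike the tilted PCA basis, the difference-of-covariances basis comes out exactly axis-aligned for these points.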

Below, I applied this method to a toy problem with 100 neurons whose responses were simulated according to:

$$\mathbf{r}(s,d) = \mathbf{a}\, f(s) + \mathbf{b}\, g(d)$$

where $s$ took five possible values from 0 to 25 and $d$ took two possible values, either $-10$ or $10$ (Matlab code available here). The mixing coefficients were all randomly drawn from a standard normal distribution. Note that in this example, $f(s) = s$ and $g(d) = d$. The following is a picture of the time-course of the responses of three selected neurons for all combinations of $s$ and $d$ values. Trajectories live in a two-dimensional subspace spanned by $\mathbf{a}$ and $\mathbf{b}$.
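The original Matlab simulation is linked above; here is a hypothetical Python/numpy re-implementation of the same setup, checking that the difference-of-covariances eigenvectors align with the true mixing vectors $\mathbf{a}$ and $\mathbf{b}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
s_vals = np.linspace(0, 25, 5)       # five values of s from 0 to 25
d_vals = np.array([-10.0, 10.0])     # two values of d

# r(s, d) = a*f(s) + b*g(d) with f(s) = s and g(d) = d
a, b = rng.standard_normal(N), rng.standard_normal(N)
r = (a[:, None, None] * s_vals[None, :, None]
     + b[:, None, None] * d_vals[None, None, :])

r_bar = r.mean(axis=(1, 2))
r_s = r.mean(axis=2) - r_bar[:, None]   # marginalize over d, then center
r_d = r.mean(axis=1) - r_bar[:, None]   # marginalize over s, then center
C_s = r_s @ r_s.T / r_s.shape[1]
C_d = r_d @ r_d.T / r_d.shape[1]

# difference of covariances: top eigenvector ~ a-direction,
# bottom eigenvector ~ b-direction
evals, evecs = np.linalg.eigh(C_s - C_d)
e1, e2 = evecs[:, -1], evecs[:, 0]

cos_a = abs(e1 @ a) / np.linalg.norm(a)  # e1, e2 are unit norm
cos_b = abs(e2 @ b) / np.linalg.norm(b)
print(round(cos_a, 3), round(cos_b, 3))  # both close to 1: near-perfect recovery
```

Projecting the population responses onto $\mathbf{e}_1$ and $\mathbf{e}_2$ then yields estimates of $f(s)$ and $g(d)$ up to scale and sign.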

The following figure shows one of the underlying response functions, $f(s)$, as returned by PCA and by the difference of covariances method. The difference of covariances method recovers the correct unmixed response function nearly perfectly, whereas the function returned by PCA still mixes $s$-dependence with $d$-dependence.

The results for the other response function, $g(d)$, are shown below. Again, the difference of covariances method recovers the correct unmixed function nearly perfectly, whereas the function returned by PCA still mixes $s$-dependence with $d$-dependence. In general, PCA and the difference of covariances method return different solutions when there are correlations between $s$ and $d$ or when the variance generated by changes in $s$ is similar to the variance generated by changes in $d$.
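One thing the two methods do agree on in the 100-neuron toy problem is the subspace: the top two PCA eigenvectors and the two difference-of-covariances eigenvectors span the same plane (the span of $\mathbf{a}$ and $\mathbf{b}$); they differ only in which basis of that plane they pick. A sketch of this check, using the same hypothetical simulation parameters as above:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
s_vals = np.linspace(0, 25, 5)
d_vals = np.array([-10.0, 10.0])
a, b = rng.standard_normal(N), rng.standard_normal(N)
r = (a[:, None, None] * s_vals[None, :, None]
     + b[:, None, None] * d_vals[None, None, :])

# PCA on all (s, d) conditions
R = r.reshape(N, -1)
Rc = R - R.mean(axis=1, keepdims=True)
pc = np.linalg.eigh(Rc @ Rc.T / R.shape[1])[1][:, -2:]   # top two PCs

# difference of covariances
r_bar = r.mean(axis=(1, 2))[:, None]
r_s, r_d = r.mean(axis=2) - r_bar, r.mean(axis=1) - r_bar
w = np.linalg.eigh(r_s @ r_s.T / 5 - r_d @ r_d.T / 2)[1]
dc = w[:, [0, -1]]                   # bottom and top eigenvectors

# both bases project onto the same two-dimensional subspace...
P_pca, P_dc = pc @ pc.T, dc @ dc.T
print(np.allclose(P_pca, P_dc))      # True

# ...but only the difference-of-covariances basis aligns with a
print(round(abs(dc[:, -1] @ a) / np.linalg.norm(a), 3))  # close to 1
```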

I should finally note that there is a more recent, and apparently better, version of this method that works for nonlinearly mixed components (not just for linearly mixed components) and also provides a probabilistic interpretation of the method: see this paper by Brendel, Romo & Machens (2011).

An interesting question to think about is why there is mixing in neural responses in the first place. It has recently been suggested that mixed responses are useful for flexible, context-dependent computations (e.g. Rigotti, Rubin, Wang & Fusi, 2010; Rigotti, Barak, Warden, Wang, Miller & Fusi, 2013; Mante, Sussillo, Shenoy & Newsome, 2013). However, mixed responses are not always desirable. I currently do a little bit of work on this myself: for example, mixing responses to multiple stimuli turns out to be a really bad idea if the task is to estimate those stimuli simultaneously. I am hoping to write more about this in the next couple of months. An interesting question is to determine, as precisely as one can hope, the types of tasks for which mixed responses are useful versus those for which they are not.