The recent article by Buracas and Albright1 provides an excellent review of progress concerning the neural representation of sensory stimuli. The authors clearly outline the potential pitfalls of the 'direct'2,3 and 'reconstruction'4,5,6 methods for estimating mutual information (namely, the need to estimate a probability distribution of spike trains, considered as vectors in what could be a very large space), but the naive reader might be left to believe that there are no good alternatives.
However, such alternatives do exist. We recently developed7 an approach to the analysis of information in spike trains that encompasses dynamics but requires neither a vector-space embedding nor the estimation of a multivariate probability distribution. This method has been applied in mammalian visual cortex by us8,9,10, and in the insect olfactory system by Laurent and colleagues11.
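To make the contrast with vector-space methods concrete, the following is a minimal sketch of a cost-based (edit-distance) spike-train metric of the kind that underlies such metric-space analyses: two spike trains are compared by the minimum total cost of transforming one into the other, where deleting or inserting a spike costs 1 and shifting a spike in time by an amount dt costs q*|dt|. The function and variable names are illustrative, not taken from the cited work.

```python
def spike_distance(a, b, q):
    """Minimum-cost edit distance between spike-time lists a and b.

    The cost parameter q (in 1/time units) sets temporal precision:
    q = 0 reduces the metric to a comparison of spike counts, while
    large q penalizes any timing mismatch, so spikes are effectively
    matched only if they nearly coincide.
    """
    m, n = len(a), len(b)
    # g[i][j]: distance between the first i spikes of a and first j of b
    g = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        g[i][0] = float(i)              # delete all i spikes of a
    for j in range(1, n + 1):
        g[0][j] = float(j)              # insert all j spikes of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            g[i][j] = min(
                g[i - 1][j] + 1.0,      # delete spike a[i-1]
                g[i][j - 1] + 1.0,      # insert spike b[j-1]
                # shift a[i-1] onto b[j-1] at cost q per unit time
                g[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]),
            )
    return g[m][n]
```

For example, with q = 10 per second, the trains [0.1, 0.2] and [0.1, 0.3] (times in seconds) are at distance 1.0: the cheapest transformation shifts the second spike by 0.1 s rather than deleting and re-inserting it (cost 2). Because no vector-space embedding is assumed, the same computation applies regardless of how many spikes each train contains.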
Any method for estimation of information in spike trains requires making assumptions about the nature of the code. The accuracy of the information estimate depends on both the correctness and the breadth of the assumptions. Fewer assumptions about the nature of the neural code are made by the metric-space methods than by vector-space methods (such as the 'direct' and 'reconstruction' methods). However, this broader scope necessarily incurs a penalty in terms of the amount of mutual information that can be identified. A rough estimate of this penalty can be obtained by applying both kinds of method to model data for which the vector-space assumptions are known to be appropriate. This comparison indicates a penalty of about 25% paid by the metric-space methods7 in return for their broader scope. On the other hand, assumptions concerning the nature of the code that are overly narrow could also lead to underestimates of information, since the aspects of the spike train that carry the information might be overlooked.
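As a hedged sketch of how such comparisons can be quantified: in a metric-space analysis, responses are typically classified by their pairwise distances into stimulus-labeled clusters, a stimulus-response confusion matrix is tabulated, and the transmitted information is read off that matrix. The function below computes transmitted information (in bits) from a confusion matrix; it illustrates only the final, standard information calculation, and the names are illustrative rather than drawn from the cited papers.

```python
import math

def transmitted_information(confusion):
    """Transmitted information (bits) from a confusion matrix.

    confusion[s][c] = number of responses to stimulus s that the
    distance-based classifier assigned to stimulus class c.
    """
    total = sum(sum(row) for row in confusion)
    row_sums = [sum(row) for row in confusion]            # per-stimulus counts
    col_sums = [sum(confusion[i][j] for i in range(len(confusion)))
                for j in range(len(confusion[0]))]        # per-class counts
    info = 0.0
    for i, row in enumerate(confusion):
        for j, n in enumerate(row):
            if n:  # empty cells contribute nothing
                info += (n / total) * math.log2(
                    n * total / (row_sums[i] * col_sums[j]))
    return info
```

A perfectly classified two-stimulus experiment, confusion matrix [[2, 0], [0, 2]], yields 1 bit; a classifier at chance, [[1, 1], [1, 1]], yields 0 bits. Running such a calculation on the same model data with both metric-space and vector-space estimators is one way the roughly 25% penalty mentioned above could be assessed.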
As reviewed by Buracas and Albright1, even the most efficient neuronal representations are only about 50% efficient, as determined by the 'reconstruction' and 'direct' methods. Residual inefficiency is generally attributed to neuronal noise. However, some of this 'inefficiency' may be only apparent, representing instead a contribution from forms of coding beyond the scope of the analysis method. Moreover, theoretical considerations have raised the possibility that the structure of central neuronal representations has the abstract properties of a metric space, rather than a vector space12,13. For these reasons, as information-theoretic analyses are pursued in a wider variety of settings, the neurophysiologist should be prepared to use a wide range of techniques for information analysis, with both narrow and broad prior assumptions.