ANALYZING NEURAL CODING: JOINTLY TYPICAL
SEQUENCES AND QUANTIZATION
Alexander G. Dimitrov and John P. Miller
Center for Computational Biology
Montana State University
The nature and information content of neural
signals have been discussed extensively
in the neuroscience community. They are an important ingredient of many
theories on neural function, yet there is still no agreement on the details of
neural coding. There have been various
suggestions about how information is encoded in neural spike trains: by the number of spikes, by temporal correlations, through single spikes, or by
spike patterns in one, or across many neurons. The latter scheme is most
general, and encompasses many others.
We shall describe our progress in modeling this scheme through jointly typical sequences.
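For reference, the standard definition of joint typicality (in the sense of Cover and Thomas) is the following: for random variables $(X,Y)$ with joint distribution $p(x,y)$, the jointly typical set is

\[
A_\epsilon^{(n)} = \left\{ (x^n, y^n) :\ \left| -\tfrac{1}{n}\log p(x^n) - H(X) \right| < \epsilon,\ \left| -\tfrac{1}{n}\log p(y^n) - H(Y) \right| < \epsilon,\ \left| -\tfrac{1}{n}\log p(x^n, y^n) - H(X,Y) \right| < \epsilon \right\},
\]

so that, for large $n$, stimulus-response pairs drawn from the true joint distribution fall in $A_\epsilon^{(n)}$ with probability approaching one.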
The search for pattern codes, however, requires exponentially more data than the search for mean rate or correlation codes. We shall also describe a method that enables optimal use of limited quantities of data, through quantization
to a reproduction set of small finite
size. To assess the quality of the quantization we use an information-based distortion measure. The quantization is optimized to have minimal distortion for a fixed size of the reproduction set. This method allows us to study coarse models of coding schemes that can be refined as more data becomes available.
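To illustrate the kind of quantization involved, the Python sketch below takes an estimated joint stimulus-response table and searches for a hard assignment of responses to a small reproduction set that minimizes the information distortion D_I = I(X;Y) - I(X;Z). This is a minimal illustration under assumed choices, not the specific optimization procedure of the paper: the function names and the random-restart coordinate descent are illustrative.

import numpy as np


def mutual_information(pxy):
    # I(X;Y) in bits for a joint probability table pxy (rows: stimuli x, columns: responses y).
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(pxy > 0, pxy / (px * py), 1.0)
    return float(np.sum(pxy * np.log2(ratio)))


def information_distortion_quantizer(pxy, n_classes, n_restarts=20, seed=0):
    # Hard-quantize the response variable (columns of pxy) into n_classes
    # reproduction classes so that I(X;Z) is maximized, i.e. the information
    # distortion D_I = I(X;Y) - I(X;Z) is (locally) minimized.
    rng = np.random.default_rng(seed)
    n_y = pxy.shape[1]

    def joint_xz(assign):
        # Joint table of stimuli and reproduction classes induced by the assignment.
        return np.stack([pxy[:, assign == z].sum(axis=1) for z in range(n_classes)], axis=1)

    best_assign, best_info = None, -np.inf
    for _ in range(n_restarts):
        assign = rng.integers(n_classes, size=n_y)
        improved = True
        while improved:  # coordinate descent over single-response reassignments
            improved = False
            for y in range(n_y):
                old = assign[y]
                scores = []
                for z in range(n_classes):
                    assign[y] = z
                    scores.append(mutual_information(joint_xz(assign)))
                assign[y] = int(np.argmax(scores))
                improved |= assign[y] != old
        info = mutual_information(joint_xz(assign))
        if info > best_info:
            best_info, best_assign = info, assign.copy()

    return best_assign, mutual_information(pxy) - best_info


# Example: 4 stimulus classes, 12 distinct response patterns, reproduction size 3.
counts = np.random.default_rng(1).integers(1, 20, size=(4, 12))
pxy = counts / counts.sum()
assign, d_i = information_distortion_quantizer(pxy, n_classes=3)
print("reproduction class of each response:", assign)
print("information distortion D_I (bits):", round(d_i, 3))

Because the reproduction set is small, the joint table p(x,z) can be estimated reliably from limited data; refining the model then amounts to increasing n_classes as more data becomes available.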