William B. Levy
Department of Neurological Surgery and Psychology
University of Virginia, Charlottesville, VA
Compressed file size is one way of measuring information.
Using an innovation advocated by Bialek and colleagues, spike trains can
be turned into a sequence of zeros and ones.
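For concreteness, here is a minimal sketch of that discretization; the bin width, the toy spike times, and the use of zlib as a stand-in compressor are illustrative assumptions rather than the choices discussed in the talk.

    import zlib
    import numpy as np

    def binarize(spike_times_s, duration_s, dt_s=0.003):
        # One bin per dt_s seconds; a bin is 1 if at least one spike fell in it.
        n_bins = int(np.ceil(duration_s / dt_s))
        bits = np.zeros(n_bins, dtype=np.uint8)
        idx = (np.asarray(spike_times_s) / dt_s).astype(int)
        bits[idx[idx < n_bins]] = 1
        return bits

    # Toy spike train (in seconds); real analyses would use recorded data.
    spikes = [0.011, 0.014, 0.052, 0.053, 0.120, 0.121, 0.122]
    bits = binarize(spikes, duration_s=0.2)

    # Compressed size of the binarized train as a first-pass information proxy.
    compressed_len = len(zlib.compress(bytes(bits), 9))
    print(bits.tolist(), compressed_len)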
Based on Ziv-Lempel theory, Lempel-Ziv-like algorithms can automatically discover the words of such binary strings, and the resulting compressed file size might be used to quantify the information in the spike train. However, the way forward is not at all
simple. Analogous to a relative
entropy (or equivalently a Kullback’s directed divergence), it is possible to
create a relative file compression scheme.
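One way to read that idea, offered here only as a sketch (the surrogate construction and the LZ78-style word count are illustrative assumptions, not necessarily the scheme the talk develops), is to parse the binarized train into words and compare the result against spike-count-matched shuffles of the same train.

    import numpy as np

    def lz78_word_count(bits):
        # Ziv-Lempel incremental parse: each word is the shortest phrase not seen before.
        words, current = set(), ""
        for b in bits:
            current += str(int(b))
            if current not in words:
                words.add(current)
                current = ""
        return len(words) + (1 if current else 0)

    def relative_word_count(bits, n_surrogates=200, seed=0):
        # Shuffling preserves the spike count but destroys specific patterning,
        # so the gap between the two counts reflects pattern-specific structure.
        rng = np.random.default_rng(seed)
        c_real = lz78_word_count(bits)
        c_shuffled = np.mean([lz78_word_count(rng.permutation(bits))
                              for _ in range(n_surrogates)])
        return c_real, float(c_shuffled)

Applied to the bits array from the sketch above, the difference between the real and shuffled word counts acts as a crude, directed-divergence-like index of relative compression.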
Such relative compression should allow separation of information due to
specific patterning from information that is just in the spike count. Such relative compression may also avoid the
problem of infinite information when bin width goes to zero. The theorems that justify compressed file
size as an information measure are asymptotic in nature and include an error
term. This implies that detailed tuning
of the specific algorithm used for compression will be important because we
want fast convergence with a very small error term. The talk will point out some of the algorithmic details that may
help speed convergence and reduce the error term. Such details will distinguish any technically useful tool from
simplified algorithms such as the Unix command compress.
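For orientation, the asymptotic results referred to above have, schematically, the following flavor for a stationary ergodic binary source (this is the textbook Ziv-Lempel statement, without the finite-sample error term that the talk is concerned with):

    \lim_{n \to \infty} \frac{\ell_{\mathrm{LZ}}(X_1, \ldots, X_n)}{n} = H(\mathcal{X}) \quad \text{almost surely,}

where \ell_{\mathrm{LZ}} is the length in bits of the Lempel-Ziv codeword and H(\mathcal{X}) is the entropy rate of the source. At any finite n the per-symbol code length exceeds the entropy rate by a term that vanishes only slowly, which is why the choice and tuning of the compressor matter in practice.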