Unsolved mathematical problems
Here are some unsolved mathematical problems with potential impact for neuroscience. Not all are tightly posed.
Any feedback, not limited to brilliant ideas, is always appreciated and will be gratefully acknowledged.
spike metrics
multidimensional scaling of symmetric non-Euclidean spaces
nonlinear dynamics
isodipole textures
point processes
Last revised: 4/2/11
Spike metrics
brief background
key reference
further reading
- Develop a computationally efficient algorithm for Dmotif. Maybe this is a well-known problem in dynamic programming algorithms.
- Develop a computationally efficient algorithm for combinations of Dspike, Dinterval, and Dmotif. Maybe this is also a well-known problem in dynamic programming algorithms.
Progress (8/2002) for combinations of Dspike and Dinterval: Perhaps any optimal path from spike train A to spike train B can be reorganized as an equal-cost path from spike train A to spike train X, and then from X to B, in which the A-to-X segment only uses the transformations of Dspike, and the X-to-B segment only uses the transformations of Dinterval. The task would then be to find a DP algorithm that finds X. Unfortunately, such reorganizations cannot be guaranteed. This idea makes use of the "algebraic" properties of the transformations. For general inspiration, see the algebraic dynamic programming work of Robert Giegerich.
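For reference, Dspike by itself is handled by a standard edit-distance dynamic program (cost 1 to insert or delete a spike, q per unit of spike-time shift); the difficulty above is in extending this to Dmotif and to mixed metrics. A minimal Python sketch of the known recursion follows; the function name, q, and the example spike times are illustrative.

    # Minimal sketch of the standard dynamic-programming recursion for Dspike[q]
    # (cost 1 to insert or delete a spike, q*|dt| to shift one).
    def d_spike(a, b, q):
        na, nb = len(a), len(b)
        # cost[i][j] = distance between the first i spikes of a and the first j of b
        cost = [[0.0] * (nb + 1) for _ in range(na + 1)]
        for i in range(1, na + 1):
            cost[i][0] = float(i)                  # delete all i spikes
        for j in range(1, nb + 1):
            cost[0][j] = float(j)                  # insert all j spikes
        for i in range(1, na + 1):
            for j in range(1, nb + 1):
                cost[i][j] = min(cost[i - 1][j] + 1,                                 # delete a[i-1]
                                 cost[i][j - 1] + 1,                                 # insert b[j-1]
                                 cost[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift a spike
        return cost[na][nb]

    print(d_spike([0.1, 0.5, 1.2], [0.12, 0.6, 1.4], q=1.0))

This runs in time proportional to the product of the spike counts; the open problems ask for something comparably efficient for Dmotif and for mixtures.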
- Develop a taxonomy of spike metrics that is "complete" in some biologically reasonable sense. The topological hierarchy is interesting, but even topologically equivalent spike metrics must be distinguished.
- Understand the relationship of the spike metrics to Euclidean distances.
One line of attack
Consider a monotonic transformation F(x) applied to the metrics. For F(x) = x^p (0 < p < 1), F(D(.,.)) is also a metric, with the same topology, Voronoi neighborhoods, and rank order as D(.,.).
I had hypothesized that for p = 1/2, one can always achieve an embedding with Euclidean geometry. (It is easy to show that p <= 1/2 is required for large q, and that p = 1 suffices for q = 0.)
p = 1/2 also seemed plausible because a lattice of points in the Minkowski city block space -- which appears to be related to the spike time metric -- can be embedded in a Euclidean space with F(x) = x^(1/2), in a manner that preserves the metric. (For a k-dimensional lattice of N_1 x N_2 x ... x N_k points, the dimension of the Euclidean space need be no larger than N_1 + N_2 + ... + N_k - k. My interest is that the embedding requires a uniform power-law transformation, independent of the number of points. This is not always possible for a non-Euclidean manifold. The more general question of when a non-Euclidean manifold can be embedded in a Euclidean space in a distance-preserving manner, up to a power-law transformation, is an interesting one -- see multidimensional scaling of symmetric non-Euclidean spaces below.)
But it is clear, due to the work of
Dmitriy Aronov,
that no exponent p>0
can guarantee a Euclidean embedding of a set of spike trains under the spike time metric.
See the paper.
Thus it would seem that no universal scale-free transformation of the distance suffices to embed
the space of the spike time distance into Euclidean space.
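One numerical way to probe such questions is the classical multidimensional-scaling (Schoenberg) criterion: a set of distances can be realized exactly in a Euclidean space if and only if the double-centered matrix of squared distances is positive semidefinite. The sketch below applies this to D^p; the toy point set, tolerance, and function name are illustrative.

    import numpy as np

    def euclidean_embeddable(D, p, tol=1e-9):
        """True if the distances D**p can be realized exactly in some Euclidean space."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        G = -0.5 * J @ (D ** (2 * p)) @ J            # Gram matrix of a candidate embedding
        return np.linalg.eigvalsh(G).min() >= -tol

    # Example: city-block distances among the corners of a unit square
    pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(-1).astype(float)
    print(euclidean_embeddable(D, p=1.0))   # False: these city-block distances are not Euclidean
    print(euclidean_embeddable(D, p=0.5))   # True: the square root embeds them (as a unit square)

For spike metrics, D would be the matrix of pairwise Dspike distances among a set of spike trains, and p would be scanned.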
Another line of attack
Consider two spike trains s1 and s2
generated by time-varying Poisson processes,
with rates r1(t) and r2(t). Can one estimate (perhaps, asymptotically
in the limit of high firing rates) Dspike[q](s1,s2), and then
relate this to the squared Euclidean distance between the rate functions
r1(t) and r2(t)? Intuitively, this relationship should approach
a bilinear function when rates are high, similar to smoothing of the
rate functions r1(t) and r2(t) by a kernel whose support is
[-1/q,1/q].
Is this true?
If such a bilinear function can be found, what are its eigenvalues and eigenvectors?
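A Monte Carlo probe of this conjecture is straightforward to set up. The sketch below compares the average Dspike[q] between independently drawn pairs of inhomogeneous Poisson trains with the squared L2 distance between their rate functions smoothed by a boxcar of width 2/q; the rates, q, number of trials, and the boxcar kernel are illustrative choices, and no particular form of the relationship is assumed.

    import numpy as np
    rng = np.random.default_rng(0)

    def d_spike(a, b, q):
        """Standard DP recursion for the Dspike metric."""
        na, nb = len(a), len(b)
        cost = np.zeros((na + 1, nb + 1))
        cost[:, 0] = np.arange(na + 1)
        cost[0, :] = np.arange(nb + 1)
        for i in range(1, na + 1):
            for j in range(1, nb + 1):
                cost[i, j] = min(cost[i - 1, j] + 1, cost[i, j - 1] + 1,
                                 cost[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
        return cost[na, nb]

    def poisson_train(rate_fn, T, rmax):
        """Inhomogeneous Poisson spike train on [0, T], generated by thinning."""
        t = rng.uniform(0, T, rng.poisson(rmax * T))
        return np.sort(t[rng.uniform(0, rmax, len(t)) < rate_fn(t) / rmax])

    T, q = 5.0, 2.0
    r1 = lambda t: 10 + 5 * np.sin(2 * np.pi * t / T)
    r2 = lambda t: 10 + 5 * np.cos(2 * np.pi * t / T)

    # average Dspike[q] over independently drawn pairs of trains
    d = np.mean([d_spike(poisson_train(r1, T, 20), poisson_train(r2, T, 20), q)
                 for _ in range(10)])

    # squared L2 distance between the rates, smoothed by a boxcar of width 2/q
    tt = np.linspace(0, T, 1000)
    dt = tt[1] - tt[0]
    box = np.ones(max(1, int(round((2 / q) / dt))))
    box /= box.size
    smooth = lambda r: np.convolve(r(tt), box, mode="same")
    l2sq = np.sum((smooth(r1) - smooth(r2)) ** 2) * dt
    print(d, l2sq)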
- Develop an esthetically acceptable extension of
the "direct" method for the calculation of information
(de Ruyter van Steveninck, R. R., Lewen, G.D., Strong, S.P., Koberle, R., & Bialek, W.
(1997) Reproducibility and variability in neural spike trains. Science 275, 1805-1808)
that encompasses the metric approach. The "direct" method
is based on comparison of the volumes that sets of spike trains
occupy in a vector space. Can quantities that serve as these volumes be estimated
if only metrical data are available?
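For orientation, here is a bare-bones version of the direct method (word counting only, with none of the bias corrections or extrapolations of the published procedure; bin size, word length, and the toy data are illustrative). The open item asks whether something playing the role of these word counts, i.e., of volumes, can be recovered from pairwise distances alone.

    import numpy as np
    from collections import Counter

    def entropy(words):
        """Plug-in entropy (bits) of a list of binary words."""
        counts = np.array(list(Counter(map(tuple, words)).values()), float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def direct_info(binned, L):
        """binned: trials x time array of 0/1 counts; L: word length in bins."""
        trials, T = binned.shape
        starts = range(0, T - L + 1, L)
        words = [binned[r, t:t + L] for r in range(trials) for t in starts]
        H_total = entropy(words)
        # noise entropy: entropy across trials at each time, averaged over time
        H_noise = np.mean([entropy([binned[r, t:t + L] for r in range(trials)])
                           for t in starts])
        return H_total - H_noise

    # toy example: 50 repeats of a 200-bin response with a time-locked hot spot
    rng = np.random.default_rng(1)
    p = 0.05 + 0.4 * (np.arange(200) % 50 < 5)          # time-varying spike probability
    binned = (rng.uniform(size=(50, 200)) < p).astype(int)
    print(direct_info(binned, L=8))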
Isodipole textures and related
brief background
review article
further reading
examples
- What are the scale-invariant limits of isodipole textures constructed from plane Markov processes other than the standard even and odd textures? The third-order triangle textures are a good place to start.
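To make the recursion rules concrete, here is a sketch of the usual constructions. The even texture fixes the parity of every 2x2 block of checks; the triangle texture is taken here to fix the parity of an L-shaped triple of checks within each 2x2 block. Conventions for the triangle textures vary, so treat the second rule as illustrative.

    import numpy as np
    rng = np.random.default_rng(0)

    def even_texture(n):
        """Seed the first row and column at random; every 2x2 block then has even parity."""
        a = rng.integers(0, 2, (n, n))
        for i in range(1, n):
            for j in range(1, n):
                a[i, j] = a[i - 1, j] ^ a[i, j - 1] ^ a[i - 1, j - 1]
        return a

    def triangle_texture(n):
        """Seed the first row and column at random; every L-triple {(i,j),(i-1,j),(i,j-1)} has even parity."""
        a = rng.integers(0, 2, (n, n))
        for i in range(1, n):
            for j in range(1, n):
                a[i, j] = a[i - 1, j] ^ a[i, j - 1]
        return a

    print(even_texture(8))
    print(triangle_texture(8))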
- What are the scale-invariant limits
of planar textures in general?
- What are the statistics of induced correlations (i.e., longer-range correlations forced by the defining recursion rule) of isodipole textures constructed from plane Markov processes other than the standard even, odd, and triangle textures? For 2x2 cliques and binary textures, these textures have been fully parameterized (F. Champagnat, J. Idier, and Y. Goussard, Stationary Markov random fields on a rectangular finite lattice, IEEE Trans. Inf. Theory 44, 2901-2916 (1998)), but this parameterization is awkward, and the general question remains open. See this paper for a psychophysical exploration of a 2-parameter family of maximum entropy textures contained within this parameterization.
- What are efficient ways to sample maximum-entropy distributions constrained by local correlations, when the local correlations are of order >= 3 and/or the Pickard rules are violated? Many special cases have been solved (Victor, J.D., Thengone, D., and Conte, M.M.: Analyzing local image statistics via maximum-entropy constructions, CoSyNe 2011 abstract), but the general problem is open.
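A generic, but typically slow, fallback is single-site Metropolis sampling from a Gibbs distribution whose exponent weights the constrained local statistic (in the +/-1 representation, exp(beta * sum of local products) is the maximum-entropy form for a single constrained local correlation). The sketch below does this for a third-order triangle statistic; the coupling beta would still need to be tuned to hit a target correlation, and doing all of this efficiently is exactly the open part.

    import numpy as np
    rng = np.random.default_rng(0)

    def local_energy(a, i, j):
        """Sum of the triple products a[x,y]*a[x+1,y]*a[x,y+1] over triples containing (i, j)."""
        n = a.shape[0]
        e = 0.0
        for di, dj in [(0, 0), (-1, 0), (0, -1)]:    # corner positions of triples touching (i, j)
            x, y = i + di, j + dj
            if 0 <= x < n - 1 and 0 <= y < n - 1:
                e += a[x, y] * a[x + 1, y] * a[x, y + 1]
        return e

    def metropolis(n=32, beta=0.5, sweeps=100):
        """Sample a +/-1 texture with weight exp(beta * sum of triple products)."""
        a = rng.choice([-1, 1], size=(n, n))
        for _ in range(sweeps):
            for i in range(n):
                for j in range(n):
                    s = local_energy(a, i, j)        # flipping a[i, j] changes the energy by -2*s
                    if rng.random() < np.exp(-2 * beta * s):
                        a[i, j] *= -1
        return a

    tex = metropolis()
    # empirical third-order statistic of the sample
    print(np.mean(tex[:-1, :-1] * tex[1:, :-1] * tex[:-1, 1:]))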
Multidimensional scaling of non-Euclidean spaces
It is well known that a Riemannian manifold can be embedded in a higher-dimensional Euclidean space in a topology-preserving manner. Here we ask whether it is possible to find an embedding of a Riemannian manifold that preserves, or almost preserves, distance. That is, the Euclidean distance between the embedded images z(a) and z(b) of two points a and b is required to be equal to some function of the geodesic distance d(a,b) between them, namely, |z(a)-z(b)| = F(d(a,b)). So that the transformation is scale-free (as discussed above), we require F(x) = x^p. We are interested in finding a power p that is independent of the number of points within the manifold to be embedded, and are not concerned with whether the number of dimensions is finite or infinite.
- City block distance: p=1/2 suffices, as described above.
- Other Minkowski spaces: not settled
- Lie groups, such as SO(n): demonstrated NOT possible for any p>0, for SO(3) and above
- The surface of an n-sphere: p=1/2 suffices up to the 5-sphere (the 2-sphere is the ordinary sphere), and p=1/2 appears to suffice for any n-sphere, on the basis of extensive numerical evidence; a proof will follow from a series that needs to be summed. These results follow from an application of the group representation theory for SO(n), and lead to some interesting series for pi and to relationships involving the Catalan numbers. Work in progress.
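The n-sphere claim can be probed numerically in the same spirit as the embedding test in the spike-metrics section (a sketch for the ordinary 2-sphere; the number of sample points is arbitrary). With p = 1/2, the candidate squared Euclidean distances are the great-circle distances themselves, so the double-centered matrix built from them should be positive semidefinite up to roundoff.

    import numpy as np
    rng = np.random.default_rng(0)

    m = 40
    x = rng.normal(size=(m, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)      # random points on the 2-sphere
    d = np.arccos(np.clip(x @ x.T, -1, 1))             # great-circle distances

    J = np.eye(m) - np.ones((m, m)) / m
    G = -0.5 * J @ d @ J                               # squared embedded distances = d**(2*(1/2)) = d
    print(np.linalg.eigvalsh(G).min())                 # should be >= 0 up to roundoff if p = 1/2 works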
Nonlinear autoregressive models
brief background
key reference
further reading
- What is a systematic way for determining whether a
model dynamical system (e.g., defined by a set of differential
equations) is consistent with the qualitative features of an NLAR
fingerprint?
More generally, what is the relationship between the NLAR fingerprint
and global dynamics?
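Purely to fix ideas (the NLAR fingerprint itself is defined in the references above, and this proxy is only a simplification): one crude surrogate is the fractional reduction in residual variance when a quadratic autoregressive term is added to a linear AR fit, examined as a function of lag. The toy series and the use of a single quadratic term are illustrative.

    import numpy as np
    rng = np.random.default_rng(0)

    def resid_var(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.var(y - X @ beta)

    def nonlinearity_index(x, lag):
        """Fractional variance reduction from adding a quadratic term at the given lag."""
        y, xl = x[lag:], x[:-lag]
        lin = np.column_stack([np.ones_like(xl), xl])
        quad = np.column_stack([lin, xl ** 2])
        return 1 - resid_var(quad, y) / resid_var(lin, y)

    # toy series with a quadratic dependence on its own past
    x = np.zeros(5000)
    for t in range(1, 5000):
        x[t] = 0.5 * x[t - 1] - 0.2 * x[t - 1] ** 2 + 0.25 * rng.normal()
    print([round(nonlinearity_index(x, lag), 3) for lag in (1, 2, 3)])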
- What is a sensible way (for multichannel data)
to merge principal components analysis and
NLAR fingerprints?
Empirically, in a multichannel EEG dataset,
it often appears that only one or two
principal components contain the bulk of the nonlinear contributions.
However, attempting to isolate the nonlinear contributions
in a single source via "independent components analysis" fails,
even for simple model systems, whenever the nonlinearity is strong
(consider one channel given by the square of another).
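The parenthetical example is easy to see numerically (a tiny sketch): a zero-mean channel and its square are uncorrelated, so no linear remixing, which is all that PCA or standard ICA can apply, will isolate the nonlinear contribution into its own component, even though the two channels are completely dependent.

    import numpy as np
    rng = np.random.default_rng(0)

    x = rng.normal(size=100000)          # channel 1
    y = x ** 2                           # channel 2: the square of channel 1
    print(np.corrcoef(x, y)[0, 1])       # ~0: no linear correlation to exploit
    print(np.corrcoef(x ** 2, y)[0, 1])  # 1: yet the channels are fully dependent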
Point processes
Consider: (i) The Poisson process is often taken to be a good starting place for modeling spike trains.
(ii) Spike trains from functionally related neurons are often positively or negatively correlated at short lags.
But is it possible to construct a pair of point processes, each of which is Poisson when viewed in isolation,
whose cross-correlation is negative at short lags? If it is impossible, can this be proven?
Note that it is easy to construct Poisson processes that are positively correlated at short lags. Two strategies:
- Create three point processes, X, Y, and Z. Let neuron A's spike train be the union of X and Y, and let neuron B's spike train be the union of Y and Z. If X, Y, and Z are all Poisson, then so are A and B. The spikes in Y lead to positive correlation at zero lag.
- (Thanks to Bruce Knight) Consider a point process in which the intervals are alternately and independently taken from a gamma process of order g1 and a gamma process of order g2, where g1 + g2 = 1. Assign the even-numbered spikes to neuron A and the odd-numbered spikes to neuron B. These two spike trains are Poisson (since the convolution of two gamma distributions is another gamma distribution, of order equal to the sum of the original orders). A and B are positively correlated for short lags.
But neither of these approaches readily generalizes to a procedure to create negative correlations.
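The first strategy is easy to check by simulation (a sketch; rates, duration, and bin width are arbitrary choices):

    import numpy as np
    rng = np.random.default_rng(0)

    T, dt = 1000.0, 0.005

    def poisson(rate):
        """Homogeneous Poisson spike times on [0, T]."""
        return np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

    X, Y, Z = poisson(10), poisson(5), poisson(10)
    A = np.sort(np.concatenate([X, Y]))    # Poisson with rate 15 (superposition)
    B = np.sort(np.concatenate([Y, Z]))    # Poisson with rate 15, sharing Y with A

    edges = np.arange(0, T + dt, dt)
    cA = np.histogram(A, edges)[0]
    cB = np.histogram(B, edges)[0]
    print(np.corrcoef(cA, cB)[0, 1])       # > 0: the shared Y spikes give positive zero-lag correlation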
Solution!
(3/2001) Daniel Fisher.
His solution (which is a generalization of the second strategy above) is given here:
The A-to-B point process has interval distribution
S(t) = y [ (a-1)^2 e^(-t) + ((g-1)^2 - (a-1)^2) e^(-gt) - (g-1)(g-a)^2 t e^(-gt) ], with y = (g/(a(g-1)))^2,
and the B-to-A point process has interval distribution
R(t) = z [ delta(t) + (2(g-a) + (g-a)^2 t) e^(-at) ], with z = (a/g)^2.
Take g > a > 1 with ln(g-a) - ln(a-1) < 1 + (a-1)/(g-a).
These conditions ensure that both R(t) and S(t) are positive and that the cross-correlation (with mean subtracted) has a minimum value of
-(1 - 1/a) e^(-2/(g-a)),
which is negative. By choosing both g and a large, one can get this minimum arbitrarily close to -1. With these forms, the cross-correlation has a delta-function component at t = 0, is large for small t, dips below zero for larger t, and asymptotically approaches zero from below for large t.
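A quick numerical sanity check of these formulas (not part of the original solution; g = 3, a = 2 is just one choice satisfying the stated conditions): S(t) should be nonnegative, and S and R should each integrate to 1, with R carrying an extra weight z in its delta function at t = 0.

    import numpy as np

    g, a = 3.0, 2.0
    y = (g / (a * (g - 1))) ** 2
    z = (a / g) ** 2

    t = np.linspace(0, 60, 600001)
    dt = t[1] - t[0]
    S = y * ((a - 1) ** 2 * np.exp(-t)
             + ((g - 1) ** 2 - (a - 1) ** 2) * np.exp(-g * t)
             - (g - 1) * (g - a) ** 2 * t * np.exp(-g * t))
    R_cont = z * (2 * (g - a) + (g - a) ** 2 * t) * np.exp(-a * t)   # continuous part of R

    print(S.min() >= 0)              # positivity of S
    print(S.sum() * dt)              # ~1
    print(z + R_cont.sum() * dt)     # delta weight + continuous part, ~1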
(3/15/07) An extensive analysis of these constructions, and of alternating Poisson processes in general, has been posted by
Don Johnson.
Download it.
Related reference
Contributed (8/12/2002) by Dario Ringach:
Griffiths et al., "Aspects of correlation in bivariate Poisson distributions and processes", Aust. J. Statist. 21, 238-255 (1979)