An extension of the m-sequence technique for the analysis of
multi-input nonlinear systems
Ethan A. Benardete and Jonathan D. Victor
In Advanced Methods of Physiological Systems Modeling, Volume III, ed.
V. Z. Marmarelis. New York, Plenum. pp. 87-110 (1994)
Abstract
White-noise analysis and related methods of
nonlinear systems identification
describe a physical system's response to its input in terms of
"kernels" of progressively higher orders.
A popular analytic scheme in the laboratory uses
a class of pseudorandom binary sequences, m-sequences,
as a test signal. The m-sequence method has several
advantages for investigating linear and nonlinear systems:
ease of implementation,
rapid calculation of system kernels,
and a solid theoretical framework. One difficulty with
this method for nonlinear analysis comes from the algebraic
structure of m-sequences: linear and nonlinear terms
can be confounded, especially in the analysis of systems with many inputs.
We have developed a modification of the m-sequence method which
allows control of these anomalies.
This method is based on input signals consisting of
a superposition of m-sequences whose lengths are relatively prime.
The fast computational methods which facilitate kernel
calculation for a single m-sequence input are readily
extended to this new setting. We describe the theoretical
foundation of this method and present an application
to the study of ganglion cells of the macaque retina.
The algorithm in a (large) nutshell
We consider estimation of the first- and second-order kernels for a system
with two inputs.
The system is presented with
signal s_{[1]}(t) at input 1, and signal s_{[2]}(t)
at input 2. s_{[1]}(t) and s_{[2]}(t) are each
derived from two m-sequences,
m_{[1]}(t) and m_{[2]}(t), whose
repeat periods M_{1} and M_{2} are relatively prime:
s_{[1]}(t) = m_{[1]}(t) + m_{[2]}(t)
and
s_{[2]}(t) = m_{[1]}(t+T_{1}) + m_{[2]}(t+T_{2}).
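For concreteness, this signal construction can be sketched in Python. The periods (7 and 31), the LFSR taps, and the offsets T1 and T2 below are illustrative choices, not values from the paper:

```python
import numpy as np

def msequence(taps, degree):
    """One period of a +/-1 m-sequence from a Fibonacci LFSR.

    `taps` lists the (1-indexed) register stages XORed to form the
    feedback bit; for a primitive polynomial the period is 2**degree - 1.
    """
    state = [1] * degree
    out = []
    for _ in range(2 ** degree - 1):
        out.append(1 if state[-1] else -1)   # map bits {1, 0} -> {+1, -1}
        fb = 0
        for k in taps:
            fb ^= state[k - 1]
        state = [fb] + state[:-1]            # shift the register
    return np.array(out)

# Periods M1 = 7 and M2 = 31 are relatively prime (taps correspond to
# the primitive polynomials x^3 + x + 1 and x^5 + x^2 + 1).
m1 = msequence([3, 1], 3)
m2 = msequence([5, 2], 5)
M1, M2 = len(m1), len(m2)

# Illustrative tap offsets (in practice, larger than the kernel memory).
T1, T2 = 3, 11
t = np.arange(M1 * M2)                       # combined repeat period 217

# The two input signals, as in the text:
# s1(t) = m1(t) + m2(t),  s2(t) = m1(t + T1) + m2(t + T2)
s1 = m1[t % M1] + m2[t % M2]
s2 = m1[(t + T1) % M1] + m2[(t + T2) % M2]
```

Each signal is ternary-valued (sums of two +/-1 sequences) and repeats only after M1*M2 = 217 steps, the least common multiple of the two periods.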
First-order kernels
To estimate the firstorder kernels,
cross-correlate the response
R(t) against the m-sequence m_{[1]}(t):
C^{1}(t) = < R(u)m_{[1]}(u-t) >.

Beginning at t = 0, the cross-correlation C^{1}(t)
contains an estimate
ĥ^{1}_{[1]}(t)
of the first-order kernel h_{[1]}(t)
with respect to the first input.

Beginning at t = T_{1}, the cross-correlation C^{1}(t)
contains an estimate
ĥ^{1}_{[2]}(t)
of the first-order kernel h_{[2]}(t)
with respect to the second input.
An independent estimate of each of these kernels,
ĥ^{2}_{[1]} and ĥ^{2}_{[2]},
can be derived
from the cross-correlation of the response R(t) against
the m-sequence m_{[2]}(t):
C^{2}(t) = < R(u)m_{[2]}(u-t) >.
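A toy simulation can illustrate this first-order recovery. Everything below (periods 127 and 31, tap offsets, kernel values) is illustrative rather than taken from the paper, and the tap offsets are applied here as delays, m(t - T_j); with that sign convention the segment containing the h_{[2]} estimate begins at lag T_1, as described above:

```python
import numpy as np

def msequence(taps, degree):
    """One period of a +/-1 m-sequence from a Fibonacci LFSR."""
    state = [1] * degree
    out = []
    for _ in range(2 ** degree - 1):
        out.append(1 if state[-1] else -1)
        fb = 0
        for k in taps:
            fb ^= state[k - 1]
        state = [fb] + state[:-1]
    return np.array(out)

# Relatively prime periods M1 = 127, M2 = 31 (primitive polynomials
# x^7 + x + 1 and x^5 + x^2 + 1); illustrative tap offsets.
m1, m2 = msequence([7, 1], 7), msequence([5, 2], 5)
M1, M2 = len(m1), len(m2)
N = M1 * M2
T1, T2 = 10, 11
u = np.arange(N)

s1 = m1[u % M1] + m2[u % M2]
s2 = m1[(u - T1) % M1] + m2[(u - T2) % M2]   # delayed taps

# Toy linear two-input system:
# R(t) = sum_tau h1(tau) s1(t - tau) + h2(tau) s2(t - tau).
h1 = np.array([2.0, 1.0, 0.5])
h2 = np.array([-1.0, 0.5, 0.25])
R = np.zeros(N)
for tau in range(3):
    R += h1[tau] * np.roll(s1, tau) + h2[tau] * np.roll(s2, tau)

# C1(t) = < R(u) m1(u - t) >; it is periodic in t with period M1,
# so both first-order kernel estimates live within one period.
m1_full = m1[u % M1]
C1 = np.array([np.dot(R, np.roll(m1_full, lag)) / N for lag in range(M1)])

h1_est = C1[0:3]          # h1 estimate begins at lag 0
h2_est = C1[T1:T1 + 3]    # h2 estimate begins at lag T1
```

The recovered segments match the true kernels up to small biases of order 1/M1 and 1/M2, the usual price of the "nearly" shift-orthogonal autocorrelation of finite m-sequences.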
Second-order self- and cross-kernels
To estimate the second-order kernels,
cross-correlate the response
R(t) against the product of the underlying m-sequences
m_{[1]}(t) and m_{[2]}(t):
C^{1,2}(t_{[1]},t_{[2]}) = < R(u)m_{[1]}(u-t_{[1]})m_{[2]}(u-t_{[2]}) >.
As diagrammed below,
the cross-correlation C^{1,2}
contains estimates of the second-order kernels with respect to each
input, as well as the second-order cross-kernel.

Beginning at (t_{[1]},t_{[2]}) = (0,0),
C^{1,2}(t_{[1]},t_{[2]})
contains an estimate
ĥ^{1,2}_{[1,1]}(t_{[1]},t_{[2]})
of the second-order self-kernel
h_{[1,1]}(t_{[1]},t_{[2]})
with respect to the first input.

Beginning at (t_{[1]},t_{[2]}) = (T_{1},T_{2}),
C^{1,2}(t_{[1]},t_{[2]})
contains an estimate
ĥ^{1,2}_{[2,2]}(t_{[1]},t_{[2]})
of the second-order self-kernel
h_{[2,2]}(t_{[1]},t_{[2]})
with respect to the second input.

Beginning at (t_{[1]},t_{[2]}) = (0,T_{2}),
C^{1,2}(t_{[1]},t_{[2]})
contains an estimate
ĥ^{1,2}_{[1,2]}(t_{[1]},t_{[2]})
of the second-order cross-kernel
h_{[1,2]}(t_{[1]},t_{[2]}).

Beginning at (t_{[1]},t_{[2]}) = (T_{1},0),
C^{1,2}(t_{[1]},t_{[2]})
contains an independent estimate
ĥ^{1,2}_{[2,1]}(t_{[1]},t_{[2]})
of the second-order cross-kernel
h_{[2,1]}(t_{[1]},t_{[2]}).
Note that
h_{[2,1]}(t_{[1]},t_{[2]}) =
h_{[1,2]}(t_{[2]},t_{[1]}).
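The placement of the four second-order blocks can also be checked numerically. The sketch below uses illustrative periods, delayed taps m(t - T_j) (the convention under which the blocks begin at the offsets listed above), and a toy second-order system holding one isolated value in each kernel; each value reappears in C^{1,2} at the predicted corner, offset by its pair of delays:

```python
import numpy as np

def msequence(taps, degree):
    """One period of a +/-1 m-sequence from a Fibonacci LFSR."""
    state = [1] * degree
    out = []
    for _ in range(2 ** degree - 1):
        out.append(1 if state[-1] else -1)
        fb = 0
        for k in taps:
            fb ^= state[k - 1]
        state = [fb] + state[:-1]
    return np.array(out)

m1, m2 = msequence([7, 1], 7), msequence([5, 2], 5)
M1, M2 = len(m1), len(m2)            # 127 and 31, relatively prime
N = M1 * M2
T1, T2 = 10, 11                      # illustrative tap offsets
u = np.arange(N)
s1 = m1[u % M1] + m2[u % M2]
s2 = m1[(u - T1) % M1] + m2[(u - T2) % M2]   # delayed taps

# Toy second-order system: one isolated kernel value at delays (1, 2)
# in each of h11 (self), h22 (self), and h12 (cross).
R = (1.0 * np.roll(s1, 1) * np.roll(s1, 2)
     + 0.8 * np.roll(s2, 1) * np.roll(s2, 2)
     - 0.6 * np.roll(s1, 1) * np.roll(s2, 2))

# C12(t1, t2) = < R(u) m1(u - t1) m2(u - t2) >, periodic with
# periods M1 and M2 along its two axes.
m1f, m2f = m1[u % M1], m2[u % M2]
C12 = np.empty((M1, M2))
for t1 in range(M1):
    x = R * np.roll(m1f, t1)
    for t2 in range(M2):
        C12[t1, t2] = np.dot(x, np.roll(m2f, t2)) / N

# Each block sits at its predicted corner, offset by the delays (1, 2):
h11_est = C12[0 + 1, 0 + 2]          # self-kernel 1 block at (0, 0)
h22_est = C12[T1 + 1, T2 + 2]        # self-kernel 2 block at (T1, T2)
h12_est = C12[0 + 1, T2 + 2]         # cross-kernel block at (0, T2)
h21_est = C12[T1 + 2, 0 + 1]         # transposed cross-kernel at (T1, 0)
```

The residual clutter in C^{1,2} (small stripes of order 1/M1 and 1/M2) comes from the shift-and-multiply products of each m-sequence with itself; these are the contaminants that inverse-repeat variants help suppress.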
Diagrammatic view of the algorithm for second-order kernels
The diagram illustrates estimation of second-order
kernels for a system with two inputs.
Responses at each time t
are placed into the two-dimensional array on the left,
in a position corresponding to the position of the signal with respect
to its component m-sequences. That is, the response R(t) is
positioned at (t_{1},t_{2}), where
t_{1} = t (mod M_{1})
and
t_{2} = t (mod M_{2}).
The cross-correlation
C^{1,2}(t_{[1]},t_{[2]}) = < R(u)m_{[1]}(u-t_{[1]})m_{[2]}(u-t_{[2]}) >
can be calculated by applying the
fast m-transform
sequentially along each axis.
The transformed space (right panel) contains estimates of the
second-order self- and cross-kernels.
The key to the algorithm is that because the
m-sequence lengths M_{1} and M_{2}
are relatively prime, all pairs of values of
t_{1} and t_{2} occur exactly once,
as t ranges from 0 to M_{1}M_{2} - 1.
This allows encoding of a function of two times
(i.e., the second-order kernels) within a function of one time
(the cross-correlation). It also allows us to "factor"
the calculation of the cross-correlation into two stages.
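The fold-and-factor idea can be sketched with plain matrix products (illustrative periods 7 and 31, arbitrary response record; the actual fast m-transform realizes each one-axis correlation as a permuted fast Walsh-Hadamard transform, which this sketch does not implement):

```python
import numpy as np

def msequence(taps, degree):
    """One period of a +/-1 m-sequence from a Fibonacci LFSR."""
    state = [1] * degree
    out = []
    for _ in range(2 ** degree - 1):
        out.append(1 if state[-1] else -1)
        fb = 0
        for k in taps:
            fb ^= state[k - 1]
        state = [fb] + state[:-1]
    return np.array(out)

m1, m2 = msequence([3, 1], 3), msequence([5, 2], 5)
M1, M2 = len(m1), len(m2)            # 7 and 31
N = M1 * M2
u = np.arange(N)
R = np.random.default_rng(0).standard_normal(N)   # any response record

# Fold: place R(t) at (t mod M1, t mod M2). Because gcd(M1, M2) = 1,
# the Chinese remainder theorem guarantees each cell is filled exactly once.
A = np.zeros((M1, M2))
A[u % M1, u % M2] = R

# Two stages: correlate along each axis. W1[t1, a] = m1[(a - t1) mod M1],
# so (W1 @ A @ W2.T)[t1, t2] = sum_{a,b} m1(a - t1) A[a, b] m2(b - t2).
W1 = m1[(np.arange(M1)[None, :] - np.arange(M1)[:, None]) % M1]
W2 = m2[(np.arange(M2)[None, :] - np.arange(M2)[:, None]) % M2]
C12_folded = W1 @ A @ W2.T / N

# The direct cross-correlation C12(t1, t2) = < R(u) m1(u-t1) m2(u-t2) >
# agrees with the folded, two-stage version.
m1f, m2f = m1[u % M1], m2[u % M2]
C12_direct = np.array([[np.dot(R, np.roll(m1f, t1) * np.roll(m2f, t2)) / N
                        for t2 in range(M2)]
                       for t1 in range(M1)])
```

Folding costs one pass over the record; each correlation stage then operates on an M1-by-M2 array rather than on the full-length record.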
Technical notes

Variants of the
inverse-repeat method
should be used to help remove higher-order "contaminants"
from the kernel estimates.

This approach is not limited to only two inputs.
Each input is provided with a signal s_{[j]}(t),
with distinct taps (e.g., spaced by T_{1} and T_{2})
into each of the underlying msequences. The practical limit
to the number of inputs is determined by the ratio of the shortest
m-sequence length to the minimum tap separation (which
is the anticipated time span of the kernels).

In principle, this approach is not limited to second-order
kernels. For estimation of kth-order kernels, the analogous procedure
relies on k m-sequences whose lengths are pairwise relatively prime.
Each input is a sum of these k sequences, and has a repeat length
equal to the product of the repeat lengths of the underlying m-sequences.
For more than two inputs, additional taps in each sequence
(beyond the two taps 0 and T_{j} for each sequence m_{j})
should be used.

Other nearly shift-orthogonal sequences besides
m-sequences can be used, to avoid restrictions due
to the limited set of repeat periods (7, 15, 31, 63, etc.)
provided by m-sequences. Such sequences can be constructed
for any prime length of the form p = 4n-1, from the sequences of
quadratic residues (mod p). However, the
fast m-transform
cannot be used to calculate cross-correlations with respect to
these sequences.
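A sketch of this construction, for the illustrative prime length p = 19 = 4*5 - 1: the sequence assigns +1 to quadratic residues and -1 to non-residues, and its periodic autocorrelation is two-valued, just as for an m-sequence:

```python
import numpy as np

def qr_sequence(p):
    """+/-1 quadratic-residue (Legendre) sequence of prime length p = 4n - 1.

    Entry t is +1 if t is a nonzero quadratic residue mod p and -1
    otherwise; the value at t = 0 is set to +1 (for p = 4n - 1 this
    choice does not affect the off-peak autocorrelation).
    """
    residues = {(x * x) % p for x in range(1, p)}
    return np.array([1] + [1 if t in residues else -1 for t in range(1, p)])

# 19 = 4*5 - 1 is prime; such lengths fill gaps between the m-sequence
# periods 7, 15, 31, 63, ...
q = qr_sequence(19)

# Near shift-orthogonality: the periodic autocorrelation is p at lag 0
# and exactly -1 at every other lag.
acf = np.array([np.dot(q, np.roll(q, k)) for k in range(19)])
```
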
Background reading on m-sequences
Victor, J.D. (1992)
Nonlinear systems analysis in vision: overview of kernel methods.
In Nonlinear Vision: Determination of Neural Receptive Fields,
Function, and Networks, ed. R. Pinter and B. Nabet.
Cleveland, CRC Press. pp. 1-37.
The m-sequence method and the inverse-repeat idea
Sutter, E.E. (1987)
A practical nonstochastic approach to nonlinear timedomain analysis.
In Advanced Methods in Physiological System Modelling. Vol.1.
ed. V. Z. Marmarelis.
Los Angeles, Univ. of Southern California. pp. 303-315.
The fast m-transform
Sutter, E.E. (1991)
The fast m-transform: a fast computation of cross-correlation
with binary m-sequences.
SIAM J. Comput. 20, 686-694.