#### FAST CONVERGENT FACTORIAL LEARNING OF THE LOW-DIMENSIONAL INDEPENDENT MANIFOLDS IN OPTICAL IMAGING DATA

Penio S. Penev, http://venezia.rockefeller.edu/, http://camelot.mssm.edu/

PenevPS@IEEE.org, kaplane@rockvax.rockefeller.edu

In many functional-imaging scenarios, it is a challenge to separate the response to stimulation from the other, presumably independent, sources that contribute to the image formation. When the brain is optically imaged, the typical variabilities of some of these sources force the data to lie close to a low-dimensional, nonlinear manifold. When an initial probability model is derived by the Karhunen-Loève Transform (KLT) of the data, and some factors of this manifold happen to be accessibly embedded in suitably chosen KLT subspaces, vector quantization has been used to characterize this embedding as the locus of maximum likelihood of the data, and to derive an improved probability model, in which the factors---the dynamics on this locus and away from it---are estimated independently. Here we show that such a description can serve as the starting point for a convergent procedure that alternately refines the estimates of the embedding of, and the dynamics on, the manifold. Further, we show that even a very crude initial estimate, from a heavily mixed subspace, is sufficient for convergence in a small number of steps. This opens the possibility of hierarchical semi-blind separation of the independent sources in optical imaging data, even when their contributions are nonlinear.
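The alternating refinement described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual algorithm: the data are synthetic (a circular 1-D manifold embedded in a 10-D ambient space plus isotropic noise standing in for the off-manifold dynamics), the subspace dimension, codebook size, and iteration counts are arbitrary choices, and plain k-means plays the role of the vector quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "imaging" data: a 1-D nonlinear manifold (a circle) embedded in a
# 10-D ambient space, plus noise playing the role of the independent,
# off-manifold dynamics. All sizes here are illustrative assumptions.
t = rng.uniform(0.0, 2.0 * np.pi, size=2000)
basis = np.linalg.qr(rng.standard_normal((10, 2)))[0]      # random 2-D embedding
data = np.column_stack([np.cos(t), np.sin(t)]) @ basis.T
data += 0.05 * rng.standard_normal(data.shape)

def klt(x, k):
    """Leading-k principal subspace (KLT) of the centered data."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return vt[:k].T                                        # shape (dim, k)

def vq(codes, x, n_iter=20):
    """Plain k-means as a vector quantizer; refines the codebook in place."""
    for _ in range(n_iter):
        d = ((x[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(len(codes)):
            members = x[labels == j]
            if len(members):
                codes[j] = members.mean(axis=0)
    return codes, labels

# Alternating refinement: project onto the current subspace estimate, quantize
# to locate the manifold (the locus of maximum likelihood), then re-estimate
# the embedding subspace from the on-manifold reconstruction.
mu = data.mean(axis=0)
w = klt(data, 2)                                           # initial (possibly crude) embedding
for step in range(5):
    y = (data - mu) @ w                                    # dynamics on the manifold
    codes0 = y[rng.choice(len(y), 32, replace=False)].copy()
    codes, labels = vq(codes0, y)
    on_manifold = codes[labels] @ w.T + mu                 # VQ reconstruction
    w = klt(on_manifold, 2)                                # refined embedding estimate

residual = data - on_manifold                              # dynamics away from the locus
rms = np.sqrt((residual ** 2).mean())
print(f"off-manifold RMS after refinement: {rms:.3f}")
```

On this toy data the residual RMS settles near the injected noise level within a few outer iterations, which is the qualitative behavior the abstract claims: the embedding and the dynamics on it are refined alternately, and each step tightens the other's estimate.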