Ongoing research

Learning invariant features with a novel autoencoder structure

We have developed a novel variant of the autoencoder (Autoencoder, Wikipedia) neural network that uses lateral links between the encoding and decoding paths to learn rich, invariant features of the data. In contrast to traditional autoencoders, part of the information flows through the lateral links directly to the decoder, relieving the higher levels of the pressure to represent all of the information and allowing them to concentrate on information that is not modeled by the lower layers. This is beneficial for two reasons. First, such a representation is better suited to supervised learning tasks, where irrelevant information is often discarded on the way up. Second, it makes more efficient use of the neurons in the network, allowing larger models and faster training. For more information, see the paper on arXiv. In our forthcoming NIPS paper, we show how the proposed Ladder network reaches state-of-the-art results on several benchmark tasks in both semi-supervised and supervised settings.
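To make the idea concrete, below is a minimal sketch of an autoencoder with lateral links, not the actual Ladder network from the paper: each decoder layer mixes its top-down signal with a lateral copy of the corresponding encoder activation through a learned gate. The layer sizes, the gating scheme, and the class name are illustrative assumptions.

```python
# Sketch only: an autoencoder where decoder layers receive both a top-down
# signal and a lateral (skip) copy of the matching encoder activation.
import torch
import torch.nn as nn

class LateralAutoencoder(nn.Module):
    def __init__(self, sizes=(784, 256, 64)):          # illustrative layer sizes
        super().__init__()
        self.enc = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1))
        self.dec = nn.ModuleList(
            nn.Linear(sizes[i + 1], sizes[i]) for i in reversed(range(len(sizes) - 1)))
        # one learned gate per decoder layer for mixing lateral and top-down paths
        self.gates = nn.ParameterList(
            nn.Parameter(torch.full((sizes[i],), 0.5)) for i in reversed(range(len(sizes) - 1)))

    def forward(self, x):
        laterals, h = [], x
        for layer in self.enc:
            laterals.append(h)                          # lateral copy sent straight to the decoder
            h = torch.relu(layer(h))
        for layer, gate, lateral in zip(self.dec, self.gates, reversed(laterals)):
            h = torch.relu(layer(h))
            h = gate * h + (1 - gate) * lateral         # mix top-down signal with lateral shortcut
        return h

x = torch.rand(32, 784)
model = LateralAutoencoder()
loss = nn.functional.mse_loss(model(x), x)              # reconstruction cost
```

Because part of the input is reconstructed directly from the lateral copies, the higher layers only need to model what the gates do not pass through, which is the intuition described above.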

Some of the features learned by the model from the CIFAR10 image set, grouped by similarity by the network.

Natural language processing character-by-character

Most current language processing systems operate at the word level, treating each word as a unique token (a "one-hot vector"). This approach leads to huge dimensionality when words can change their form (e.g. in colloquial language and in inflecting languages such as Finnish). This project uses deep neural networks to transform words, character by character, into low-dimensional hidden representations. The goal is to find representations that are robust to small changes in words. Pyry Takala is working on this project.
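One common way to realize this idea is sketched below: characters are embedded and read by a recurrent network whose final state serves as the word vector. The alphabet, dimensionalities, and the choice of a GRU are assumptions for illustration, not the project's actual architecture.

```python
# Illustrative sketch: map a word, character by character, to a low-dimensional vector.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars=128, char_dim=16, word_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)    # one vector per character
        self.rnn = nn.GRU(char_dim, word_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, word_length) tensor of character indices
        _, h = self.rnn(self.embed(char_ids))
        return h.squeeze(0)                             # (batch, word_dim) word vectors

def encode(word, enc):
    ids = torch.tensor([[ord(c) % 128 for c in word]])
    return enc(ids)

enc = CharWordEncoder()
# after training, words with similar surface forms should map to similar vectors
v1 = encode("talo", enc)                                # Finnish: "house"
v2 = encode("talossa", enc)                             # Finnish: "in the house"
print(torch.cosine_similarity(v1, v2))
```

Because the encoder sees individual characters, an inflected form such as "talossa" shares most of its input with the base form "talo", which is what makes the representation robust to small changes in words.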

[Word projections]
Word representations obtained with a neural network can also be visualized. Typically, related words project to nearby locations.
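As a hedged sketch of how such a plot can be produced, the snippet below projects word vectors to two dimensions with PCA; the vocabulary and the random vectors are placeholders standing in for learned representations.

```python
# Project word vectors to 2-D with PCA so nearby points correspond to similar representations.
import numpy as np

words = ["cat", "dog", "car", "bus"]           # placeholder vocabulary
vectors = np.random.randn(len(words), 64)      # stand-ins for learned word vectors

centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T                       # 2-D coordinates for plotting

for word, (x, y) in zip(words, xy):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```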

Earlier research topics

You can find more information on the earlier research results of our group under its older name Bayesian learning of latent variable models: 2010-2011, 2008-2009, 2006-2007, 2004-2005, 2002-2003, and 2000-2001.

Some of our research results have been described under the activities of the Independent component analysis (ICA) group of HUT, which studies ICA, blind source separation (BSS), and their extensions. For more detailed information, see the research reports of our ICA group covering the years 2010-2011, 2008-2009, 2006-2007, 2004-2005, and 2002-2003, as well as theoretical ICA research in 2000-2001, and applications of ICA in 2000-2001.