The Deep Learning and Bayesian Modelling research group is part of Aalto University's School of Science, Department of Computer Science, in the research area Machine Learning, Data Mining & Probabilistic Modeling.
Deep learning is a machine learning approach inspired by the brain. Consider a typical machine learning task such as image classification. The raw input data (pixels) is typically first transformed into an abstract feature space and only then classified. The features might describe, for example, whether the image contains many vertical stripes, or whether its top part is blueish. This transformation is important because the classification task is highly nonlinear: whether darkening one pixel makes the image look more like a lion depends entirely on the context of the other pixels.
Deep learning differs from traditional machine learning methods in two ways: (1) instead of learning a classifier on top of handcrafted features, the features themselves are also learned; (2) the features are computed in several steps: inputs are mapped to a first layer of features, the first layer to a second layer, and so on. This is what makes deep learning deep, and how it differs from the shallower neural networks of the 1980s.
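The layered mapping described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the group's actual code; the layer sizes (64 pixels, 32 and 16 features, 3 classes) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 64 raw pixels -> 32 first-layer features
# -> 16 second-layer features -> 3 class scores.
W1, b1 = rng.standard_normal((64, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((32, 16)) * 0.1, np.zeros(16)
W3, b3 = rng.standard_normal((16, 3)) * 0.1, np.zeros(3)

def forward(x):
    """Map raw pixels through two feature layers to class scores."""
    h1 = np.tanh(x @ W1 + b1)   # first layer of learned features
    h2 = np.tanh(h1 @ W2 + b2)  # second layer, built on the first
    return h2 @ W3 + b3         # linear classifier on top of the features

x = rng.standard_normal((1, 64))  # one fake "image"
scores = forward(x)
print(scores.shape)
```

In a real deep network the same pattern simply repeats with more layers; training consists of adjusting all the `W` and `b` parameters at once.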
The mappings between layers have forms flexible enough to represent practically any function, and their millions of parameters determine what the network actually does. These parameters are tuned by training the network to perform well on a given data set. Often there is not enough labelled data for the task at hand, but this can be remedied: one can train an auxiliary network to reconstruct corrupted data. Its inputs are corrupted copies of the data, and its desired outputs are the clean copies. It turns out that features useful for making reconstructions are also useful for other tasks, such as classification. An example of image reconstruction can be seen in the figure, where the left half of a face is reconstructed from the right half using a model learned from full images of other people.
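The reconstruct-the-clean-data idea can be demonstrated with a tiny denoising network trained by plain gradient descent. This is a toy sketch under simplifying assumptions (synthetic 10-dimensional data, one hidden layer, mean squared error), not the model used for the face reconstruction in the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" data: 200 points lying near a 2-D subspace of 10-D space.
basis = rng.standard_normal((2, 10))
X = rng.standard_normal((200, 2)) @ basis

# Corrupted copies are the inputs; the clean copies are the targets.
Xc = X + 0.3 * rng.standard_normal(X.shape)

# One hidden layer of features.
d, h = X.shape[1], 8
W1, b1 = rng.standard_normal((d, h)) * 0.1, np.zeros(h)
W2, b2 = rng.standard_normal((h, d)) * 0.1, np.zeros(d)

losses, lr = [], 0.05
for _ in range(300):
    H = np.tanh(Xc @ W1 + b1)        # features computed from corrupted input
    Xhat = H @ W2 + b2               # attempted reconstruction of clean data
    losses.append(0.5 * np.sum((Xhat - X) ** 2) / len(X))
    D = (Xhat - X) / len(X)          # gradient of mean squared error
    dH = D @ W2.T
    dpre = dH * (1 - H ** 2)         # derivative through tanh
    W2 -= lr * H.T @ D;   b2 -= lr * D.sum(0)
    W1 -= lr * Xc.T @ dpre; b1 -= lr * dpre.sum(0)

print(losses[0], losses[-1])  # reconstruction error before and after training
```

The hidden activations `H` are the learned features: to reconstruct well, they must capture the structure of the data rather than the noise, which is why they transfer to tasks such as classification.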
Our research group has studied representation learning since 1999 and deep representations since 2001. The term deep learning was coined in 2006. Since around 2010, deep learning has provided breakthroughs in areas such as computer vision, speech recognition, and machine translation. It was named the #1 breakthrough of 2013 by MIT Technology Review. Companies such as Google, Facebook, Microsoft, and Baidu have started large research efforts on the topic. For more information, check out the book Deep Learning or the Deep Learning portal.
On our research page, you can find descriptions of our current and
former research topics.
The group is collaborating with ZenRobotics, Nokia Labs, NVIDIA, and VTT.
Here you can find free software packages prepared by our research group.