Abstract:
A Bayesian SOM (BSOM) [8] is proposed and applied to the unsupervised learning of Gaussian mixture distributions, and its performance is compared with that of the expectation-maximisation (EM) algorithm. The BSOM is found to yield results as good as the EM algorithm's, but with far fewer iterations, and, more importantly, it can be used as an on-line training method. The neighbourhood function and distance measures of the traditional SOM are replaced by the neurons' on-line estimated posterior probabilities, which can be interpreted as a Bayesian inference of each neuron's opportunity to share in the winning response and so adapt to the input pattern. These posteriors, starting from uniform priors, are gradually sharpened as more data samples become available, thereby improving the estimation of the model parameters. Each neuron then converges to one component of the mixture. Experimental comparisons with the EM algorithm are reported.
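The on-line, posterior-weighted adaptation described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's exact BSOM: it assumes two "neurons", each holding one 1-D Gaussian component (mixing weight, mean, variance), with posteriors started from uniform priors acting as each neuron's soft share of the winning response; the initial means, learning-rate schedule, and mixture used for the data stream are all assumptions chosen for the illustration.

```python
import math
import random

random.seed(0)

# Two "neurons", each modelling one 1-D Gaussian mixture component.
K = 2
weights = [1.0 / K] * K       # mixing weights: uniform priors to start
means = [-1.0, 1.0]           # assumed initial means, spread to break symmetry
variances = [1.0] * K

def online_step(x, t):
    """One on-line update: each neuron adapts in proportion to its posterior."""
    # Bayesian posterior probability of each neuron given the sample x
    lik = [math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
           for m, v in zip(means, variances)]
    post = [w * l for w, l in zip(weights, lik)]
    s = sum(post)
    post = [p / s for p in post]
    eta = 1.0 / (t + 10)      # decaying learning rate (assumed schedule)
    for k in range(K):
        # posterior-weighted stochastic updates of mean, variance, weight
        means[k] += eta * post[k] * (x - means[k])
        variances[k] += eta * post[k] * ((x - means[k]) ** 2 - variances[k])
        weights[k] += eta * (post[k] - weights[k])

# Stream samples from a two-component mixture: N(-2, 0.5^2) and N(2, 0.5^2)
for t in range(5000):
    mu = -2.0 if random.random() < 0.5 else 2.0
    online_step(random.gauss(mu, 0.5), t)

# Each neuron has drifted towards one component of the mixture
print([round(m, 1) for m in sorted(means)])
```

As the stream progresses, the posteriors sharpen and each neuron's estimates settle near one true component mean (approximately -2 and 2), mirroring the convergence behaviour the abstract describes.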