Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)
Jonathan Shapiro, Adam Prügel-Bennett
Neurons learning under an unsupervised Hebbian learning rule can perform a nonlinear generalization of principal component analysis. This relationship between nonlinear PCA and nonlinear neurons is reviewed. The stable fixed points of the neuron learning dynamics correspond to the maxima of the statistic optimized under nonlinear PCA. However, in order to predict what the neuron learns, knowledge of the basins of attraction of the neuron dynamics is required. Here the correspondence between nonlinear PCA and neural networks breaks down. This is shown for a simple model. Methods of statistical mechanics can be used to find the optima of the objective function of nonlinear PCA. This determines what the neurons can learn. In order to find how the solutions are partitioned among the neurons, however, one must solve the dynamics.
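As a concrete illustration of the kind of learning dynamics the abstract refers to, the sketch below implements one common nonlinear generalization of Oja's Hebbian rule, in which the neuron output passes through a nonlinearity g. This is only an assumed, representative rule for illustration, not necessarily the exact rule analyzed in the paper; the function names, parameters, and data are hypothetical.

```python
import numpy as np

def nonlinear_oja_step(w, x, eta=0.01, g=np.tanh):
    """One weight update of a nonlinear Oja-type Hebbian rule (illustrative).

    y = g(w . x) is the nonlinear neuron output; the Hebbian term y * x
    strengthens weights along input directions that excite the neuron,
    while the decay term y**2 * w keeps the weight norm bounded.
    """
    y = g(np.dot(w, x))
    return w + eta * (y * x - (y ** 2) * w)

def train(patterns, eta=0.01, epochs=50, seed=0):
    """Iterate the rule over a set of input patterns (rows of `patterns`)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(patterns.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in rng.permutation(patterns):
            w = nonlinear_oja_step(w, x, eta)
    return w

# Example: inputs with one dominant variance direction. Which stable fixed
# point the weight vector reaches depends on the basin of attraction its
# initial condition falls into, which is the point the abstract emphasizes.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) * np.array([3.0, 1.0, 1.0, 1.0, 1.0])
print(train(X))
```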