Understanding the underlying principles of biological perceptual systems is of vital importance not only to neuroscientists but, increasingly, to engineers and computer scientists who wish to develop artificial perceptual systems. In this original and groundbreaking work, the authors systematically examine the relationship between the powerful technique of Principal Component Analysis (PCA) and neural networks.

Principal Component Neural Networks focuses on issues pertaining to both neural network models (i.e., network structures and algorithms) and theoretical extensions of PCA, and it also provides basic review material in mathematics and neurobiology. The book presents neural models originating from both the Hebbian learning rule and least-squares learning rules such as back-propagation. Its ultimate objective is to provide a synergistic exploration of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks.

Especially valuable to researchers and advanced students in neural network theory and signal processing, this book offers application examples from a variety of areas, including high-resolution spectral estimation, system identification, image compression, and pattern recognition.
Principal Component Analysis.
PCA Neural Networks.
Channel Noise and Hidden Units.
Signal Enhancement Against Noise.
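As a flavor of the Hebbian-rule PCA networks the book analyzes, the following is a minimal NumPy sketch of a single-neuron Hebbian learning rule (Oja's rule) extracting the first principal component of some data; the data, learning rate, and epoch count are illustrative choices, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D data: large variance along a known rotated axis.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = (rng.normal(size=(2000, 2)) * np.array([2.0, 0.3])) @ R.T

# Single linear neuron y = w . x trained with Oja's Hebbian rule:
#   w <- w + eta * y * (x - y * w)
# The decay term -eta * y^2 * w keeps ||w|| near 1, so w converges
# toward the leading eigenvector of the data covariance matrix.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(5):            # a few passes over the data
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)

# Compare against the true leading eigenvector from batch PCA.
C = np.cov(X, rowvar=False)
_, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
v1 = eigvecs[:, -1]               # leading eigenvector
alignment = abs(w @ v1) / np.linalg.norm(w)
print(round(float(alignment), 3))
```

The learned weight vector aligns (up to sign) with the principal eigenvector obtained by batch eigendecomposition, which is the core connection between Hebbian neural models and PCA that the book develops.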
S. Y. Kung is Professor of Electrical Engineering at Princeton University and received his PhD from Stanford University. He was formerly a professor of electrical engineering at the University of Southern California.