Neural Networks: A Comprehensive Foundation

Introduction; Learning processes; Single layer perceptrons; Multilayer perceptrons; Radial-basis function networks; Support vector machines; Committee machines; Principal components analysis; Self-organizing maps; Information-theoretic models; Stochastic machines and their approximates rooted in statistical mechanics; Neurodynamic programming; Temporal processing using feedforward networks; Neurodynamics; Dynamically driven recurrent networks; Epilogue; Bibliography; Index.
Contents
Learning Processes | 50
Single Layer Perceptrons | 117
Multilayer Perceptrons | 156
Copyright
(21 other sections not shown)
Common terms and phrases
activation function, algorithm, approximation, back-propagation, back-propagation algorithm, back-propagation learning, bias, Boltzmann machine, Chapter, classification, computation, condition, convergence, cost function, d₁, defined, denote, derivative, described in Eq., desired response, dimensionality, distribution, eigenvalue, entropy, equation, error signal, error surface, estimate, example, feature map, feedforward, FIGURE, follows, function f(x), Gaussian, gradient, Green's function, Hebbian, Hessian matrix, hidden layer, hidden neurons, induced local field, input patterns, input space, input vector, input-output mapping, iteration, kernel, Kullback-Leibler divergence, learning algorithm, learning machine, learning process, learning-rate parameter, linear, LMS algorithm, m₁, method, minimization, multilayer perceptron, neural network, neuron, nodes, nonlinear, operator, optimum, output layer, output neuron, performance, probability density function, radial-basis function, random variable, RBF network, regression, result, risk functional, Section, self-organizing, sigmoid, sigmoid function, signal-flow graph, statistical, stochastic, supervised learning, support vector machine, theorem, training data, training sample, training set, VC dimension, weight vector, x₁, zero
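Several of the terms above (error signal, desired response, learning-rate parameter, weight vector) come together in the LMS algorithm treated in the book's chapters on single-layer perceptrons. The sketch below is a minimal illustration of that update rule, not code from the book; the toy data, learning rate, and the function name lms_update are invented for this example.

```python
import numpy as np

def lms_update(w, x, d, eta=0.01):
    """One LMS step: adjust the weight vector w toward the desired response d.

    e = d - w.x is the error signal; the update w += eta * e * x is a
    stochastic-gradient step on the squared error, with learning-rate
    parameter eta.
    """
    e = d - np.dot(w, x)      # error signal
    return w + eta * e * x    # weight update

# Toy usage: learn the noisy linear mapping d = 2*x1 - x2.
rng = np.random.default_rng(0)
w = np.zeros(2)                                 # weight vector
for _ in range(1000):
    x = rng.normal(size=2)                      # input vector
    d = 2.0 * x[0] - x[1] + 0.01 * rng.normal() # desired response
    w = lms_update(w, x, d)
print(w)  # approaches [2, -1]
```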



