Catalogue


Neural smithing : supervised learning in feedforward artificial neural networks /
Russell D. Reed and Robert J. Marks II.
imprint
Cambridge, Mass. : The MIT Press, c1999.
description
viii, 346 p. : ill. ; 24 cm.
ISBN
0262181908 (hc : alk. paper)
format(s)
Book
general note
"A Bradford book."
catalogue key
2667316
Includes bibliographical references (p. [319]-338) and index.
A Look Inside
About the Author
Author Affiliation
Russell D. Reed is an engineer/programmer for a private engineering services firm. Robert J. Marks II is Professor of Electrical Engineering at the University of Washington.
Summaries
Main Description
Artificial neural networks are nonlinear mapping systems whose structure is loosely based on principles observed in the nervous systems of humans and animals. The basic idea is that massive systems of simple units linked together in appropriate ways can generate many complex and interesting behaviors. This book focuses on the subset of feedforward artificial neural networks called multilayer perceptrons (MLPs). These are the most widely used neural networks, with applications as diverse as finance (forecasting), manufacturing (process control), and science (speech and image recognition). This book presents an extensive and practical overview of almost every aspect of MLP methodology, progressing from an initial discussion of what MLPs are and how they might be used to an in-depth examination of technical factors affecting performance. The book can be used as a tool kit by readers interested in applying networks to specific problems, yet it also presents theory and references outlining the last ten years of MLP research.
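The multilayer perceptron described above can be illustrated with a minimal forward pass: layers of simple units, each applying a weighted sum followed by a nonlinearity. This sketch is not from the book; the layer sizes, weights, and sigmoid activation are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # Common smooth, bounded activation used in classic MLPs
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    """Propagate input x through each layer: a = sigmoid(W @ a + b)."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Example network (arbitrary sizes): 2 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [np.zeros(3), np.zeros(1)]

y = mlp_forward(np.array([0.5, -0.2]), weights, biases)
print(y.shape)  # one output unit
```

Training such a network by adjusting the weights to fit input-output examples (e.g. via back-propagation) is the supervised-learning problem the book examines in depth.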
Table of Contents
Preface, p. xi
Introduction, p. 1
Supervised Learning, p. 7
Objective Functions, p. 9
Alternatives and Extensions, p. 11
Single-Layer Networks, p. 15
Hyperplane Geometry, p. 15
Linear Separability, p. 18
Hyperplane Capacity, p. 20
Learning Rules for Single-Layer Networks, p. 23
Adalines and the Widrow-Hoff Learning Rule, p. 29
MLP Representational Capabilities, p. 31
Representational Capability, p. 31
Universal Approximation Capabilities, p. 35
Size versus Depth, p. 38
Capacity versus Size, p. 41
Back-Propagation, p. 49
Preliminaries, p. 50
Back-Propagation: The Derivative Calculation, p. 53
Back-Propagation: The Weight Update Algorithm, p. 57
Common Modifications, p. 62
Pseudocode Examples, p. 63
Remarks, p. 66
Training Time, p. 67
Learning Rate and Momentum, p. 71
Learning Rate, p. 71
Momentum, p. 85
Remarks, p. 95
Weight-Initialization Techniques, p. 97
Random Initialization, p. 97
Nonrandom Initialization, p. 105
The Error Surface, p. 113
Characteristic Features, p. 113
The Gradient is the Sum of Single-Pattern Gradients, p. 117
Weight-Space Symmetries, p. 118
Remarks, p. 120
Local Minima, p. 121
Properties of the Hessian Matrix, p. 127
Gain Scaling, p. 132
Faster Variations of Back-Propagation, p. 135
Adaptive Learning Rate Methods, p. 135
Vogl's Method (Bold Driver), p. 136
Delta-Bar-Delta, p. 137
Silva and Almeida, p. 140
SuperSAB, p. 142
Rprop, p. 142
Quickprop, p. 145
Search Then Converge, p. 147
Fuzzy Control of Back-Propagation, p. 148
Other Heuristics, p. 150
Remarks, p. 151
Other Notes, p. 153
Classical Optimization Techniques, p. 155
The Objective Function, p. 155
Factors Affecting the Choice of a Method, p. 156
Line Search, p. 158
Evaluation-Only Methods, p. 159
First-Order Gradient Methods, p. 163
Second-Order Gradient Methods, p. 169
Stochastic Evaluation-Only Methods, p. 175
Discussion, p. 179
Genetic Algorithms and Neural Networks, p. 185
The Basic Algorithm, p. 186
Example, p. 189
Application to Neural Network Design, p. 191
Remarks, p. 194
Constructive Methods, p. 197
Dynamic Node Creation, p. 199
Cascade-Correlation, p. 201
The Upstart Algorithm, p. 204
The Tiling Algorithm, p. 206
Marchand's Algorithm, p. 209
Meiosis Networks, p. 212
Principal Components Node Splitting, p. 213
Construction from a Voronoi Diagram, p. 215
Other Algorithms, p. 217
Pruning Algorithms, p. 219
Pruning Algorithms, p. 220
Sensitivity Calculation Methods, p. 221
Penalty-Term Methods, p. 226
Other Methods, p. 232
Discussion, p. 235
Factors Influencing Generalization, p. 239
Definitions, p. 239
The Need for Additional Information, p. 240
Network Complexity versus Target Complexity, p. 241
The Training Data, p. 242
The Learning Algorithm, p. 249
Other Factors, p. 253
Summary, p. 255
Generalization Prediction and Assessment, p. 257
Cross-Validation, p. 257
The Bayesian Approach, p. 258
Akaike's Final Prediction Error, p. 260
PAC Learning and the VC Dimension, p. 261
Heuristics for Improving Generalization, p. 265
Early Stopping, p. 265
Regularization, p. 266
Pruning Methods, p. 268
Constructive Methods, p. 268
Weight Decay, p. 269
Information Minimization, p. 271
Replicated Networks, p. 272
Training with Noisy Data, p. 273
Use of Domain-Dependent Prior Information, p. 274
Hint Functions, p. 275
Knowledge-Based Neural Nets, p. 275
Physical Models to Generate Additional Data, p. 276
Effects of Training with Noisy Inputs, p. 277
Convolution Property of Training with Jitter, p. 277
Error Regularization and Training with Jitter, p. 281
Training with Jitter and Sigmoid Scaling, p. 283
Extension to General Layered Neural Networks, p. 288
Remarks, p. 289
Further Examples, p. 290
Linear Regression, p. 293
Newton's Method, p. 294
Gradient Descent, p. 295
The LMS Algorithm, p. 298
Principal Components Analysis, p. 299
Autoencoder Networks and Principal Components, p. 303
Discriminant Analysis Projections, p. 306
Jitter Calculations, p. 311
Jitter: Small-Perturbation Approximation, p. 311
Jitter: CDF-PDF Convolution in n Dimensions, p. 311
Jitter: CDF-PDF Convolution in One Dimension, p. 314
Sigmoid-like Nonlinear Functions, p. 315
References, p. 319
Index, p. 339
Table of Contents provided by Syndetics. All Rights Reserved.

