Back propagation is a systematic method for training multilayer artificial neural networks. Its mathematical foundation is strong, even if not always of direct practical help. Despite its limitations, back propagation has dramatically expanded the range of problems to which artificial neural networks can be applied, and it has generated many successful demonstrations of its power.
The neuron is used as the fundamental building block for back propagation networks. A set of inputs is applied, either from the outside or from a previous layer. Each input is multiplied by a weight, and the products are summed. This summation of products is termed NET and must be calculated for each neuron in the network. After NET is calculated, an activation function F is applied to modify it, thereby producing the signal OUT.
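As a minimal sketch, the NET and OUT computation for a single neuron might look like the following. The sigmoid is assumed here as the activation function F; the text does not fix a particular F at this point.

```python
import math

def neuron_output(inputs, weights):
    # NET: the weighted sum of the inputs
    net = sum(x * w for x, w in zip(inputs, weights))
    # OUT: activation function F applied to NET (sigmoid assumed)
    return 1.0 / (1.0 + math.exp(-net))
```

In a multilayer network, the OUT values of one layer become the inputs of the next.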
- Select the next training pair from the training set: apply the input vector to the network input.
- Calculate the output of the network.
- Calculate the error between the network output and the desired output (the target vector from the training pair).
- Adjust the weights of the network in a way that minimizes the error.
- Repeat steps 1 through 4 for each vector in the training set until the error for the entire set is acceptably low.
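The steps above can be sketched as a minimal back propagation loop for a network with one hidden layer and a single output. The layer size, learning rate, sigmoid activation, and bias handling below are illustrative assumptions, not prescriptions from the text.

```python
import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, w_hidden, w_out):
    # Step 2: compute NET and OUT for each layer; a constant 1.0 input
    # is appended to each layer to act as a bias term.
    xb = x + [1.0]
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, xb))) for ws in w_hidden]
    hb = hidden + [1.0]
    out = sigmoid(sum(w * h for w, h in zip(w_out, hb)))
    return hidden, out

def train(samples, n_hidden=2, lr=0.5, epochs=2000, seed=0):
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w_hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                for _ in range(n_hidden)]
    w_out = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, target in samples:              # step 1: next training pair
            hidden, out = forward(x, w_hidden, w_out)
            err = target - out                 # step 3: error vs. target
            delta_o = err * out * (1.0 - out)
            # step 4: propagate deltas backward and adjust the weights
            for j, h in enumerate(hidden):
                delta_h = delta_o * w_out[j] * h * (1.0 - h)
                for i, xi in enumerate(x + [1.0]):
                    w_hidden[j][i] += lr * delta_h * xi
            for j, h in enumerate(hidden + [1.0]):
                w_out[j] += lr * delta_o * h
    return w_hidden, w_out
```

Step 5 is the outer `epochs` loop: the four inner steps repeat over the whole training set until the error is acceptably low.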
Counter propagation is useful in certain less obvious applications. One of the more interesting examples is data compression. A counter propagation network can be used to compress data prior to transmission, thereby reducing the number of bits that must be sent. Suppose that an image is to be transmitted. It can be divided into sub-images, and each sub-image is further divided into pixels, so that the pixels of a sub-image form a vector. The method of vector quantization finds the shorter bit strings that best represent these sub-images. A counter propagation network can be used to perform vector quantization.
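The quantization step itself reduces to choosing, for each sub-image, the nearest entry in a codebook of representative vectors; the index of that entry is the short bit string that gets transmitted. A minimal sketch, assuming the codebook has already been found (in a counter propagation network, the Kohonen layer's weight vectors play this role):

```python
def quantize(sub_image, codebook):
    # Return the index of the codebook vector closest to the sub-image
    # (squared Euclidean distance); the index is transmitted instead
    # of the raw pixels.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], sub_image)))
```

The receiver, holding the same codebook, reconstructs the sub-image by looking up the transmitted index.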
Back propagation has been applied to a wide variety of research applications:
- It was recently announced in Japan that back propagation has been applied to a new optical-character-recognition system, improving accuracy to over 99%.
- A system that converts printed English text into highly intelligible speech (NETtalk) was made possible by back propagation.
- It has been used for machine recognition of handwritten English words, achieving accuracies of 99.7% when combined with a dictionary filter.
- It was used in a successful image-compression application in which images were represented with one bit per pixel, an eightfold improvement over the input data.
Statistical methods are useful both for training artificial neural networks and for producing the output from a previously trained network. Statistical training methods offer important advantages by avoiding local minima in the training process. An artificial neural network is trained by means of some process that modifies its weights. If the training is successful, application of a set of inputs to the network produces the desired set of outputs.
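One way statistical training can avoid local minima is to make random weight changes and occasionally accept a change even when it increases the error, with a probability that shrinks as an artificial temperature is lowered. The Boltzmann-style acceptance rule below is one common form; the step size and temperature schedule are illustrative assumptions.

```python
import math, random

def statistical_step(weights, error_fn, temperature, rng):
    # Perturb one randomly chosen weight. Keep the change if the error
    # drops, and occasionally keep it anyway (Boltzmann rule) so the
    # search can climb out of a local minimum.
    before = error_fn(weights)
    i = rng.randrange(len(weights))
    old = weights[i]
    weights[i] += rng.uniform(-0.5, 0.5)
    delta = error_fn(weights) - before
    if delta > 0 and rng.random() >= math.exp(-delta / temperature):
        weights[i] = old  # reject the uphill move
    return weights
```

Repeating this step while gradually lowering the temperature lets the weights settle toward a low-error configuration without becoming permanently trapped in the first minimum encountered.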