- sensor layer
- associative layer
- output neuron
For every input of the perceptron (including the bias input), there is a corresponding weight. To calculate the output of the perceptron, every input is multiplied by its corresponding weight. The weighted sum of all inputs is then computed and fed through a limiter function, which evaluates the final output of the perceptron.
The output of the neuron is formed by the activation of the output neuron, which is a function of the total input:

y = F(Σ_i w_i·x_i)
The activation function F can be linear, so that we have a linear network, or nonlinear. In this example I decided to use the threshold (signum) function:

F(s) = +1 if s ≥ 0, otherwise -1
The output of the network in this case is either +1 or -1, depending on the input. If the total input (the weighted sum of all inputs) is positive, the pattern belongs to class +1, otherwise to class -1. Because of this behavior, we can use the perceptron for classification tasks.
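The forward pass described above can be sketched in a few lines of Python. The function names (`signum`, `perceptron_output`) are my own, not from the article's source code:

```python
def signum(s):
    """Threshold (signum) activation: +1 for non-negative input, -1 otherwise."""
    return 1 if s >= 0 else -1

def perceptron_output(weights, inputs):
    """Multiply each input by its weight, sum, and feed through the limiter."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return signum(total)
```

For example, `perceptron_output([1.0, -1.0], [2.0, 1.0])` gives a positive weighted sum (2.0 - 1.0 = 1.0), so the pattern is assigned to class +1.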
Let's consider a perceptron with 2 inputs that should separate the input patterns into 2 classes. In this case the separation between the classes is a straight line, given by the equation:

w_1·x_1 + w_2·x_2 = θ
When we set x_0 = -1 and denote w_0 = θ, we can rewrite this equation in the form:

w_0·x_0 + w_1·x_1 + w_2·x_2 = 0
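The point of this rewrite is that the threshold θ becomes just another weight, so it can be trained by the same rule as the others. A small sketch (names are illustrative) showing both forms give the same classification:

```python
def classify(weights, inputs, theta):
    """Boundary test in the original form: w1*x1 + w2*x2 - theta >= 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1 if total >= 0 else -1

def classify_folded(weights, inputs, theta):
    """Same test with the threshold folded in as weight w0 on input x0 = -1."""
    aug_weights = [theta] + list(weights)
    aug_inputs = [-1.0] + list(inputs)
    total = sum(w * x for w, x in zip(aug_weights, aug_inputs))
    return 1 if total >= 0 else -1
```

Both functions compute w_1·x_1 + w_2·x_2 - θ, since w_0·x_0 = θ·(-1) = -θ.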
Here I will describe the learning method for the perceptron. It is an iterative procedure that adjusts the weights. A learning sample is presented to the network, and for each weight a new value is computed by adding a correction to the old value. The threshold is updated in the same way:

w_i(new) = w_i(old) + γ·(d - y)·x_i
where y is the output of the perceptron, d is the desired output and γ is the learning parameter.
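The whole learning procedure can be sketched as follows. This is a minimal illustration of the rule above, not the article's actual source code; it uses the augmented weight vector [θ, w_1, w_2] with x_0 = -1, and learns the logical AND function as a linearly separable example:

```python
def train_perceptron(samples, gamma=0.1, epochs=100):
    """Iteratively adjust weights with w_i <- w_i + gamma*(d - y)*x_i.

    Weights are stored as [theta, w1, w2]; the constant input x0 = -1
    lets the threshold be trained like any other weight.
    """
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        errors = 0
        for (x1, x2), d in samples:
            x = [-1.0, x1, x2]
            total = sum(w * xi for w, xi in zip(weights, x))
            y = 1 if total >= 0 else -1
            if y != d:
                errors += 1
                for i in range(len(weights)):
                    weights[i] += gamma * (d - y) * x[i]
        if errors == 0:  # every sample classified correctly
            break
    return weights

# Logical AND with classes +1 / -1 is linearly separable,
# so the procedure converges to a separating line.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
trained = train_perceptron(data)
```

Note that the correction is zero whenever the sample is already classified correctly (d = y), so only misclassified samples move the boundary.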
More about the program, including the source code, can be found on www.CodeProject.com