The Single-Layer Perceptron (SLP) was one of the first artificial neural networks to be developed. It classifies a set of inputs by computing a weighted sum of them, and can distinguish between two linearly separable classes. The activation function, in the case of the picture below, is a step function, so the resulting output can take only two values. A constant input, known as the bias, sets the threshold that determines the output.
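The weighted sum, step activation, and bias described above can be sketched in a few lines of Python. This is a minimal illustration, not the code referenced later in the text; the learning rate, epoch count, and AND-gate data are choices made for this example.

```python
import numpy as np

def step(z):
    # Step activation: the output can only be 0 or 1.
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    # Weights plus a bias term (the threshold), all starting at zero.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(np.dot(w, xi) + b)
            error = target - pred
            # Perceptron learning rule: nudge weights toward the target.
            w += lr * error * xi
            b += lr * error
    return w, b

# AND gate: linearly separable, so the perceptron converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [step(np.dot(w, xi) + b) for xi in X]
print(preds)  # → [0, 0, 0, 1]
```

Because the bias is trained like any other weight, the decision line can shift away from the origin, which is what lets the perceptron fit gates like AND.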
The limitation of the SLP is that it can only separate two classes with a single line. In the example below, if one blue dot were in the middle of the red dots, the training algorithm would never converge to an acceptable solution.
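The non-convergence claim can be checked directly: training a perceptron on the XOR truth table (a standard non-separable example, chosen here for illustration) never reaches an epoch with zero mistakes, because no single line separates the two classes.

```python
import numpy as np

def step(z):
    return 1 if z >= 0 else 0

# XOR: not linearly separable, so the perceptron rule cannot converge.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(100):
    mistakes = 0
    for xi, t in zip(X, y):
        pred = step(np.dot(w, xi) + b)
        if pred != t:
            # Weight updates keep oscillating instead of settling.
            w += lr * (t - pred) * xi
            b += lr * (t - pred)
            mistakes += 1
print(mistakes)  # still at least 1 mistake after 100 epochs
```

An epoch with zero mistakes would mean the current line already separates the classes, so for XOR the mistake count can never drop to zero, no matter how long training runs.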
Image source: http://ecee.colorado.edu/
The Multilayer Perceptron (MLP) overcomes the linearity limitation of the SLP, which opens up a much wider range of applications. In the picture above, the MLP is able to learn all three logic gates, including the XOR, whose two classes cannot be separated by a single line. Professor Marcelo implemented an MLP with a single hidden layer, available in a Matlab repository, which includes an example of an MLP learning the behavior of an XOR gate.
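The referenced Matlab code is not reproduced here, but the same idea, a single hidden layer trained by backpropagation to learn XOR, can be sketched in Python. The hidden-layer size, learning rate, and iteration count below are arbitrary choices for this sketch, not values from the original implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# XOR truth table: the classic case that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 8 units (an arbitrary choice for this sketch).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(20000):
    # Forward pass through hidden layer and output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Final forward pass with the trained weights.
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
preds = (out > 0.5).astype(int).ravel().tolist()
print(preds)  # should learn [0, 1, 1, 0]
```

The hidden layer lets the network compose two lines into a nonlinear boundary, which is exactly what the SLP cannot do for XOR.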
Image source: http://www.mdpi.com/