Ordinary neural networks consist of neurons that have learnable weights and biases. The input is a single vector (of features) that is transformed through a number of hidden layers. Each hidden layer is made up of a set of neurons, where each neuron is fully connected to all neurons in the previous layer. In the case of Convolutional Neural Networks (CNNs), the inputs are images, i.e. the input has a third dimension, depth, in addition to width and height.
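As a minimal sketch of the difference, consider a hypothetical 32x32 RGB image: an ordinary neural network would flatten it into one long feature vector, while a ConvNet keeps it as a 3D volume (the names `image` and `flat` here are illustrative):

```python
import numpy as np

# A hypothetical 32x32 RGB image: width x height x depth (3 color channels).
image = np.zeros((32, 32, 3))

# An ordinary neural network flattens this volume into a single
# feature vector before the first fully connected layer.
flat = image.reshape(-1)

print(image.shape)  # 3D input volume: (32, 32, 3)
print(flat.shape)   # 32 * 32 * 3 = 3072 features: (3072,)
```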
ConvNets consist of layers, and every layer in a ConvNet transforms the 3D input volume (image). There are three types of layers used:
The amount of error in the voting (right predictions minus wrong predictions) tells us how good the features and weights are. Backpropagation is used to assign optimal weights to the neurons.
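A minimal sketch of the idea behind backpropagation, assuming a single linear neuron with squared-error loss (the names `x`, `w`, `b`, and `lr` are illustrative, not from the text): the error is measured, the gradient of the loss with respect to each weight is computed, and the weights are nudged in the direction that reduces the error.

```python
import numpy as np

x = np.array([1.0, 2.0])   # input features
w = np.array([0.5, -0.5])  # learnable weights
b = 0.0                    # learnable bias
target = 1.0
lr = 0.1                   # learning rate

pred = w @ x + b           # forward pass
error = pred - target      # how far off the prediction is

# Backpropagation: gradient of the loss 0.5 * error**2
# with respect to the weights and bias.
grad_w = error * x
grad_b = error

w -= lr * grad_w           # update weights to reduce the error
b -= lr * grad_b

new_pred = w @ x + b
print(abs(new_pred - target) < abs(error))  # the error shrinks
```

Repeating this update over many examples is what drives the weights toward their optimum.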
Normalization layer: Sometimes, a normalization layer is also used in ConvNets. Usually, it is the Rectified Linear Unit (ReLU). ReLU,
max(0, x), replaces the negative values in a matrix with 0. ReLU is just a non-linearity, applied as in ordinary neural networks.
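The elementwise ReLU described above can be sketched in a few lines of NumPy (the helper name `relu` is illustrative):

```python
import numpy as np

def relu(x):
    # max(0, x) applied elementwise: negative entries become 0,
    # non-negative entries pass through unchanged.
    return np.maximum(0, x)

m = np.array([[-1.0, 2.0],
              [ 3.0, -4.0]])
print(relu(m))  # [[0. 2.]
                #  [3. 0.]]
```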