The activation function of a node in an artificial neural network is a function that calculates the output of the node based on its inputs and the weights on those inputs. Nontrivial problems can be solved only using a nonlinear activation function. Modern activation functions include the logistic (sigmoid) function, used in the 2012 speech recognition model developed by Hinton et al.; the ReLU, used in the 2012 AlexNet computer vision model and in the 2015 ResNet model; and the GELU, a smooth version of the ReLU, which was used in the 2018 BERT model.
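As a minimal sketch of this definition (illustrative inputs and weights, not taken from any particular model), a node's output is a nonlinear activation, here ReLU, applied to the weighted sum of its inputs:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: max(0, z)."""
    return np.maximum(0.0, z)

def node_output(inputs, weights, bias, activation=relu):
    """Output of one node: the activation applied to the weighted sum of its inputs."""
    return activation(np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.2, 3.0])   # illustrative inputs
w = np.array([0.8, 0.1, -0.4])   # illustrative weights on those inputs
print(node_output(x, w, bias=0.2))
```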
Aside from their empirical performance, activation functions also have different mathematical properties, such as nonlinearity, range, and order of continuity.
These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in variational autoencoders.
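For example (a minimal sketch, not tied to any specific variational autoencoder), softplus maps any real-valued network output to a strictly positive number, which is exactly what a variance parameter requires:

```python
import numpy as np

def softplus(z):
    """Softplus log(1 + e^z), written in a numerically stable form; output is always > 0."""
    return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))

raw = np.array([-5.0, 0.0, 5.0])   # unconstrained network outputs
print(softplus(raw))               # strictly positive values usable as variances
```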
The most common activation functions can be divided into three categories: ridge functions, radial functions and fold functions.
An activation function $f$ is saturating if $\lim_{|v|\to\infty} |\nabla f(v)| = 0$; it is nonsaturating otherwise. Non-saturating activation functions, such as ReLU, may be better than saturating ones, because networks using them are less likely to suffer from the vanishing gradient problem.
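A small numerical check (a sketch using the logistic function and ReLU defined in the table below) shows the effect: the logistic gradient shrinks toward zero as the input grows, while the ReLU gradient does not.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # vanishes as |z| grows: saturating

def relu_grad(z):
    return (z > 0).astype(float)    # stays 1 for positive z: non-saturating

z = np.array([0.0, 5.0, 20.0])
print(sigmoid_grad(z))   # [0.25, ~0.0066, ~2e-9]
print(relu_grad(z))      # [0., 1., 1.]
```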
Main article: Ridge function
Ridge functions are multivariate functions acting on a linear combination of the input variables. Commonly used examples include:
- Linear activation: $\phi(\mathbf{v}) = a + \mathbf{v}'\mathbf{b}$,
- ReLU activation: $\phi(\mathbf{v}) = \max(0, a + \mathbf{v}'\mathbf{b})$,
- Heaviside activation: $\phi(\mathbf{v}) = 1$ if $a + \mathbf{v}'\mathbf{b} > 0$, $0$ otherwise,
- Logistic activation: $\phi(\mathbf{v}) = (1 + \exp(-a - \mathbf{v}'\mathbf{b}))^{-1}$.
In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. In its simplest form, this function is binary—that is, either the neuron is firing or not. Neurons also cannot fire faster than a certain rate, motivating sigmoid activation functions whose range is a finite interval.
The function looks like $\phi(\mathbf{v}) = U(a + \mathbf{v}'\mathbf{b})$, where $U$ is the Heaviside step function.
If a line has a positive slope, on the other hand, it may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form $\phi(\mathbf{v}) = a + \mathbf{v}'\mathbf{b}$.
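Both forms act on the input only through the scalar $a + \mathbf{v}'\mathbf{b}$; the sketch below (illustrative values) makes that explicit for the Heaviside and linear cases.

```python
import numpy as np

def heaviside_ridge(v, a, b):
    """phi(v) = U(a + v'b), with U the Heaviside step function (taken as 0 at the origin here)."""
    return np.heaviside(a + np.dot(v, b), 0.0)

def linear_ridge(v, a, b):
    """phi(v) = a + v'b: output grows linearly with the input current."""
    return a + np.dot(v, b)

v = np.array([1.0, -2.0, 0.5])    # illustrative input vector
b = np.array([0.3, 0.1, -0.7])    # illustrative weight vector
print(heaviside_ridge(v, a=0.1, b=b), linear_ridge(v, a=0.1, b=b))
```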
Main article: Radial function
A special class of activation functions known as radial basis functions (RBFs) are used in RBF networks, which are extremely efficient as universal function approximators. These activation functions can take many forms, but they are usually found as one of the following:
- Gaussian: $\phi(\mathbf{v}) = \exp\left(-\frac{\|\mathbf{v} - \mathbf{c}\|^2}{2\sigma^2}\right)$,
- Multiquadratics: $\phi(\mathbf{v}) = \sqrt{\|\mathbf{v} - \mathbf{c}\|^2 + a^2}$,
- Inverse multiquadratics: $\phi(\mathbf{v}) = \left(\|\mathbf{v} - \mathbf{c}\|^2 + a^2\right)^{-1/2}$,
where $\mathbf{c}$ is the vector representing the function center and $a$ and $\sigma$ are parameters affecting the spread of the radius.
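As a sketch (Gaussian case only, illustrative values), such an activation depends on the input only through its distance from the center $\mathbf{c}$:

```python
import numpy as np

def gaussian_rbf(v, c, sigma):
    """Gaussian RBF: exp(-||v - c||^2 / (2 sigma^2)); the value depends only on the distance to c."""
    return np.exp(-np.sum((v - c) ** 2) / (2.0 * sigma ** 2))

v = np.array([1.0, 2.0])
c = np.array([0.0, 0.0])              # function center
print(gaussian_rbf(v, c, sigma=1.5))  # same value for any v at the same distance from c
```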
Main article: Fold function
Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. In multiclass classification the softmax activation is often used.
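As a sketch of such aggregation (illustrative values; the softmax case is spelled out after the second table below), a fold maps a whole window of inputs to a single value:

```python
import numpy as np

feature_map = np.array([[0.2, 1.5],
                        [-0.3, 0.7]])   # illustrative 2x2 pooling window

print(feature_map.max())    # max-pooling: fold the window to its strongest response
print(feature_map.mean())   # average-pooling: fold the window to its mean response
print(feature_map.min())    # min-pooling: fold the window to its weakest response
```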
The following table compares the properties of several activation functions that are functions of one fold x from the previous layer or layers:
| Name | Function, $f(x)$ | Derivative of $f$, $f'(x)$ | Range | Order of continuity |
|---|---|---|---|---|
| Logistic, sigmoid, or soft step | $\frac{1}{1 + e^{-x}}$ | $f(x)\,(1 - f(x))$ | $(0, 1)$ | $C^\infty$ |
| Hyperbolic tangent (tanh) | $\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$ | $1 - f(x)^2$ | $(-1, 1)$ | $C^\infty$ |
| Soboleva modified hyperbolic tangent (smht) | $\operatorname{smht}(x) = \frac{e^{ax} - e^{-bx}}{e^{cx} + e^{-dx}}$ |  | $(-1, 1)$ | $C^\infty$ |
| Rectified linear unit (ReLU) | $\max(0, x)$ | $0$ if $x < 0$; $1$ if $x > 0$ | $[0, \infty)$ | $C^0$ |
| Gaussian Error Linear Unit (GELU) | $x\,\Phi(x)$, where $\Phi$ is the standard normal CDF | $\Phi(x) + x\,\varphi(x)$, where $\varphi$ is the standard normal PDF | $(-0.17\ldots, \infty)$ | $C^\infty$ |
| Exponential linear unit (ELU) | $\alpha(e^{x} - 1)$ if $x \le 0$; $x$ if $x > 0$ | $\alpha e^{x}$ if $x < 0$; $1$ if $x > 0$ | $(-\alpha, \infty)$ | $C^1$ if $\alpha = 1$, otherwise $C^0$ |
| Scaled exponential linear unit (SELU) | $\lambda\alpha(e^{x} - 1)$ if $x < 0$; $\lambda x$ if $x \ge 0$, with $\lambda \approx 1.0507$ and $\alpha \approx 1.6733$ | $\lambda\alpha e^{x}$ if $x < 0$; $\lambda$ if $x \ge 0$ | $(-\lambda\alpha, \infty)$ | $C^0$ |
| Leaky rectified linear unit (Leaky ReLU) | $0.01x$ if $x < 0$; $x$ if $x \ge 0$ | $0.01$ if $x < 0$; $1$ if $x \ge 0$ | $(-\infty, \infty)$ | $C^0$ |
| Parametric rectified linear unit (PReLU) | $\alpha x$ if $x < 0$; $x$ if $x \ge 0$, with learnable parameter $\alpha$ | $\alpha$ if $x < 0$; $1$ if $x \ge 0$ | $(-\infty, \infty)$ | $C^0$ |
| Sigmoid linear unit (SiLU, sigmoid shrinkage, SiL, or Swish-1) | $\frac{x}{1 + e^{-x}}$ | $\frac{1 + e^{-x} + x e^{-x}}{(1 + e^{-x})^2}$ | $[-0.278\ldots, \infty)$ | $C^\infty$ |
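The tabulated derivatives can be checked numerically; the sketch below implements one entry (GELU) and compares a centered finite-difference slope with the closed-form derivative at one illustrative point.

```python
from math import erf, exp, sqrt, pi

def gelu(x):
    """GELU: x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + erf(x / sqrt(2.0)))

def gelu_grad(x):
    """Tabulated derivative: Phi(x) + x * phi(x), with phi the standard normal PDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0))) + x * exp(-x * x / 2.0) / sqrt(2.0 * pi)

h, x0 = 1e-6, 1.0
finite_diff = (gelu(x0 + h) - gelu(x0 - h)) / (2.0 * h)
print(finite_diff, gelu_grad(x0))   # both approximately 1.083
```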
The following table lists activation functions that are not functions of a single fold x from the previous layer or layers:
| Name | Equation, $f_i(\vec{x})$ | Derivatives, $\frac{\partial f_i(\vec{x})}{\partial x_j}$ | Range | Order of continuity |
|---|---|---|---|---|
| Softmax | $\frac{e^{x_i}}{\sum_{j=1}^{J} e^{x_j}}$ for $i = 1, \ldots, J$ | $f_i(\vec{x})\,(\delta_{ij} - f_j(\vec{x}))$ | $(0, 1)$ | $C^\infty$ |
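A sketch of the softmax and its Jacobian (illustrative input), showing that each output depends on every component of the input vector rather than on a single fold:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: exponentiate (after shifting by the max) and normalize."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def softmax_jacobian(x):
    """Partial derivatives d f_i / d x_j = f_i (delta_ij - f_j), as tabulated above."""
    f = softmax(x)
    return np.diag(f) - np.outer(f, f)

x = np.array([1.0, 2.0, 3.0])
print(softmax(x))            # a probability vector: entries in (0, 1), summing to 1
print(softmax_jacobian(x))   # full J x J matrix; no entry is zero in general
```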
Main article: Quantum function
In quantum neural networks programmed on gate-model quantum computers, based on quantum perceptrons instead of variational quantum circuits, the non-linearity of the activation function can be implemented without needing to measure the output of each perceptron at each layer. The quantum properties loaded within the circuit, such as superposition, can be preserved by creating the Taylor series of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a desired approximation degree. Because of the flexibility of such quantum circuits, they can be designed to approximate any classical activation function.
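The quantum circuits themselves are outside the scope of a short sketch, but the underlying approximation idea, truncating a Taylor series at a chosen highest power of the argument, can be illustrated classically (tanh is used here purely as an example):

```python
import numpy as np

# Taylor coefficients of tanh around 0, indexed by (odd) power
TANH_COEFFS = {1: 1.0, 3: -1.0 / 3.0, 5: 2.0 / 15.0, 7: -17.0 / 315.0}

def tanh_taylor(x, degree):
    """Truncated Taylor series of tanh up to the given degree: higher degree, better approximation."""
    return sum(c * x ** k for k, c in TANH_COEFFS.items() if k <= degree)

x = 0.5
for d in (1, 3, 5, 7):
    print(d, tanh_taylor(x, d), np.tanh(x))   # approximation approaches tanh(0.5) ≈ 0.462
```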