Comparison of ReLU and linear saturated activation functions in neural network for universal approximation
The result's identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216275%3A25530%2F19%3A39915451" target="_blank" >RIV/00216275:25530/19:39915451 - isvavai.cz</a>
Result on the web
<a href="http://dx.doi.org/10.1109/PC.2019.8815057" target="_blank" >http://dx.doi.org/10.1109/PC.2019.8815057</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1109/PC.2019.8815057" target="_blank" >10.1109/PC.2019.8815057</a>
Alternative languages
Result language
English
Original language name
Comparison of ReLU and linear saturated activation functions in neural network for universal approximation
Original language description
Activation functions used in hidden layers directly affect the possibilities for describing nonlinear systems with a feedforward neural network. Furthermore, linear-based activation functions are less computationally demanding than their nonlinear alternatives. In addition, feedforward neural networks with linear-based activation functions can be used advantageously for the control of nonlinear systems, as shown in the authors' previous publications. This paper compares two types of linear-based functions - the symmetric linear saturated function and the rectified linear unit (ReLU) function - as activation functions of a feedforward neural network used for nonlinear system approximation. Topologies with one hidden layer and a defined range of hidden-layer neuron counts are used. Strict experimental conditions are enforced: the Levenberg-Marquardt algorithm is used for training and the Nguyen-Widrow algorithm for the initialization of weights and biases. Three benchmark systems are selected as nonlinear plants for approximation, serving as a repeatable source of test data. The training data are obtained by computing each plant's output as a response to a specified colored input signal. The comparison is based on the convergence speed of training for a fixed value of the error function, and on the performance over a constant number of epochs. The experiments reveal only small differences between the performance of the two activation functions. Although the symmetric linear saturated activation function yields a lower median of the final error function value across all tested neuron counts, the ReLU function also appears suitable as an activation function for nonlinear system modeling.
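As a minimal sketch of the two activation functions under comparison, the Python snippet below implements their standard definitions - satlins(x) = max(-1, min(1, x)) for the symmetric linear saturated function and relu(x) = max(0, x) - together with an assumed single-hidden-layer forward pass. The topology, sample weights, and function names are illustrative assumptions, not the authors' code; the paper's Levenberg-Marquardt training and Nguyen-Widrow initialization are not reproduced here.

import numpy as np

def satlins(x):
    # Symmetric linear saturated function: identity on [-1, 1],
    # saturating at -1 and +1 outside that interval.
    return np.clip(x, -1.0, 1.0)

def relu(x):
    # Rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2, act):
    # Assumed standard one-hidden-layer feedforward network:
    # y = W2 * act(W1 * x + b1) + b2, with a linear output layer.
    return W2 @ act(W1 @ x + b1) + b2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 1, 5, 1  # illustrative topology, not from the paper
    W1 = rng.standard_normal((n_hidden, n_in))
    b1 = rng.standard_normal(n_hidden)
    W2 = rng.standard_normal((n_out, n_hidden))
    b2 = rng.standard_normal(n_out)
    x = np.array([0.3])
    for act in (satlins, relu):
        # Same weights, two activation functions: the basis of the comparison.
        print(act.__name__, forward(x, W1, b1, W2, b2, act))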
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
—
OECD FORD branch
20205 - Automation and control systems
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Publication year
2019
Confidentiality
S - Complete and true data about the project are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Proceedings of the 2019 22nd International Conference on Process Control, PC 2019
ISBN
978-1-7281-3758-2
ISSN
—
e-ISSN
—
Number of pages
6
Pages from-to
146-151
Publisher name
IEEE (Institute of Electrical and Electronics Engineers)
Place of publication
New York
Event location
Štrbské Pleso
Event date
Jun 11, 2019
Type of event by nationality
EUR - European event
UT code for WoS article
—