Rprop
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm. The algorithm was created by Martin Riedmiller and Heinrich Braun in 1992.

Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each "weight". For each weight, if there was a sign change of the partial derivative of the total error function compared to the last iteration, the update value for that weight is multiplied by a factor η−, where 0 < η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor η+, where η+ > 1. The update values are calculated for each weight in this manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5.

Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.

RPROP is a batch update algorithm.
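
That is, the gradient is computed over the complete training set before each step, so one Rprop update corresponds to one epoch. A usage sketch, reusing rprop_update from above and a toy quadratic error as a stand-in for a network's summed dE/dw (all names and values here are illustrative):

    import numpy as np

    target = np.linspace(-1.0, 1.0, 10)      # toy problem: fit w to a target
    def total_error_grad(w):
        # Gradient of the total squared error over all "patterns".
        return 2.0 * (w - target)

    w = np.zeros(10)
    step = np.full_like(w, 0.1)              # common choice of initial step size
    prev_grad = np.zeros_like(w)

    for epoch in range(100):
        grad = total_error_grad(w)           # one gradient for the full batch
        w, step, prev_grad = rprop_update(w, grad, prev_grad, step)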

Variations

Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned distinct names to them and introduced a further variant in their paper Improving the Rprop Learning Algorithm:
  1. RPROP+ is defined in A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm.
  2. RPROP− is defined in Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms. It is RPROP+ with weight-backtracking removed.
  3. iRPROP− is defined in Rprop – Description and Implementation Details and was reinvented by Igel and Hüsken. It is the most popular variant, the simplest, and in many cases the most efficient (see the sketch after this list).
  4. iRPROP+ is defined in Improving the Rprop Learning Algorithm.
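
To illustrate how small the differences between the variants are, here is a sketch of the iRPROP− modification under the same assumptions as the earlier code: it is identical to the plain rule except that the stored derivative is zeroed wherever the sign flipped, so that weight is left unchanged in that iteration and the next sign comparison is neutral. Names and default values are again illustrative:

    import numpy as np

    def irprop_minus_update(w, grad, prev_grad, step,
                            eta_plus=1.2, eta_minus=0.5,
                            step_min=1e-6, step_max=50.0):
        grad = grad.copy()
        same = grad * prev_grad > 0
        flipped = grad * prev_grad < 0
        step = np.where(same, np.minimum(step * eta_plus, step_max), step)
        step = np.where(flipped, np.maximum(step * eta_minus, step_min), step)
        grad[flipped] = 0.0                  # the iRPROP- change: forget the old sign
        w = w - np.sign(grad) * step         # sign(0) = 0, so flipped weights stay put
        return w, step, grad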
