Delta rule

  1. Derivation of the delta rule

  2. See also

  3. References


In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network.[1] It is a special case of the more general backpropagation algorithm. For a neuron $j$ with activation function $g(x)$, the delta rule for $j$'s $i$-th weight $w_{ji}$ is given by

$$\Delta w_{ji} = \alpha \left( t_j - y_j \right) g'(h_j) x_i ,$$

where

$\alpha$ is a small constant called the learning rate,
$g(x)$ is the neuron's activation function,
$t_j$ is the target output,
$h_j$ is the weighted sum of the neuron's inputs,
$y_j$ is the actual output, and
$x_i$ is the $i$-th input.

It holds that $h_j = \sum_i x_i w_{ji}$ and $y_j = g(h_j)$.
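
For illustration, the update can be written out directly from these definitions. Below is a minimal Python sketch of a single delta-rule step for one neuron; the sigmoid choice of $g$ and all names (delta_rule_step, alpha, and so on) are illustrative assumptions, not part of the rule itself.

import numpy as np

def sigmoid(h):
    # g(h) = 1 / (1 + e^{-h})
    return 1.0 / (1.0 + np.exp(-h))

def sigmoid_prime(h):
    # g'(h) = g(h) (1 - g(h))
    s = sigmoid(h)
    return s * (1.0 - s)

def delta_rule_step(w, x, t, alpha=0.1):
    # One delta-rule update for a single neuron j:
    #   Delta w_ji = alpha (t_j - y_j) g'(h_j) x_i
    h = np.dot(w, x)                                  # h_j = sum_i x_i w_ji
    y = sigmoid(h)                                    # y_j = g(h_j)
    delta_w = alpha * (t - y) * sigmoid_prime(h) * x  # vector of Delta w_ji
    return w + delta_w

# Example: one update for a neuron with three inputs
w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 2.0, -1.0])
print(delta_rule_step(w, x, t=1.0))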

The delta rule is commonly stated in simplified form for a neuron with a linear activation function as

$$\Delta w_{ji} = \alpha \left( t_j - y_j \right) x_i$$
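
With the identity activation $g(h) = h$ we have $g'(h_j) = 1$, so the update reduces to the form above, which coincides with the least-mean-squares (Widrow-Hoff) update. A short illustrative sketch, reusing the conventions of the previous example:

import numpy as np

def delta_rule_linear_step(w, x, t, alpha=0.1):
    # Linear neuron: y_j = h_j and g'(h_j) = 1, so Delta w_ji = alpha (t_j - y_j) x_i
    y = np.dot(w, x)
    return w + alpha * (t - y) * x

print(delta_rule_linear_step(np.array([0.5, -0.3]), np.array([1.0, 2.0]), t=1.0))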

While the delta rule is similar to the perceptron's update rule, the derivation is different. The perceptron uses the Heaviside step function as the activation function $g(h)$, which means that $g'(h)$ does not exist at zero and is equal to zero elsewhere, making a direct application of the delta rule impossible.

Derivation of the delta rule

The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with $j$ outputs can be measured as

$$E = \sum_j \tfrac{1}{2} \left( t_j - y_j \right)^2 .$$
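
For a concrete check of this error measure, with a target vector and an output vector the error is half the squared Euclidean distance between them; a small illustrative computation (values arbitrary):

import numpy as np

t = np.array([1.0, 0.0, 1.0])    # target outputs t_j
y = np.array([0.8, 0.2, 0.6])    # actual outputs y_j
E = 0.5 * np.sum((t - y) ** 2)   # E = sum_j 1/2 (t_j - y_j)^2
print(E)                         # 0.12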

In this case, we wish to move through "weight space" of the neuron (the space of all possible values of all of the neuron's weights) in proportion to the gradient of the error function with respect to each weight. In order to do that, we calculate the partial derivative of the error with respect to each weight. For the $i$-th weight, this derivative can be written as

$$\frac{\partial E}{\partial w_{ji}} .$$

Because we are only concerning ourselves with the $j$-th neuron, we can substitute the error formula above while omitting the summation:

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial \left( \tfrac{1}{2} \left( t_j - y_j \right)^2 \right)}{\partial w_{ji}}$$

Next we use the chain rule to split this into two derivatives:

$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial \left( \tfrac{1}{2} \left( t_j - y_j \right)^2 \right)}{\partial y_j} \, \frac{\partial y_j}{\partial w_{ji}}$$

To find the left derivative, we simply apply the chain rule:

$$\frac{\partial E}{\partial w_{ji}} = - \left( t_j - y_j \right) \frac{\partial y_j}{\partial w_{ji}}$$

To find the right derivative, we again apply the chain rule, this time differentiating with respect to the total input to neuron $j$, $h_j$:

$$\frac{\partial E}{\partial w_{ji}} = - \left( t_j - y_j \right) \frac{\partial y_j}{\partial h_j} \, \frac{\partial h_j}{\partial w_{ji}}$$

Note that the output of the $j$-th neuron, $y_j$, is just the neuron's activation function $g$ applied to the neuron's input $h_j$. We can therefore write the derivative of $y_j$ with respect to $h_j$ simply as $g$'s first derivative:

$$\frac{\partial E}{\partial w_{ji}} = - \left( t_j - y_j \right) g'(h_j) \, \frac{\partial h_j}{\partial w_{ji}}$$

Next we rewrite $h_j$ in the last term as the sum over all $k$ weights of each weight $w_{jk}$ times its corresponding input $x_k$:

$$\frac{\partial E}{\partial w_{ji}} = - \left( t_j - y_j \right) g'(h_j) \, \frac{\partial \left( \sum_k x_k w_{jk} \right)}{\partial w_{ji}}$$

Because we are only concerned with the $i$-th weight, the only term of the summation that is relevant is $x_i w_{ji}$. Clearly,

$$\frac{\partial \left( x_i w_{ji} \right)}{\partial w_{ji}} = x_i ,$$

giving us our final equation for the gradient:

$$\frac{\partial E}{\partial w_{ji}} = - \left( t_j - y_j \right) g'(h_j) \, x_i$$

As noted above, gradient descent tells us that our change for each weight should be proportional to the gradient. Choosing a proportionality constant $\alpha$ and eliminating the minus sign to enable us to move the weight in the negative direction of the gradient to minimize error, we arrive at our target equation:

$$\Delta w_{ji} = \alpha \left( t_j - y_j \right) g'(h_j) \, x_i .$$
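
The derived gradient can also be verified numerically. The sketch below (again assuming a sigmoid $g$ and using illustrative names) compares the analytic gradient $-(t_j - y_j)\,g'(h_j)\,x_i$ with a central finite-difference estimate of $\partial E / \partial w_{ji}$:

import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def error(w, x, t):
    # E = 1/2 (t_j - y_j)^2 for a single sigmoid neuron
    y = sigmoid(np.dot(w, x))
    return 0.5 * (t - y) ** 2

w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 2.0, -1.0])
t = 1.0

h = np.dot(w, x)
y = sigmoid(h)
analytic = -(t - y) * y * (1.0 - y) * x   # -(t_j - y_j) g'(h_j) x_i, with g'(h) = y (1 - y)

eps = 1e-6
numeric = np.zeros_like(w)
for i in range(len(w)):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    numeric[i] = (error(w_plus, x, t) - error(w_minus, x, t)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))  # True: the two gradients agree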

See also

  • Stochastic gradient descent
  • Backpropagation
  • Rescorla–Wagner model - the origin of the delta rule

References

1. ^ Russell, Ingrid. "The Delta Rule". University of Hartford. http://uhavax.hartford.edu/compsci/neural-networks-delta-rule.html. Archived on 4 March 2016 at https://web.archive.org/web/20160304032228/http://uhavax.hartford.edu/compsci/neural-networks-delta-rule.html. Retrieved 5 November 2012.

