English translation
Learning in such a network proceeds the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector that matches the target, nothing is done. If there is an error (a difference between the output and the target), then the weights are adjusted to reduce this error. The trick is to assess the blame for an error and divide it among the contributing weights. In perceptrons, this is easy because there is only one weight between each input and the output. But in multilayer networks, there are many weights connecting each input to an output, and each of these weights contributes to more than one output.
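Here is a minimal sketch, not from the source text, of the error-driven loop described above for the easy single-layer case, where blame assignment is trivial because each weight touches exactly one input. The names train_perceptron and alpha, and the threshold activation, are illustrative assumptions.

```python
import random

def train_perceptron(examples, n_inputs, alpha=0.1, epochs=50):
    # examples: list of (input_vector, target) pairs with target in {0, 1}
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in examples:
            output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - output            # zero when output matches target
            if err != 0:                     # otherwise nothing is done
                for i in range(n_inputs):
                    # each weight sits between one input and the output,
                    # so the blame for the error is easy to divide
                    w[i] += alpha * err * x[i]
                b += alpha * err
    return w, b

# Example: learn the logical AND function
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train_perceptron(examples, 2))
```

In a multilayer network this simple loop no longer suffices, because a hidden-layer weight influences every output it feeds; dividing the blame among such weights is exactly what back-propagation addresses next.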
The back-propagation algorithm is a sensible approach to dividing the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between each target output and the output actually computed by the network. At the output layer, the weight update rule is very similar to the rule for the perceptron. There are two differences: the activation of the hidden unit a_j is used instead of the input value, and the rule contains a term for the gradient of the activation function. If Err_i is the error (T_i - O_i) at the output node, then the weight update rule for the link from unit j to unit i is
W_{j,i} ← W_{j,i} + α × a_j × Err_i × g'(in_i)
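The following sketch implements that output-layer update directly, assuming a sigmoid activation g (so g'(x) = g(x)(1 - g(x))). The function name update_output_layer, the learning rate alpha, and the example values are illustrative assumptions, not part of the source.

```python
import math

def g(x):                     # sigmoid activation function
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):               # its gradient: g'(x) = g(x) * (1 - g(x))
    return g(x) * (1.0 - g(x))

def update_output_layer(W, a_hidden, target, alpha=0.5):
    # W[j][i]: weight on the link from hidden unit j to output unit i
    # a_hidden[j]: activation a_j of hidden unit j
    n_out = len(W[0])
    for i in range(n_out):
        # weighted input to output unit i, computed from the old weights
        in_i = sum(W[j][i] * a_hidden[j] for j in range(len(a_hidden)))
        err_i = target[i] - g(in_i)          # Err_i = T_i - O_i
        for j in range(len(a_hidden)):
            # W_{j,i} <- W_{j,i} + alpha * a_j * Err_i * g'(in_i)
            W[j][i] += alpha * a_hidden[j] * err_i * g_prime(in_i)

# Illustrative values: 3 hidden units feeding 2 output units.
W = [[0.1, -0.2], [0.3, 0.1], [-0.1, 0.2]]
update_output_layer(W, a_hidden=[0.5, 0.9, 0.2], target=[1.0, 0.0])
print(W)
```

Note that the only changes from the perceptron rule appear in the inner update line: the hidden activation a_hidden[j] stands in for the raw input value, and the factor g_prime(in_i) supplies the gradient of the activation function.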