Back propagation with expected source values

Research output: Contribution to journal › Article › peer-review

Abstract

The back propagation learning rule converges significantly faster if expected values of source units are used for updating weights. The expected value of a unit can be approximated as the sum of the output of the unit and its error term. Results from numerous simulations demonstrate the comparative advantage of the new rule.
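
A minimal sketch of how such an update might look, assuming a single-hidden-layer network with sigmoid units; the "expected value" of each hidden (source) unit is approximated, as the abstract describes, by its output plus its back-propagated error term. All names and the network shape here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, W_hid, W_out, lr=0.1):
    """One back-propagation step in which the hidden-to-output weight
    update uses the expected value of each hidden (source) unit,
    approximated as its output plus its error term."""
    # Forward pass
    h = sigmoid(W_hid @ x)          # hidden unit outputs
    y = sigmoid(W_out @ h)          # output unit activations

    # Standard error terms (deltas) for sigmoid units
    delta_out = (target - y) * y * (1.0 - y)
    delta_hid = (W_out.T @ delta_out) * h * (1.0 - h)

    # Expected value of each hidden source unit: output + error term
    h_expected = h + delta_hid

    # Output-layer weights are updated against the expected source values
    W_out += lr * np.outer(delta_out, h_expected)
    # Input units carry no error term, so the usual update applies
    W_hid += lr * np.outer(delta_hid, x)
    return W_hid, W_out
```

The intuition, following the abstract, is that a source unit's output plus its error term estimates the value that unit is being driven toward, so the downstream weights are adjusted against where the source unit is heading rather than where it currently is.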

Original language: English (US)
Pages (from-to): 615-618
Number of pages: 4
Journal: Neural Networks
Volume: 4
Issue number: 5
DOIs
State: Published - 1991

Keywords

  • Back propagation
  • Neural networks
  • Supervised learning
