Neural Network Independence Properties with Applications to Adaptive Control

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Neural networks form a general-purpose architecture for machine learning and parameter identification. The simplest neural network consists of a single hidden layer connected to a linear output layer. It is often assumed that the components of the hidden layer correspond to linearly independent functions, but proofs of this are known only for a few specialized classes of network activation functions. This paper shows that for a wide class of activation functions, including most of the activation functions commonly used in neural network libraries, almost all choices of hidden layer parameters lead to linearly independent functions. These linear independence properties are then used to derive sufficient conditions for persistence of excitation, a condition commonly used to ensure parameter convergence in adaptive control.
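The abstract's claim can be illustrated numerically. The sketch below (illustrative only, not code from the paper) builds the hidden-layer functions phi_i(x) = tanh(w_i x + b_i) of a single-hidden-layer network for randomly drawn parameters (w_i, b_i) and checks that the matrix of their samples has full column rank, consistent with the functions being linearly independent for almost all parameter choices:

```python
import numpy as np

# Single-hidden-layer network with tanh activation: sample random
# hidden-layer parameters (w_i, b_i) and evaluate the hidden units
# phi_i(x) = tanh(w_i * x + b_i) on a grid of scalar inputs.
rng = np.random.default_rng(0)
n_units = 8
w = rng.standard_normal(n_units)  # input weights
b = rng.standard_normal(n_units)  # biases
x = np.linspace(-3.0, 3.0, 200)   # evaluation grid

# Rows index sample points, columns index hidden units.
Phi = np.tanh(np.outer(x, w) + b)

# Full column rank of the sample matrix indicates the sampled
# hidden-layer functions are linearly independent on the grid.
rank = np.linalg.matrix_rank(Phi)
print(rank, n_units)
```

For almost all random draws the rank equals the number of hidden units, matching the paper's "almost all choices of hidden layer parameters" statement; a degenerate draw (e.g. two units with identical weights and biases) would drop the rank.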

Original language: English (US)
Title of host publication: 2022 IEEE 61st Conference on Decision and Control, CDC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3365-3370
Number of pages: 6
ISBN (Electronic): 9781665467612
DOIs
State: Published - 2022
Externally published: Yes
Event: 61st IEEE Conference on Decision and Control, CDC 2022 - Cancun, Mexico
Duration: Dec 6 2022 - Dec 9 2022

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2022-December
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 61st IEEE Conference on Decision and Control, CDC 2022
Country/Territory: Mexico
City: Cancun
Period: 12/6/22 - 12/9/22

Bibliographical note

Publisher Copyright:
© 2022 IEEE.
