An artificial neural network architecture for interpolation, function approximation, time series modeling and control applications
Title: An artificial neural network architecture for interpolation, function approximation, time series modeling and control applications.
Name(s): Luebbers, Paul Glenn; Florida Atlantic University (degree grantor); Pandya, Abhijit S. (thesis advisor); Sudhakar, Raghavan (thesis advisor); College of Engineering and Computer Science; Department of Computer and Electrical Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Issuance: monographic
Date Issued: 1994
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 338 p.
Language(s): English
Summary: Two new artificial neural network architectures, the Power Net (PWRNET) and the Orthogonal Power Net (OPWRNET), have been developed. Based on the Taylor series expansion of the hyperbolic tangent function, these architectures can approximate multi-input, multi-layer artificial neural networks while requiring only a single layer of hidden nodes, giving a compact representation with only one layer of hidden-layer weights. The resulting trained network can be expressed as a polynomial function of the input nodes, so applications that are intractable for conventional artificial neural networks can be developed with these architectures. The degree of nonlinearity of the network is controlled directly by the number of hidden-layer nodes, which helps avoid the over-fitting problems that restrict generalization. The network is trained with the familiar error back-propagation algorithm; other learning algorithms may also be applied, and since only one hidden layer must be trained, training performance is expected to be comparable to or better than that of conventional multi-layer feed-forward networks. The new architecture is explored by applying OPWRNET to classification, function approximation, and interpolation problems, where its performance is comparable to that of multi-layer perceptrons. OPWRNET was also applied to the prediction of noisy time series and the identification of nonlinear systems; for system identification tasks, the trained networks can be expressed directly as discrete nonlinear recursive polynomials. This characteristic was exploited in the development of two new neural-network-based nonlinear control algorithms: the Linearized Self-Tuning Controller (LSTC) and a variation of a Neural Adaptive Controller (NAC). These controllers are compared to a linear self-tuning controller and a neural-network-based Inverse Model Controller, and their advantages are discussed. (An illustrative sketch of the polynomial-network idea follows this record.)
Identifier: 12357 (digitool), FADT12357 (IID), fau:9258 (fedora)
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): College of Engineering and Computer Science; Thesis (Ph.D.)--Florida Atlantic University, 1994.
Subject(s): Neural networks (Computer science)
Held by: Florida Atlantic University Libraries
Persistent Link to This Record: http://purl.flvc.org/fcla/dt/12357
Sublocation: Digital Library
Use and Reproduction: Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.
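
Illustrative sketch: the core idea named in the summary is that truncating the Taylor series of the hyperbolic tangent, tanh(x) = x - x^3/3 + 2x^5/15 - 17x^7/315 + ..., makes every hidden node a polynomial in a linear combination of the inputs, so the whole single-hidden-layer network is itself a polynomial of the inputs; for system identification this yields a discrete recursive polynomial of the form y(k) = P(y(k-1), ..., y(k-n), u(k-1), ..., u(k-m)). The Python below is not the thesis's code: the function names, truncation depth, and network shapes are assumptions made purely to illustrate the construction.

import numpy as np

# Truncated Taylor series of tanh about 0: x - x^3/3 + 2x^5/15 - 17x^7/315 + ...
# (coefficients are the standard tanh series; 'terms' picks the truncation depth)
def tanh_taylor(x, terms=3):
    coeffs = [1.0, -1.0 / 3.0, 2.0 / 15.0, -17.0 / 315.0][:terms]
    return sum(c * x ** (2 * k + 1) for k, c in enumerate(coeffs))

# Single hidden layer with the polynomial "activation": because each hidden
# node applies a polynomial to a linear combination of the inputs, the full
# network output is a polynomial in the inputs (degree up to 2*terms - 1).
def power_net(x, W_hidden, w_out, terms=3):
    h = tanh_taylor(W_hidden @ x, terms)  # hidden-node outputs, elementwise
    return w_out @ h                      # linear output node

rng = np.random.default_rng(0)
x = rng.normal(size=2)              # two inputs (illustrative)
W_hidden = rng.normal(size=(4, 2))  # four hidden nodes (illustrative)
w_out = rng.normal(size=4)
print(power_net(x, W_hidden, w_out))

Because tanh_taylor is differentiable, gradient-based training such as error back-propagation applies unchanged; and once trained, the weights can be multiplied out into explicit polynomial coefficients, which is the property the summary exploits for system identification and the LSTC/NAC controllers.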