Real-time progressive learning: accumulate knowledge from control with neural-network-based selective memory
Memory, as the basis of learning, governs the storage, updating, and forgetting of knowledge and thus determines the efficiency of learning. Built on a memory mechanism, a radial basis function neural network (RBFNN)-based learning control scheme named real-time progressive learning (RTPL) is proposed to learn the unknown dynamics of a system with guaranteed stability and closed-loop performance. Instead of the Lyapunov-based weight update law of conventional neural network learning control (NNLC), which concentrates mainly on stability and control performance, RTPL uses the selective memory recursive least squares (SMRLS) algorithm to update the weights of the neural network and thereby achieves the following merits: 1) improved learning speed without filtering; 2) robustness to the hyperparameter settings of the neural network; 3) good generalization ability, i.e., reuse of learned knowledge across different tasks; and 4) guaranteed learning performance under parameter perturbation. Moreover, RTPL continuously accumulates knowledge as a result of its reasonably allocated memory, whereas NNLC may gradually forget knowledge it has already learned. Theoretical analysis and simulation studies demonstrate the effectiveness of RTPL.
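To make the abstract's core idea concrete, the sketch below shows an RBFNN whose output weights are updated online by plain recursive least squares (RLS), the algorithm that SMRLS extends. This is a minimal illustration, not the paper's method: the selective-memory mechanism of SMRLS, and all stability machinery, are omitted, and the RBF centers, widths, and target function are arbitrary choices for demonstration.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis activations for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

class RLSApproximator:
    """Online estimation of RBFNN output weights via recursive least squares.

    Illustrative only: RTPL's SMRLS adds a selective-memory mechanism on top
    of RLS; this sketch shows the underlying RLS update alone."""
    def __init__(self, n_features, p0=1e3, lam=1.0):
        self.w = np.zeros(n_features)      # network output weights
        self.P = np.eye(n_features) * p0   # inverse-covariance matrix
        self.lam = lam                     # forgetting factor (1.0 = no forgetting)

    def update(self, phi, y):
        # Standard RLS gain computation and weight correction
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        err = y - self.w @ phi             # prediction error before update
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err

# Learn an unknown scalar map (here sin(pi*x), chosen arbitrarily) online.
centers = np.linspace(-1.0, 1.0, 15)
model = RLSApproximator(len(centers))
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    phi = rbf_features(x, centers, 0.2)
    model.update(phi, np.sin(np.pi * x))

x_test = 0.3
pred = model.w @ rbf_features(x_test, centers, 0.2)
```

After enough samples, `pred` should closely track `sin(pi * x_test)`, illustrating how least-squares weight updates accumulate a usable function approximation from streaming data.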
History
Publication status
- Published
File Version
- Accepted version
Journal
- IEEE Transactions on Neural Networks and Learning Systems
ISSN
- 1045-9227
Publisher
- IEEE
Publisher URL
External DOI
Department affiliated with
- Engineering and Design Publications
Institution
- University of Sussex
Full text available
- Yes
Peer reviewed?
- Yes