
Error Bound Analysis of Q-Function for Discounted Optimal Control Problems with Policy Iteration
Jul 14, 2017

Title: Error Bound Analysis of Q-Function for Discounted Optimal Control Problems with Policy Iteration

 Authors: Yan, PF; Wang, D; Li, HL; Liu, DR

 Author Full Names: Yan, Pengfei; Wang, Ding; Li, Hongliang; Liu, Derong

 Source: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 47 (7): 1207-1216; DOI: 10.1109/TSMC.2016.2563982; JUL 2017

 Language: English

 Abstract: In this paper, we present an error bound analysis of the Q-function in action-dependent adaptive dynamic programming for solving discounted optimal control problems of unknown discrete-time nonlinear systems. The convergence of the Q-functions generated by a policy iteration algorithm under ideal conditions is given. Considering the approximation errors of the Q-function and the control policy in the policy evaluation and policy improvement steps, we establish error bounds for the approximate Q-function at each iteration. Under the given boundedness conditions, the approximate Q-function converges to a finite neighborhood of the optimal Q-function. To implement the presented algorithm, two three-layer neural networks are employed to approximate the Q-function and the control policy, respectively. Finally, a simulation example is utilized to verify the validity of the presented algorithm.
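 The sketch below is a minimal, purely illustrative rendering of the Q-function policy iteration cycle the abstract describes (policy evaluation of Q(x,u) = U(x,u) + gamma * Q(x', mu(x')), followed by greedy policy improvement). It uses a coarse tabular discretization in place of the paper's three-layer neural-network approximators, and the dynamics, stage cost, discount factor, and grids are assumptions made only for this example, not taken from the paper, which treats the system as unknown.

import numpy as np

# Illustrative problem data (not from the paper).
GAMMA = 0.95                           # discount factor
X_GRID = np.linspace(-1.0, 1.0, 21)    # discretized state space
U_GRID = np.linspace(-1.0, 1.0, 21)    # discretized control space

def dynamics(x, u):
    # Placeholder nonlinear dynamics; the paper assumes these are unknown.
    return np.clip(0.9 * x + 0.5 * np.sin(u), -1.0, 1.0)

def utility(x, u):
    # Quadratic stage cost (illustrative).
    return x ** 2 + u ** 2

def nearest(grid, value):
    # Index of the grid point closest to the given value.
    return int(np.argmin(np.abs(grid - value)))

# Tabular Q-function over (state index, control index); the paper instead uses
# a neural-network approximator, whose approximation errors the bounds address.
Q = np.zeros((len(X_GRID), len(U_GRID)))
policy = np.zeros(len(X_GRID), dtype=int)   # initial control policy (index of u for each x)

for iteration in range(200):
    # Policy evaluation: fixed-point sweeps of Q(x,u) = U(x,u) + gamma * Q(x', mu(x')).
    for _ in range(100):
        Q_new = np.empty_like(Q)
        for i, x in enumerate(X_GRID):
            for j, u in enumerate(U_GRID):
                k = nearest(X_GRID, dynamics(x, u))
                Q_new[i, j] = utility(x, u) + GAMMA * Q[k, policy[k]]
        converged = np.max(np.abs(Q_new - Q)) < 1e-8
        Q = Q_new
        if converged:
            break
    # Policy improvement: mu(x) = argmin over u of Q(x, u).
    new_policy = np.argmin(Q, axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("Greedy control at x = 0:", U_GRID[policy[nearest(X_GRID, 0.0)]])

 In the paper's setting the exact evaluation and improvement steps above are only carried out approximately, and the reported error bounds quantify how those per-iteration approximation errors propagate to the distance between the approximate and optimal Q-functions.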

 ISSN: 2168-2216

 IDS Number: EY9YI

 Unique ID: WOS:000404354600014
