Asma Al-Tamimi - Profile on Academia.edu

Papers by Asma Al-Tamimi

Research paper thumbnail of Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof

Systems, Man, and …, Jan 1, 2008

Fig. 1. Flowchart of the proposed algorithm.

Research paper thumbnail of Model-free Q-learning designs for linear discrete-time zero-sum games with application to H-infinity control

Automatica, Jan 1, 2007

In this paper, the optimal strategies for discrete-time linear system quadratic zero-sum games related to the H-infinity optimal control problem are solved in forward time without knowing the system dynamical matrices. The idea is to solve for an action-dependent value function Q(x, u, w) of the zero-sum game instead of solving for the state-dependent value function V(x), which satisfies a corresponding game algebraic Riccati equation (GARE). Since the state and action spaces are continuous, two action networks and one critic network are used and adaptively tuned in forward time using adaptive critic methods. The result is a Q-learning approximate dynamic programming (ADP) model-free approach that solves the zero-sum game forward in time. It is shown that the critic converges to the game value function and that the action networks converge to the Nash equilibrium of the game; proofs of convergence of the algorithm are given. The algorithm is thus a model-free iterative method for solving the GARE of the linear quadratic discrete-time zero-sum game. The effectiveness of the method is demonstrated with an H-infinity autopilot design for an F-16 aircraft.
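
As a rough illustration of the kind of iteration described in this abstract, the sketch below runs a batch, model-free Q-learning update for a small discrete-time linear quadratic zero-sum game: a quadratic Q-kernel H stands in for the critic network, and the saddle-point policies extracted from H stand in for the two action networks. The plant matrices A, B, E, the cost weights, and gamma are illustrative assumptions, not the paper's F-16 autopilot example, and the batch least-squares fit is only a stand-in for the adaptive tuning used in the paper.

```python
# Minimal sketch of model-free Q-learning for a discrete-time LQ zero-sum game.
# All matrices, weights, and gamma below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed plant x_{k+1} = A x_k + B u_k + E w_k (used only to generate data)
A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
E = np.array([[0.1],
              [0.0]])
Qx, R, gamma = np.eye(2), np.eye(1), 5.0     # stage cost x'Qx + u'Ru - gamma^2 w'w

n, m, p = 2, 1, 1
nz = n + m + p                               # z = [x; u; w]

def stage_cost(x, u, w):
    return float(x @ Qx @ x + u @ R @ u - gamma**2 * (w @ w))

def gains_from(H):
    """Saddle-point feedback gains u = -Ku x, w = -Kw x extracted from the Q-kernel H."""
    Hux, Hwx = H[n:n + m, :n], H[n + m:, :n]
    Hblk = H[n:, n:]                         # [[Huu, Huw], [Hwu, Hww]]
    K = np.linalg.solve(Hblk, np.vstack([Hux, Hwx]))
    return K[:m], K[m:]

H = np.zeros((nz, nz))
Ku, Kw = np.zeros((m, n)), np.zeros((p, n))  # start from zero policies

for it in range(40):
    Phi, d = [], []
    x = rng.standard_normal(n)
    for k in range(200):                     # one batch of transitions
        u = -Ku @ x + 0.5 * rng.standard_normal(m)   # exploration noise
        w = -Kw @ x + 0.5 * rng.standard_normal(p)
        x_next = A @ x + B @ u + E @ w
        z = np.concatenate([x, u, w])
        z_pol = np.concatenate([x_next, -Ku @ x_next, -Kw @ x_next])
        Phi.append(np.kron(z, z))            # quadratic basis: z'Hz = kron(z,z)'vec(H)
        d.append(stage_cost(x, u, w) + z_pol @ H @ z_pol)
        x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)
    vecH, *_ = np.linalg.lstsq(np.array(Phi), np.array(d), rcond=None)
    H_new = 0.5 * (vecH.reshape(nz, nz) + vecH.reshape(nz, nz).T)  # symmetrize
    Ku, Kw = gains_from(H_new)
    if np.linalg.norm(H_new - H) < 1e-6:
        break
    H = H_new

print("learned control gain Ku:", Ku)
print("learned disturbance gain Kw:", Kw)
```

Under these assumptions the learned kernel H should approach the Q-function associated with the GARE solution, and the gains Ku, Kw should approach the game's saddle-point feedback policies, mirroring the convergence behavior claimed in the abstract.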

Research paper thumbnail of Adaptive Critic Designs for Discrete-Time Zero-Sum Games With Application to $H_\infty$ Control

Adaptive Critic Designs for Discrete-Time Zero-Sum Games With Application to $H_\infty$ Control

Systems, Man, and …, Jan 1, 2007

In this correspondence, adaptive critic approximate dynamic programming designs are derived to solve the discrete-time zero-sum game in which the state and action spaces are continuous. This results in a forward-in-time reinforcement learning algorithm that converges to the Nash equilibrium of the corresponding zero-sum game. The results can be thought of as a way to solve the Riccati equation of the well-known discrete-time H∞ optimal control problem forward in time. Two schemes are presented, namely heuristic dynamic programming (HDP) and dual heuristic dynamic programming (DHP), to solve for the value function and the costate of the game, respectively. An H∞ autopilot design for an F-16 aircraft is presented to illustrate the results.
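
For the linear quadratic game, the value function that the HDP critic learns is quadratic, V(x) = x^T P x, and the forward-in-time update amounts to a recursion on P whose fixed point is the GARE solution. The model-based sketch below iterates that recursion for an illustrative plant (the matrices, weights, and gamma are assumptions, not the paper's F-16 model); in the paper the same fixed point is reached from data by training a critic network, and the DHP costate scheme is not reproduced here.

```python
# Minimal sketch of the value recursion underlying HDP for the zero-sum game.
# Plant and weights are illustrative assumptions, not the paper's F-16 model.
import numpy as np

A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
E = np.array([[0.1],
              [0.0]])
Q, R, gamma = np.eye(2), np.eye(1), 5.0

G = np.hstack([B, E])                 # stack control and disturbance inputs
P = np.zeros((2, 2))                  # V_0(x) = 0
for j in range(200):
    # Saddle-point step of the game algebraic Riccati equation (GARE)
    S = np.block([[R + B.T @ P @ B,               B.T @ P @ E],
                  [E.T @ P @ B,  E.T @ P @ E - gamma**2 * np.eye(1)]])
    P_next = Q + A.T @ P @ A - A.T @ P @ G @ np.linalg.solve(S, G.T @ P @ A)
    if np.linalg.norm(P_next - P) < 1e-10:
        break
    P = P_next

print("GARE solution P:\n", P)
```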

Research paper thumbnail of Model-free approximate dynamic programming schemes for linear systems

Model-free approximate dynamic programming schemes for linear systems

Neural Networks, 2007 …, Jan 1, 2007

In this paper, we present online model-free adaptive critic (AC) schemes based on approximate dynamic programming (ADP) to solve optimal control problems in both the discrete-time and continuous-time domains for linear systems with unknown dynamics. In ...
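
As one concrete instance of such a model-free adaptive critic scheme, the sketch below runs Q-learning policy iteration for the single-player discrete-time LQR case with an unknown plant (the plant matrices are used only to generate data). The matrices and weights are illustrative assumptions, and the continuous-time counterpart mentioned in the abstract is not reproduced here.

```python
# Minimal sketch: model-free Q-learning policy iteration for discrete-time LQR.
# Plant and weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])            # unknown to the learner; used only to simulate
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)
n, m = 2, 1
nz = n + m

K = np.zeros((m, n))                  # u = -K x; K = 0 is stabilizing for this plant
for it in range(15):
    Phi, rhs = [], []
    x = rng.standard_normal(n)
    for k in range(100):
        u = -K @ x + 0.3 * rng.standard_normal(m)       # exploration noise
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])  # follow current policy
        # Bellman equation for Q_K:  z'Hz - z_next'H z_next = x'Qx + u'Ru
        Phi.append(np.kron(z, z) - np.kron(z_next, z_next))
        rhs.append(float(x @ Q @ x + u @ R @ u))
        x = x_next
    vecH, *_ = np.linalg.lstsq(np.array(Phi), np.array(rhs), rcond=None)
    H = vecH.reshape(nz, nz)
    H = 0.5 * (H + H.T)                                 # symmetrize the Q-kernel
    K_new = np.linalg.solve(H[n:, n:], H[n:, :n])       # policy improvement
    if np.linalg.norm(K_new - K) < 1e-8:
        break
    K = K_new

print("learned state-feedback gain K:", K)
```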

Research paper thumbnail of Neural Network-based Optimal Control Design for Advanced Vehicular Thermal Management Systems

Neural Network-based Optimal Control Design for Advanced Vehicular Thermal Management Systems

