Nahid Mahmud | North South University

Nahid Mahmud

Papers by Nahid Mahmud

Online inverse reinforcement learning with limited data

2020 59th IEEE Conference on Decision and Control (CDC), 2020

Safe Model-Based Reinforcement Learning for Systems With Parametric Uncertainties

Frontiers in Robotics and AI, 2021

Reinforcement learning has been established over the past decade as an effective tool to find optimal control policies for dynamical systems, with recent focus on approaches that guarantee safety during the learning and/or execution phases. In general, safety guarantees are critical in reinforcement learning when the system is safety-critical and/or task restarts are not practically feasible. In optimal control theory, safety requirements are often expressed in terms of state and/or control constraints. In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn the optimal control policies under state constraints. To soften the excitation requirements, model-based reinforcement learning methods that rely on exact model knowledge have also been integrated with the barrier transformation framework. The objective of this paper is to develop a safe reinforcement learning method for deterministic nonlinear...
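
The abstract refers to a barrier transformation that maps a state constrained to an open set onto an unconstrained coordinate, so that standard model-based reinforcement learning machinery can be applied without violating the constraint. As a minimal illustration only (the symmetric scalar bound A and the specific log-form below are assumptions, not necessarily the transformation used in the paper), a state x restricted to (-A, A) can be mapped to an unconstrained variable s and back via

    s = b(x) = \log\frac{A + x}{A - x}, \qquad
    x = b^{-1}(s) = A \, \frac{e^{s} - 1}{e^{s} + 1} = A \tanh\!\left(\tfrac{s}{2}\right), \qquad x \in (-A, A), \; s \in \mathbb{R}.

Any bounded trajectory of the transformed state s corresponds to a trajectory of x that stays strictly inside (-A, A), which is how barrier-based methods turn a state-constrained optimal control problem into an unconstrained one.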

Safety aware model-based reinforcement learning for optimal control of a class of output-feedback nonlinear systems

ArXiv, 2021

The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or the systems are safety-critical. Safety requirements are often expressed in terms of state and/or control constraints. Methods such as barrier transformations and control barrier functions have been used successfully, in conjunction with model-based reinforcement learning, to learn the optimal control policy safely in systems under state constraints. However, existing barrier-based safe learning methods rely on full state feedback. In this paper, an output-feedback safe model-based reinforcement learning technique is developed that utilizes a novel dynamic state estimator to implement simultaneous learning and control for a class of safety-critical systems with partially observable state.
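
The estimator in this paper is described as novel, so the following is only a generic sketch of the output-feedback idea it addresses: a Luenberger-style observer propagates a model and corrects it with the measured output, while the controller acts on the estimate instead of the unmeasured state. All matrices, gains, and the linear double-integrator example below are illustrative assumptions, not the paper's construction.

import numpy as np

def observer_step(x_hat, u, y, A, B, C, L):
    """One observer update: model prediction plus output-injection correction."""
    y_hat = C @ x_hat                                 # predicted measurement
    return A @ x_hat + B @ u + L @ (y - y_hat)        # correct the estimate with the output error

# Illustrative plant: a discrete-time double integrator measured through position only.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.8]])                          # assumed observer gain (e.g., from pole placement)

x = np.array([1.0, 0.0])                              # true state (not directly measured)
x_hat = np.zeros(2)                                   # estimator state
for _ in range(50):
    u = np.array([-0.2 * x_hat[0] - 0.4 * x_hat[1]])  # certainty-equivalence feedback on the estimate
    y = C @ x                                         # only the output is available
    x_hat = observer_step(x_hat, u, y, A, B, C, L)
    x = A @ x + B @ u                                 # true system propagates under the same input

In the paper's setting, the learned controller would similarly act on an estimated state, with the estimator designed so that the state-constraint (safety) guarantees are preserved despite partial observability.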

A Safety Aware Model-Based Reinforcement Learning Framework for Systems with Uncertainties

2021 American Control Conference (ACC), 2021

Safety awareness is critical in reinforcement learning when task restarts are not available and/or when the system is safety-critical. Safety requirements are often expressed in terms of state and/or control constraints. In the past, model-based reinforcement learning approaches combined with barrier transformations have been used as an effective tool to learn the optimal control policy under state constraints for systems with fully known models. In this paper, a reinforcement learning technique is developed that utilizes a novel filtered concurrent learning method to realize simultaneous learning and control in the presence of model uncertainties for safety-critical systems.
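
The filtered concurrent learning method is the paper's contribution; the sketch below only illustrates the standard concurrent-learning idea it builds on, in which recorded data are replayed so the parameter estimate of linearly parameterized dynamics xdot = Y(x, u) theta converges without persistent excitation. The toy system, the gain, and the direct use of xdot samples (which the filtered variant avoids) are assumptions for illustration.

import numpy as np

def concurrent_learning_step(theta_hat, history, gamma=0.05):
    """Gradient step on the prediction error summed over a recorded history stack.

    history holds (Y_i, xdot_i) pairs with xdot_i = Y_i @ theta for the unknown theta.
    """
    grad = sum(Y.T @ (xdot - Y @ theta_hat) for Y, xdot in history)
    return theta_hat + gamma * grad

# Toy identification problem: recover theta = [a, b] in xdot = a*x + b*u.
rng = np.random.default_rng(0)
theta_true = np.array([-1.5, 2.0])
history = []
for _ in range(10):
    x, u = rng.uniform(-1.0, 1.0, size=2)
    Y = np.array([[x, u]])                  # regressor row recorded at this sample
    history.append((Y, Y @ theta_true))     # recorded (noiseless) state-derivative "measurement"

theta_hat = np.zeros(2)
for _ in range(500):
    theta_hat = concurrent_learning_step(theta_hat, history)
# theta_hat approaches theta_true as long as the stacked regressors span the parameter space,
# which is the finite-excitation condition that replaces persistent excitation.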

Model-based inverse reinforcement learning for deterministic systems
