Create Custom Reinforcement Learning Agents - MATLAB & Simulink

To implement your own custom reinforcement learning algorithms, you can create a custom agent by subclassing an abstract custom agent class. You can then train and simulate this agent in MATLAB® and Simulink® environments. For more information about creating classes in MATLAB, see User-Defined Classes.

Create Template Class

To define your custom agent, first create a class that is a subclass of the rl.agent.CustomAgent class. As an example, this topic describes the custom LQR agent trained in Create and Train Custom LQR Agent. As a starting point for your own agent, you can open and modify this custom agent class. To download the example files in a local folder and open the main example live script, at the MATLAB command line, type the following code.

openExample('rl/TrainCustomLQRAgentExample')

Close the TrainCustomLQRAgentExample.mlx file and open the LQRCustomAgent.m class file.

After saving the class to your own working folder, you can remove the example files and the local folder into which they were downloaded.

The class defined in LQRCustomAgent.m has the following class definition, which indicates the agent class name and the associated abstract agent.

classdef LQRCustomAgent < rl.agent.CustomAgent

To define your agent, you must specify the agent properties, a constructor function, a critic, an actor, or both (if your learning algorithm uses them), the required agent functions, and, optionally, additional agent functions. The sections that follow describe each of these elements; a minimal structural sketch of such a class is shown below.
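The following sketch shows only the required structure. The class name RandomCustomAgent and its trivial policy (zero actions plus noise, and no learning) are illustrative assumptions, not part of the shipped example.

classdef RandomCustomAgent < rl.agent.CustomAgent
    % Hypothetical minimal custom agent: returns noisy zero actions for a
    % continuous action space and performs no learning. Shown only to
    % illustrate the required parts of a custom agent class.

    methods
        function obj = RandomCustomAgent(obsInfo,actInfo)
            % Call the abstract class constructor and store the specifications.
            obj = obj@rl.agent.CustomAgent();
            obj.ObservationInfo = obsInfo;
            obj.ActionInfo = actInfo;
        end
    end

    methods (Access = protected)
        function action = getActionImpl(obj,~)
            % Greedy policy placeholder: a zero action of the correct size.
            action = zeros(obj.ActionInfo.Dimension);
        end
        function action = getActionWithExplorationImpl(obj,Observation)
            % Exploration: add Gaussian noise to the greedy action.
            action = getAction(obj,Observation) + 0.1*randn(obj.ActionInfo.Dimension);
        end
        function action = learnImpl(obj,exp)
            % No learning; return an exploratory action for the next observation.
            action = getActionWithExploration(obj,exp{4});
        end
    end
end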

Agent Properties

In the properties section of the class file, specify any parameters necessary for creating and training the agent, such as discount factors, exploration parameters, experience buffers, gain matrices, or actor and critic objects.

For more information on potential agent properties, see the option objects for the built-in Reinforcement Learning Toolbox™ agents.

The rl.agent.CustomAgent class already includes properties for the agent sample time (SampleTime) and the action and observation specifications (ActionInfo and ObservationInfo, respectively).
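For example, a constructor for an agent intended for use in Simulink environments typically also sets the inherited SampleTime property, as in this sketch (the 0.1-second value is an assumption, not part of the LQR example).

% Inside the constructor (sketch): specify the agent sample time, which is
% used when the agent runs in a Simulink environment.
obj.SampleTime = 0.1;   % assumed sample time in seconds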

The custom LQR agent defines the following agent properties.

properties
% Q (state penalty matrix)
Q

% R (control penalty matrix)
R

% Feedback gain
K

% Discount factor
Gamma = 0.95

% Critic
Critic

% Buffer for K
KBuffer
% Number of updates for K
KUpdate = 1

% Number for estimator update
EstimateNum = 10

end

properties (Access = private)
% Counter for buffered experiences
Counter = 1
% Buffers for the batch critic update
YBuffer
HBuffer
end

Constructor Function

To create your custom agent, you must define a constructor function that calls the constructor of the abstract base class, defines the action and observation specifications, sets the agent properties (including, for agents used in Simulink environments, the sample time), and creates any actor and critic objects that the agent uses.

For example, the LQRCustomAgent constructor defines continuous action and observation spaces and creates a critic. The createCritic function is an optional helper function that defines the critic.

function obj = LQRCustomAgent(Q,R,InitialK)
% Check the number of input arguments
narginchk(3,3);

% Call the abstract class constructor
obj = obj@rl.agent.CustomAgent();

% Set the Q and R matrices
obj.Q = Q;
obj.R = R;

% Define the observation and action spaces
obj.ObservationInfo = rlNumericSpec([size(Q,1),1]);
obj.ActionInfo = rlNumericSpec([size(R,1),1]);

% Create the critic
obj.Critic = createCritic(obj);

% Initialize the gain matrix
obj.K = InitialK;

% Initialize the experience buffers
obj.YBuffer = zeros(obj.EstimateNum,1);
num = size(Q,1) + size(R,1);
obj.HBuffer = zeros(obj.EstimateNum,0.5*num*(num+1));
obj.KBuffer = cell(1,1000);
obj.KBuffer{1} = obj.K;

end

Actor and Critic

If your learning algorithm uses a critic to estimate the long-term reward, an actor for selecting an action, or both, you must add these as agent properties. You must then create these objects when you create your agent; that is, in the constructor function. For more information on creating actors and critics, see Create Policies and Value Functions.
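For example, an agent whose algorithm also uses an actor might build a simple deterministic actor in a helper function called from the constructor. This is a sketch only; the LQR agent has no actor, and the createActor name and single-layer network are assumptions.

function actor = createActor(obj)
    % Sketch: build a linear deterministic actor from the agent specifications
    % (assumes a column-vector observation channel).
    obsInfo = getObservationInfo(obj);
    actInfo = getActionInfo(obj);
    net = [featureInputLayer(obsInfo.Dimension(1))
           fullyConnectedLayer(actInfo.Dimension(1))];
    actor = rlContinuousDeterministicActor(net,obsInfo,actInfo);
end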

For example, the custom LQR agent uses a critic, stored in its Critic property, and no actor. The critic creation is implemented in the createCritic helper function, which is called from the LQRCustomAgent constructor.

function critic = createCritic(obj)
nQ = size(obj.Q,1);
nR = size(obj.R,1);
n = nQ + nR;
w0 = 0.1*ones(0.5*(n+1)*n,1);
critic = rlQValueFunction({@(x,u) computeQuadraticBasis(x,u,n),w0}, ...
    getObservationInfo(obj),getActionInfo(obj));
critic.Options.GradientThreshold = 1;
end

In this case, the critic is an rlQValueFunction object. To create this object, you must specify a handle to a custom basis function, here the computeQuadraticBasis function. For more information, see Create and Train Custom LQR Agent.
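The computeQuadraticBasis function is provided with the example files. A possible implementation, consistent with the 0.5*(n+1)*n-element weight vector created in createCritic, is sketched below; the shipped version may differ in detail.

function B = computeQuadraticBasis(x,u,n)
    % Possible quadratic basis (sketch): the upper-triangular elements of
    % z*z', where z = [x;u], stacked into a column vector (assumes numeric
    % column-vector inputs).
    z = cat(1,x,u);
    B = zeros(0.5*n*(n+1),1);
    idx = 1;
    for r = 1:n
        for c = r:n
            B(idx) = z(r)*z(c);
            idx = idx + 1;
        end
    end
end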

Required Functions

To create a custom reinforcement learning agent, you must define the following implementation functions. To call these functions in your own code, use the wrapper methods from the abstract base class. For example, to call getActionImpl, use getAction. The wrapper methods have the same input and output arguments as the implementation methods.

Function                         Description
getActionImpl                    Selects an action by evaluating the agent policy for a given observation
getActionWithExplorationImpl     Selects an action using the exploration model of the agent
learnImpl                        Learns from the current experiences and returns an action with exploration

Within your implementation functions, to evaluate your actor and critic, you can use the getValue, getAction, and getMaxQValue functions.

In all of these cases, if your actor or critic uses a recurrent neural network, these functions can also return the current values of the network state along with the corresponding network output.
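For example, a hypothetical agent whose critic is a recurrent network over a discrete action space might keep the critic state up to date inside its implementation functions, as in this sketch (the recurrent-network critic is an assumption, not a feature of the LQR agent).

% Sketch: evaluate a recurrent-network critic and store its updated state.
[maxQ,maxActionIndex,state] = getMaxQValue(obj.Critic,Observation);
obj.Critic = setState(obj.Critic,state);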

getActionImpl Function

The getActionImpl function evaluates the policy of your agent and selects an action. This function must have the following signature, where obj is the agent object, Observation is the current observation, and action is the selected action.

function action = getActionImpl(obj,Observation)

For the custom LQR agent, you select an action by applying the u = -Kx control law.

function action = getActionImpl(obj,Observation)
% Given the current state of the system, return an action
action = -obj.K*Observation{:};
end

getActionWithExplorationImpl Function

The getActionWithExplorationImpl function selects an action using the exploration model of your agent. Using this function, you can implement algorithms such as epsilon-greedy exploration. This function must have the following signature, where obj is the agent object, Observation is the current observation, and action is the selected action.

function action = getActionWithExplorationImpl(obj,Observation)

For the custom LQR agent, the getActionWithExplorationImpl function adds random white noise to an action selected using the current agent policy.

function action = getActionWithExplorationImpl(obj,Observation)
% Given the current observation, select an action
action = getAction(obj,Observation);

% Add random noise to the action
num = size(obj.R,1);
action = action + 0.1*randn(num,1);

end
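As a further sketch, an agent with a discrete action space could use this function to implement epsilon-greedy exploration. The Epsilon property and the rlFiniteSetSpec action specification are assumptions and are not part of the LQR agent.

function action = getActionWithExplorationImpl(obj,Observation)
    % Epsilon-greedy exploration (sketch): with probability Epsilon, replace
    % the greedy action with one drawn uniformly from the discrete action set
    % (assumes scalar numeric elements in an rlFiniteSetSpec).
    action = getAction(obj,Observation);
    if rand < obj.Epsilon
        actions = obj.ActionInfo.Elements;
        action = actions(randi(numel(actions)));
    end
end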

learnImpl Function

The learnImpl function defines how the agent learns from the current experience. This function implements the custom learning algorithm of your agent by updating the policy parameters and selecting an action with exploration. This function must have the following signature, where obj is the agent object, exp is the current agent experience, and action is the selected action.

function action = learnImpl(obj,exp)

The agent experience is the cell array exp = {state,action,reward,nextstate,isdone}. The observation and action entries are themselves cell arrays, with one cell per channel, which is why the learnImpl function below indexes them as exp{1}{1} and exp{2}{1}.

For the custom LQR agent, the critic parameters are updated in a batch once every N steps: each step appends a target y and a basis-difference row H to the buffers, and after N steps the critic weights are obtained from the least-squares solution theta = (H'*H)\(H'*y).

function action = learnImpl(obj,exp)
% Parse the experience input
x = exp{1}{1};
u = exp{2}{1};
dx = exp{4}{1};
y = (x'*obj.Q*x + u'*obj.R*u);
num = size(obj.Q,1) + size(obj.R,1);

% Wait N steps before updating the critic parameters
N = obj.EstimateNum;
h1 = computeQuadraticBasis(x,u,num);
h2 = computeQuadraticBasis(dx,-obj.K*dx,num);
H = h1 - obj.Gamma* h2;
if obj.Counter<=N
    obj.YBuffer(obj.Counter) = y;
    obj.HBuffer(obj.Counter,:) = H;
    obj.Counter = obj.Counter + 1;
else
    % Update the critic parameters based on the batch of
    % experiences
    H_buf = obj.HBuffer;
    y_buf = obj.YBuffer;
    theta = (H_buf'*H_buf)\H_buf'*y_buf;
    obj.Critic = setLearnableParameters(obj.Critic,{theta});
    
    % Derive a new gain matrix based on the new critic parameters
    obj.K = getNewK(obj);
    
    % Reset the experience buffers
    obj.Counter = 1;
    obj.YBuffer = zeros(N,1);
    obj.HBuffer = zeros(N,0.5*num*(num+1));    
    obj.KUpdate = obj.KUpdate + 1;
    obj.KBuffer{obj.KUpdate} = obj.K;
end

% Find and return an action with exploration
action = getActionWithExploration(obj,exp{4});

end

Optional Functions

Optionally, you can define how your agent is reset at the start of training by specifying a resetImpl function with the following function signature, where obj is the agent object. Using this function, you can set the agent into a known or random condition before training.

function resetImpl(obj)
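For the custom LQR agent, a resetImpl function is not strictly required, but one could reinitialize the counters and buffers that the constructor sets up, as in this sketch (not part of the shipped example file).

function resetImpl(obj)
    % Sketch: return the agent to a known initial condition before training.
    obj.Counter = 1;
    obj.YBuffer = zeros(obj.EstimateNum,1);
    num = size(obj.Q,1) + size(obj.R,1);
    obj.HBuffer = zeros(obj.EstimateNum,0.5*num*(num+1));
end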

Also, you can define any other helper functions in your custom agent class as required. For example, the custom LQR agent defines a createCritic function for creating the critic and a getNewK function that derives the feedback gain matrix from the trained critic parameters.
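The getNewK function itself is defined in the example files. A sketch of the underlying idea, consistent with the quadratic basis ordering used above, is shown below; the shipped implementation may differ in detail.

function K = getNewK(obj)
    % Sketch: the critic weights are the upper-triangular elements of the
    % symmetric matrix H in Q(x,u) = [x;u]'*H*[x;u]. Minimizing over u
    % gives u = -(Huu\Hux)*x, so the new gain is K = Huu\Hux.
    nQ = size(obj.Q,1);
    nR = size(obj.R,1);
    n = nQ + nR;
    w = getLearnableParameters(obj.Critic);
    w = w{1};
    if isa(w,"dlarray")
        w = extractdata(w);   % guard: convert to numeric if needed
    end
    % Rebuild the symmetric matrix from its upper-triangular elements.
    Hhat = zeros(n);
    idx = 1;
    for r = 1:n
        for c = r:n
            Hhat(r,c) = w(idx);
            idx = idx + 1;
        end
    end
    H = (Hhat + Hhat')/2;
    Hux = H(nQ+1:end,1:nQ);
    Huu = H(nQ+1:end,nQ+1:end);
    K = Huu\Hux;
end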

Create Custom Agent

After you define your custom agent class, create an instance of it in the MATLAB workspace. For example, to create the custom LQR agent, define the Q, R, and InitialK values and call the constructor function. (Here, A and B are the state-space matrices of the system being controlled, defined in the example.)

Q = [10,3,1;3,5,4;1,4,9];
R = 0.5*eye(3);
K0 = place(A,B,[0.4,0.8,0.5]);
agent = LQRCustomAgent(Q,R,K0);

After creating the custom agent object, you can train and simulate it in an environment. For an example that trains the custom LQR agent, see Create and Train Custom LQR Agent.
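For reference, training a custom agent follows the same workflow as the built-in agents. The following sketch assumes env is an environment object whose observation and action specifications match the agent.

% Training sketch (env is an assumed, compatible environment object).
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=100, ...
    MaxStepsPerEpisode=50, ...
    Plots="training-progress");
trainingStats = train(agent,env,trainOpts);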
